DaedTech

Stories about Software

By

A Developer Journal – Genius or Neurosis?

Many moons ago, in my first role as a developer, I had very little real work to do for the first month or so on the job, so I occupied myself with poking around the company intranet, jotting down acronyms, figuring out who was responsible for what, and documenting all of this in spreadsheets and Word documents. After a while, I set up a MediaWiki installation and started making actual wiki pages out of all of these thoughts. Some time (and some employers) after that, this practice caught on a bit, and I found myself in a position where others started using the wikis and at least getting some value out of them.

For the last couple of years now, I’ve also been blogging, and before that I was in a grad program where I wrote term papers, research papers, etc. Both of these activities are a bit more focused than knowledge dumps on a wiki, but they are also forms of chronicling my experiences. So, long story short, for the entirety of my career, I’ve been heavily documenting pretty much everything I do.

When I moved into my house, I found a bunch of memorabilia and personal keepsakes stuffed in the attic. In an attempt to figure out who they belonged to, I read through some journals that were there and found that they consisted of incredibly mundane chronicling of days – what the weather was like, time awake and asleep, grocery trips, etc. It is my hope that my own chronicling of my developer life is not quite as banal as this, but even if it is, c’est la vie I suppose. And who knows, perhaps the author of those journals needed this information for some purpose I couldn’t discern (tracking a medical condition, staying organized and focused, etc).

In honor of this mystery person in my attic and my own natural tendency over the course of time toward more and more documentation, I’ve decided to start my own “developer” journal and I’ve logged my first entries this week. The journal is just a Word document at the moment, so I’m getting back to basics from my previous ascent through Excel, Mediawiki and WordPress, but I think this is good. All of those recording forms have a tendency toward hierarchical or formal organization that I don’t really want here. This is like me jotting notes during meetings in a notebook, but with less “action item: give Bill the TPS reports” and more “I just spent an hour trying to figure out why my CSS file was triggering an error and it turned out to be unrelated problem X in reality”.

Here’s what I do so far. I spend a sentence or two describing what I worked on during various time windows throughout the day and whenever I switch tasks. Given that I do work where clients are billed for my time, it makes a lot of sense to document that for later when I’m filling out more formal accountings of my work (though I mainly use Grindstone for this because of its precision and UI, it’s also kind of nice to have it “backed up” in narrative form for context).

In addition to that bit of context, I make notes any time someone helps me with something, introduces me to something new, etc. After all, there’s nothing worse than when you ask someone how to do X, get distracted for a few minutes, go to do X, and realize you need to ask again. I try to avoid looking like an idiot whenever possible, even if it isn’t always easy. So assists, notes, code review suggestions, etc go in here too.

And finally, I have two other things that I do. In green italics, I insert “lessons learned”. This is something like “Lesson Learned: if you compile a WPF project in VS 2010 with a XAML file focused in the XAML editor, you’ll sometimes get spurious compiler errors.” So, this is a more crystallized form of notes in that it focuses on things that I’ll probably want to remember later. The other thing is concerns/observations/suggestions, and that gets orange italics. These are things like “I see a lot of duplication here in this code and that’s a code smell, but I don’t yet have enough context to speak authoritatively.” The orange will function as a way for me to keep track of things that I think could be improved (previously, I’ve always kept a spreadsheet somewhere called “suggested refactorings” or something like that). I color code these things because I feel like at some point later I may want to assemble them into a list.

So here’s my thinking with this. I like to write and document, as should be obvious from my blogging and other documenting activities. But there’s a clear difference between putting together nice, composed presentations/posts/essays and simply recording every thought that makes its way into your brain. The developer journal is a way to get the best of both worlds. I can jot down stuff that I’m not sure about but think might be important or that I might want to remember later, without boring people on a wiki/blog/etc. if it turns out not to matter. I guess you could say I’m keeping the journal so that I can remember more of what I think while also applying a better filter.

Does anyone else do anything like this? If not (or if so), does this seem like a good idea, or does this just seem neurotic and weird? Would you do something like this? Please feel free to weigh in below in the comments.


Constructor Overloads: Know When to Say When

Paralysis By Options

Do you ever find yourself in a situation where some API or another requires you to instantiate an object? (If you’re reading this blog, the answer is probably “yes”.) What do you usually do at this point? Instantiate it, compile, and make sure you’re good before poking around to see what your new object has to offer, usually in the form of auto-complete/intellisense? I think that’s what most would do. Word documents describing the API and other such things are all well and good as a backup plan, but let’s get serious – you want to play with the object and read the instructions only if you can’t figure out what to do. And the last thing you want to do is go reading the code of that class or, worse still, hunt down the guy that wrote it.

But, what about those times that the instantiation gets a little sidetracked? You go to instantiate the object and it’s like wandering into a Baskin Robbins knowing only that you vaguely feel like ice cream. So many flavors to choose from, but which is the right one?

In the picture above, I’ve decided I want an Aquarium object, and Intellisense informs me that there are no less than 11 ways that I can make this happen. That’s right, 11. My immediate, gut reaction to this information is to go off to implement the “AdoptADog” method instead and put this nonsense off until later.

But Aren’t More Choices Better?

With constructors, no, not really. I’ve talked before about the problem with bloated constructors and my opinion that a constructor should do nothing but ensure that the object initializes with class-level invariants established. With that in mind, either some of these overloads are doing more than is necessary or else some of them fail to meet this basic criterion. The former is pointless speculative coding, and the latter means that your objects can be instantiated in invalid states. Either one of these is a problem.

I believe there is a tendency, especially if you don’t practice TDD or even write unit tests at all, to go off on tangents about how developers may want to instantiate objects. Maybe developer X will want to instantiate an aquarium with all defaults whereas developer Y will want to specify how many gallons it holds and how many fish are in it. Maybe developer Z just wants to initialize with the kind of rocks that go in the bottom or the kind of light that shines on top. Maybe everyone wants to initialize specifying salt or fresh water. Let’s think of every combination of things anyone may want to do to this object and offer them all up as constructor overloads, right?

But you know what? That’s what the public API is for with accessors and mutators. Everyone can do it that way. Save the constructor for things without which the aquarium makes no sense (e.g. capacity) and let everyone call a property setter or a mutator for the rest. C# even has some syntactic sugar for just this occasion.
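To make this concrete, here’s a minimal sketch of what I mean. The Aquarium class below is hypothetical (it isn’t the real class from the screenshot): the constructor demands only what establishes the invariants, and everything else is a settable property that clients can populate inline with C#’s object initializer syntax.

```csharp
using System;

// Hypothetical Aquarium: the constructor takes only what the object
// can't sensibly exist without (capacity); everything else is optional.
public class Aquarium
{
    public int Gallons { get; private set; }  // invariant: set once, at construction
    public int FishCount { get; set; }
    public string RockType { get; set; }

    public Aquarium(int gallons)
    {
        if (gallons <= 0)
            throw new ArgumentException("An aquarium needs a positive capacity.");
        Gallons = gallons;
    }
}
```

A client who cares about fish and rocks writes `var tank = new Aquarium(50) { FishCount = 12, RockType = "gravel" };`, while the client who just wants a dirt receptacle writes `new Aquarium(50)` and goes on his merry way.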

If you add in a bunch of overloads, you may think that you’re being helpful, but you’re really just muddying the waters and paralyzing your clients with options. I may want to instantiate an aquarium and use it to hold a bunch of dirt from my back yard — so why am I being offered all of these options about fish and water and aquarium plants and plastic divers? I don’t care about any of that. But I’ll hesitate to omit it because, for all I know, I should instantiate the object with those things. I mean, with all of those overloads, some are probably vestigial or at least less frequently used. I don’t want to use something that might be deprecated or untested, and nobody wants to maintain a bunch of methods that may never even be used.

In the end, what I’ll wind up doing is digging out the Word document that describes this thing or going to the developer who wrote it and asking which one to use. And that sucks. If you offer me only one option — the minimal constructor that establishes the invariants and forces any critical dependencies on the client — I’ll use that option and go on my merry way. There will be nothing to think about and certainly nothing to read Word documents or send emails about. And that is the essence of providing usable code and good abstractions.

(And incidentally, since Visual Studio 2010, C#’s optional/default parameters have really taken away any good excuse for a lot of overloads.)
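As a sketch (again with a made-up Aquarium rather than the actual class), one constructor with optional parameters covers what would otherwise be a pile of overloads:

```csharp
// One constructor with optional/default parameters (available since C# 4.0)
// in place of nearly a dozen overloads.
public class Aquarium
{
    public int Gallons { get; private set; }
    public bool IsSaltwater { get; private set; }
    public int FishCount { get; private set; }

    public Aquarium(int gallons, bool isSaltwater = false, int fishCount = 0)
    {
        Gallons = gallons;
        IsSaltwater = isSaltwater;
        FishCount = fishCount;
    }
}
```

Callers then write `new Aquarium(50)` or `new Aquarium(50, fishCount: 12)`, using named arguments to skip over anything they don’t care about.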


How Rational Clear Case Stole my Innocence and Nearly Ruined my Life

A Simpler Time

When I was in college in the late 90’s, we didn’t use source control. The concept existed but wasn’t pervasive in my environment. (Or, if it was, I have no recollection of using it.) And why use it? We were working on projects lasting less than a semester (usually far less) and in teams of three or fewer for the most part. We also generally telnet-ed into servers and stored most of our source there, since most students had Windows machines and most of our assignments required *NIX, meaning that the “backup/persistence” component of source control was already taken care of for us. We were young and carefree.

After I left college and started working, the need for source control was explained to me and I was introduced to Visual Source Safe, a product so bad that even the company that made it didn’t use it for source control. Still, it was better than no source control. If I messed things up badly I could always go back to a sane starting point and life would be good. This was no perfect solution, but I began to see the benefits of backed up work and concurrent edit management in a way that I never had in school. I was moving up in the world and making some mistakes, but the good, honest kind that spurred maturity and growth.

As the 2000’s and my development projects went on, I was exposed to CVS and then SVN. This was a golden age of tooling for me. I could merge files and create branches. Rolling back to previous versions of software was possible as was switching to some speculative sandbox. It was easier to lead projects and/or scale them up to more developers. It even became possible to ‘rescue’ and improve old legacy projects by adding this kind of tooling. The source control schemes didn’t just become part of my subconscious — they were a pleasure to use. The sky was the limit and I felt that my potential was boundless. People expected big things from me and I was not letting them down as my career progressed from junior developer to seasoned programmer to leader.

The Gathering Storm

But storm clouds were gathering on the horizon, even if I didn’t realize it yet. In just a matter of a few short years following my promising career start, everything would change. My coding style grew sloppy and haphazard. The experimentation and tinkering that had previously defined me went by the wayside. My view on programming and software in general went from enthusiastic to apathetic to downright nihilistic. I became lazy and negatively superstitious. I had fallen in with the wrong crowd; I had started using Rational Clear Case.

It started harmlessly enough. I was introduced to the tool on a long term project and I remember thinking “wow, cool – this separates backup from change merging”, and that was the sweet side it showed to sucker me in. But, I didn’t see it that way at the time with my unbridled optimism and sunny outlook on life. The first warning sign should have been how incredibly complicated it was to set up and how difficult it was to use, but I just wanted to fit in and not seem stupid and lame, so I ignored it.

The first thing to suffer was good coding practice. With Rational Clear Case, it isn’t cool to do things like add files to source control or rename existing files. That’s for nerds and squares. With Clear Case, you add classes to existing files and keep filenames long after they make sense. If you don’t, it doesn’t go well for you. So, my class sizes tended to grow and my names tended to rot as the code changed. Correctness, brevity and the single responsibility principle weren’t worth looking dumb in front of Clear Case, and besides, if you cross it, it gets really angry and stops working for hours or even days. Even formatting and boy-scout changes weren’t worth it because of the extreme verbosity in the version tree and spurious merge conflicts that might result. Better to touch as little code as humanly possible.

The next thing to go was my interest in playing with code and experimenting. VPN connection to Clear Case was impossibly slow, so the days of logging in from home at oddball hours to implement a solution that popped into my head were over. Clear Case would also get extremely angry if I tried to sandbox a solution in another directory, using its View.dat file to create all kinds of havoc. But it was fine, I told myself — working after hours and learning through experimentation aren’t things that cool kids do.

And, where I previously thought that the world was a basically good place, filled with tools that work dependably and helpfully, Clear Case soon showed me how wrong I was. It exposed me to a world where checkins randomly failed and even crashed the machine – a world where something called the ALBD License server having a problem could make it so that you didn’t have to (in fact couldn’t) write code for a day or two. My eyes were opened to a world where nothing can be trusted and no one even knows what’s real and what isn’t. I came to question the very purpose of doing work in the first place, since files sometimes just disappear. The only thing that made sense was to do the bare minimum to get by, and maybe not even that. Clear Case never tried to drown me in stupid, idealistic fantasies like source control that works and tools that don’t radically hamper your productivity — it was the only thing in my life that told me the truth.

Or, so I thought.

Redemption

As it turned out, I had strayed — been led astray — from the path of good software development, and my closest friends and family finally staged an intervention with me to show me the kind of programmer that Clear Case was turning me into. I denied and fought against it at first, but realized that they were right. I was on the path to being a Net Negative Producing Programmer (NNPP) or washing out of the industry altogether.

At first I thought that I’d have a gradual break with Clear Case, but I soon realized that would be impossible. I had to cut all ties with it and begin a new life, re-focused on my developer dreams and being a productive member of that community. While it seemed hard at first, I’ve never looked back. And while I’ll never regain my pre-Clear-Case innocence and youthful exuberance, my life is back on track. I am once again productive, optimistic, and happy.

What’s Wrong With You, Erik?

Okay, so that may have been a little After School Special-ish. And, nobody actually had an intervention with me; I actually just worked on projects where I used different source control. And I never actually stopped working at night, factoring my classes, giving good names, etc. So why did I write all of this?

Because Clear Case made me want to stop doing all of those things. It made many, many good practices painful to do while creating a path of least resistance right through a number of terrible practices. It encouraged sloppiness and laziness while discouraging productivity and creativity, and that’s a problem.

This blog isn’t about product reviews and gripes of this nature, so it isn’t specifically my intention to dump on Clear Case (though if ever a tool deserved it…). Rather, the point here is that it’s important to evaluate the tooling that you’re using to do your work. Don’t just get used to whatever is thrown at you – constantly evaluate it to see if it meets your needs and continues to do so as time goes on. There is something to be said for familiarity with and mastery of a tool making you productive, but if you’ve mastered a crappy tool, you’re probably at a local maximum and you need to go exploring outside of your comfort zone.

Subtle Signs That You’re Using Bad Tooling

I’m going to phrase this in the negative, since I think most people have a pretty reasonable concept of good tooling. That is, if something makes you much more productive/happy/etc, you’re going to notice, so this is really about the difference between adequate tooling and bad tooling. Most people recognize bad tooling when it simply doesn’t work, crashes a lot, etc, but many will struggle to recognize it when it kinda works, you know, most of the time, sorta. So here are subtle signs that your tool is bad.

  1. You design process kludges around it (e.g. well, our IDE won’t color code methods, so we name them all Methodxxxxx()).
  2. You personify/anthropomorphize it in a negative way (e.g. Clear Case doesn’t like it when you try to rename a file).
  3. You’ll cut out ten minutes early at the end of the day specifically to avoid having to use it.
  4. You google for help on it and *crickets*.
  5. Developers on your team re-implement components of it rather than using it.
  6. You make excuses when explaining your usage of it.
  7. Bringing a new user up to speed on your process with it takes a long time and causes them to look at you disbelievingly or sadly.
  8. People don’t use it unless forced or people attempt to use other tools instead.
  9. You google the product and find more angry rants or posts like this one than helpful sites and blog how-tos.
  10. People on your team spend time solving the tool instead of using the tool to solve business problems.
  11. You think about it a lot when you’re using it.

So When is Tooling Good?

Apart from the obvious shouting for joy when using it and whatnot, there is a subtlety to this as well, but I think it’s mainly tied to item (11). A good tool is one that you don’t think about when using. For instance, I love Notepad++. I use it daily and quite probably hourly for a wide variety of tasks since it is my go-to text editor. But the only time I ever really think about it is when I’m on a machine where it isn’t installed, and I get stuck with the regular Notepad when opening a text file. Notepad++ and its use are so second nature to me that I hardly ever think about it (with the obvious exception of when I might want to learn more about it or explore features).

If you take this advice to heart and want to constantly reassess your tooling, I’d say the single best measure is to see how frequently or infrequently you notice the tool. All of the other “symptom of a bad tool” bullet points are certainly relevant, but most of them are really fruit of the tree of (11). If you’re creating kludges for, making excuses about, googling, or personifying a tool, the common thread is that you’re thinking about it. If, on the other hand, the tool kind of fades into the background of your daily life and allows (and helps) you to focus on other problems, it is helping you, and it is a good tool.

So don’t let Clear Case or anything else steal your innocence or ruin your life; don’t tolerate a tool that constantly forces you to think about it as you battle it. Life is too short.


How To Keep Your Best Programmers

Getting Philosophical

Given that I’ve just changed jobs, it isn’t entirely surprising that I’ve had a lot of conversations recently about why I decided to do so. Generally when someone leaves a job, coworkers, managers, HR personnel, friends, and family are all interested in knowing why. Personally, I tend to give unsatisfying answers to this question, such as, “I wanted a better opportunity for career advancement,” or, “I just thought it was time for a change.” This is the corporate equivalent of “it’s not you–it’s me.” When I give this sort of answer, I’m not being diplomatic or evasive. I give the answer because I don’t really know, exactly.

Don’t get me wrong. There are always organizational gripes or annoyances anywhere you go (or depart from), and it’s always possible that someone will come along and say, “How would you like to make twice as much money doing the coolest work imaginable while working from home in your pajamas?” or that your current employer will say, “We’re going to halve your pay, force you to do horrible grunt work, and send you to Antarctica to do it.” It is certainly possible that I could have a specific reason for leaving, but that seems more the exception than the rule.

As a general practice, I like to examine my own motivations for things that I do. I think this is a good check to make sure that I’m being rational rather than impulsive or childish. So I applied this practice to my decision to move on and the result is the following post. Please note that this is a foreword explaining what got me thinking along these lines, and I generalized my opinion on my situation to the larger pool of software developers. That is, I’m not intending to say, “I’m the best and here’s how someone can keep me.” I consider my own programming talent level irrelevant to the post and prefer to think of myself as a competent and productive developer, distinguished by enthusiasm for learning and pride in my work. I don’t view myself as a “rock star,” and I generally view such prima donna self-evaluation to be counterproductive and silly.

What Others Think

Some of my favorite blog posts that I’ve read in the last several years focus on the subject of developer turnover, and I think that these provide an excellent backdrop for this subject. The oldest one that I’ll list, by Bruce Webster, is called “The Wetware Crisis: the Dead Sea Effect,” and it coins an excellent term for a phenomenon with which we’re all probably vaguely aware on either a conscious or subconscious level. The “Dead Sea Effect” is a description of some organizations’ tendency to be so focused on retention that they inadvertently retain mediocre talent while driving better talent away:

…what happens is that the more talented and effective IT engineers are the ones most likely to leave — to evaporate, if you will. They are the ones least likely to put up with the frequent stupidities and workplace problems that plague large organizations; they are also the ones most likely to have other opportunities that they can readily move to.

What tends to remain behind is the ‘residue’ — the least talented and effective IT engineers. They tend to be grateful they have a job and make fewer demands on management; even if they find the workplace unpleasant, they are the least likely to be able to find a job elsewhere. They tend to entrench themselves, becoming maintenance experts on critical systems, assuming responsibilities that no one else wants so that the organization can’t afford to let them go.

Bruce describes a paradigm in which the reason for talented people leaving will frequently be that they are tired of less talented people in positions of relative (and by default) authority telling them to do things–things that are “frequent stupidities.” There is an actual inversion of the pecking order found in meritocracies, and this leads to a dysfunctional situation that the talented either avoid or else look to escape as quickly as possible.

Bruce’s post was largely an organizational perspective; he talked about why a lot of organizations wind up with an entrenched group of mediocre senior developers, principals, and managers without touching much on the motivation for the talented to leave beyond the “frequent stupidities” comment. Alex Papadimoulis from the Daily WTF elaborates on the motivation of the talented to leave:

In virtually every job, there is a peak in the overall value (the ratio of productivity to cost) that an employee brings to his company. I call this the Value Apex.

On the first minute of the first day, an employee’s value is effectively zero. As that employee becomes acquainted with his new environment and begins to apply his skills and past experiences, his value quickly grows. This growth continues exponentially while the employee masters the business domain and shares his ideas with coworkers and management.

However, once an employee shares all of his external knowledge, learns all that there is to know about the business, and applies all of his past experiences, the growth stops. That employee, in that particular job, has become all that he can be. He has reached the value apex.

If that employee continues to work in the same job, his value will start to decline. What was once “fresh new ideas that we can’t implement today” become “the same old boring suggestions that we’re never going to do”. Prior solutions to similar problems are greeted with “yeah, we worked on that project, too” or simply dismissed as “that was five years ago, and we’ve all heard the story.” This leads towards a loss of self actualization which ends up chipping away at motivation.

Skilled developers understand this. Crossing the value apex often triggers an innate “probably time for me to move on” feeling and, after a while, leads towards inevitable resentment and an overall dislike of the job. Nothing – not even a team of on-site masseuses – can assuage this loss.

On the other hand, the unskilled tend to have a slightly different curve: Value Convergence. They eventually settle into a position of mediocrity and stay there indefinitely. The only reason their value does not decrease is because the vast amount of institutional knowledge they hoard and create.

This is a little more nuanced and interesting than the simple meritocracy inversion causing the departure of skilled developers. Alex’s explanation suggests that top programmers are only happy in jobs that provide value to them and jobs to which they provide increasing value. The best and brightest not only want to grow but also to feel that they are increasingly useful and valuable–indicative, I believe, of pride in one’s work.

In an article written a few years later titled “Bored People Quit,” Michael Lopp argues that boredom is the precursor to developers leaving:

As I’ve reflected on the regrettable departures of folks I’ve managed, hindsight allows me to point to the moment the person changed. Whether it was a detected subtle change or an outright declaration of their boredom, there was a clear sign that the work sitting in front of them was no longer interesting. And I ignored my observation. I assumed it was insignificant. He’s having a bad day. I assumed things would just get better. In reality, the boredom was a seed. What was “I’m bored” grew roots and became “I’m bored and why isn’t anyone doing anything about it?” and sprouted “I’m bored, I told my boss, and he… did nothing,” and finally bloomed into “I don’t want to work at a place where they don’t care if I’m bored.”

I think of boredom as a clock. Every second that someone on my team is bored, a second passes on this clock. After some aggregated amount of seconds that varies for every person, they look at the time, throw up their arms, and quit.

This theme of motivation focuses more on Alex’s “value provided to the employee” than “value that employee provides,” but it could certainly be argued that it includes both. Boredom implies that the developer gets little out of the task and that the perceived value that he or she is providing is low. But, beyond “value apex” considerations, bored developers have the more mundane problem of not being engaged or enjoying their work on a day to day basis.

What’s the Common Thread?

I’m going to discount obvious reasons for leaving, such as hostile work environment, below-market pay, reduction of benefits/salary, etc., as no-brainers and focus on things that drive talented developers away. So far, we’ve seen some very compelling words from a handful of people that roughly outline three motivations for departure:

  • Frustration with the inversion of meritocracy (“organization stupidities”)
  • Diminishing returns in mutual value of the work between programmer and organization
  • Simple boredom

To this list I’m going to add a few more things that were either implied in the articles above or that I’ve experienced myself or heard from coworkers:

  • Perception that current project is futile/destined for failure accompanied by organizational powerlessness to stop it
  • Lack of a mentor or anyone from whom much learning was possible
  • Promotions a matter of time rather than merit
  • No obvious path to advancement
  • Fear of being pigeon-holed into unmarketable technology
  • Red-tape organizational bureaucracy mutes positive impact that anyone can have
  • Lack of creative freedom and creative control (aka “micromanaging”)
  • Basic philosophical differences with majority of coworkers

Looking at this list, a number of these are specific instances of the points made by Bruce, Alex and Michael, so they aren’t necessarily advancements of the topic per se, though you might nod along with them and want to add some of your own to the list (and if you have some you want to add, feel free to comment). But where things get a little more interesting is that pretty much all of them, including the ones from the linked articles, boil down to a desire for autonomy, mastery, or purpose. For some background, check out this video from RSA Animate. The video is great watching, but if you haven’t the time, the gist of it is that knowledge workers are not primarily motivated by money (as widely believed) but are instead driven by these three motivating factors: the desire to control one’s own work, the desire to get better at things, and the desire to work toward some goal beyond showing up for 40 hours per week and collecting a paycheck.

Frustration with organizational stupidity is usually the result of a lack of autonomy and the perception of no discernible purpose. Alex’s value apex is reached when mastery and purpose wane as motivations, and boredom with a job can quite certainly result from a lack of any of the three RSA needs being met. But rather than sum up the symptoms with these three motivating factors, I’m going to roll it all into one. You can keep your good developers by making sure they have a compelling narrative as employees.

Guaranteeing the Narrative

Bad or mediocre developers are those who are generally resigned or checked out. They often have no desire for mastery, no sense of purpose, and no interest in autonomy because they’ve given up on those things as real possibilities and have essentially struck a bad economic bargain with the organization, pay amount notwithstanding. That is, they give up on self-actualization in exchange for a company paying a mortgage, a few car payments, and a set of utilities for them. I’ve heard a friend of mine call this “golden handcuffs.” They have a pre-defined narrative at work: “I work for this company because repo-men will eventually show up if I don’t.” These aren’t necessarily bad or unproductive employees, but they’re pretty unlikely to be your best and brightest, and you can be assured that they will tend to put forth the minimum amount of effort necessary to hold up their end of the bad bargain.

These workers are easy to keep because that is their default state of affairs. Going out and finding another job is not the minimum effort required to pay the bills, so they won’t do it. They are Bruce’s “residue” and they will tend to stick around and earn obligatory promotions and pay increases by default, and, unchecked, they will eventually sabotage the RSA needs of other, newer developers on the team and thus either convert them or drive them off. The narrative that you offer them is, “Stick around, and every five years we’ll give you a promotion and a silver-plated watch.” They take it, considering the promotion and the watch to be gravy.

But when you offer that same narrative to ambitious, passionate, and talented developers, they leave. They grow bored, and bored people quit. They refuse to tolerate that organizational stupidity, and they evaporate. They look for “up or out,” and, realizing that “out” is much quicker and more appealing, they change their narrative on their own to “So long, suckers!”

You need to offer your talented developers a more appealing narrative if you want them to stay. Make sure that you take them aside and reaffirm that narrative to them frequently. And make sure the narrative is deterministic in that their own actions allow them to move toward one of the goals. Here are some narratives that might keep developers around:

  • “If you implement feature X on or ahead of schedule, we will promote you.”
  • “With the work that we’re giving you over the next few months, you’re going to become the foremost NoSQL expert in our organization.”
  • “We recognize that you have a lot of respect for Bob’s Ruby work, so we’re putting you on a project with him to serve as your mentor so that you can learn from him and get to his level.”
  • “We’re building an accounting package that’s critical to our business, and you are going to be solely responsible for the security and logging portions of it.”
  • “If your work on project Y keeps going well, we’re going to allow you to choose your next assignment based on which language you’re most interested in using/learning.”

Notice that these narratives all appeal to autonomy/mastery/purpose in various ways. Rather than dangling financial or power incentives in front of the developers, the incentives are all things like career advancement/recognition, increased autonomy, opportunities to learn and practice new things, the feeling of satisfaction you get from knowing that your work matters, etc.

And once you’ve given them some narratives, ask them what they want their own to be. In other words, “we’ll give you more responsibility for doing a good job” is a good narrative, but it may not be the one that the developer in question envisions. It may not always be possible to give the person exactly what he or she wants, but at least knowing what it is may lead to attractive compromises or alternate ideas. A new team member who says, “I want to be the department’s principal architect” may have his head in the clouds a bit, but you might be able to find a small, one-man project and say, “start by architecting this and we’ll take it from there.”

At any point, both you and the developers on your team should know their narratives. This ensures that they aren’t just periodic, feel-good measures–Michael’s “diving saves”–but constant points of job satisfaction and purpose. The developers’ employment is a constant journey that’s going somewhere, rather than a Sisyphean situation where they’re running out the clock until retirement. With this approach, you might even find that you can coax a narrative out of some “residue” employees and reignite some interest and productivity. Or perhaps defining a narrative will lead you both to realize that they are “residue” because they’ve been miscast in the first place and there are more suitable things than programming they could be doing.

Conclusion

The narratives that you define may not be perfect, but they’ll at least be a start. Don’t omit them, don’t let them atrophy and, whatever you do, don’t let an inverted meritocracy–the “residue”–interfere with the narrative of a rising star or top performer. That will catapult your group into a vicious feedback loop. Work on the narratives with the developers and refine them over the course of time. Get feedback on how the narratives are progressing and update them as needed.

Alex thinks that departure from organizations is inevitable, and that may be true, but I don’t know that I fully agree. I think that as long as talented employees have a narrative and some aspirations, their value need not hit an apex and level off. This is especially true at, say, consulting firms where new domains and ad-hoc organization models are the norm rather than the exception. But what I would take from Alex’s post is the perhaps radical idea that it is okay if the talented developer’s narrative doesn’t necessarily involve the company in five or ten years. That’s fine. It allows for replacement planning and general, mutual growth. Whatever the narrative may be, mark progress toward it, refine it, and make sure that your developers are working with and toward autonomy, mastery, and purpose.

…….

Edit: For anyone interested, there is an E-Book of the (edited for E-Book format) contents of this post. This is the publisher’s website, which has links to all of the book stores in which it appears. Or, here are links to it for Amazon, Barnes and Noble, and iTunes. The price is the lowest one for each store, which is $0.99 in all but iTunes, where it’s free.


Connecting to TFS Server with Different Credentials

Hello, all. My apologies for the unannounced posting hiatus. I’ve recently started a new employment venture, and I was also on vacation last week, touring the Virginia/Pennsylvania/DC area. Going forward, I’m going to be doing more web-related stuff and probably be a little more of a jack of all trades, so stay tuned for posts along those lines.

Today, I’m going to post under “lessons learned” for getting rid of an annoyance. Every once in a while I have occasion to connect to a TFS server using different credentials from those with which I have logged in. Whenever I do this, I am prompted for credentials when connecting to source control, which can be fairly annoying. Well, thanks to Mark Smith for a recent tip on how to avoid this.

In Windows 7/Server, go to the Control Panel and choose “Credential Manager.” In a strange quirk, “Credential Manager” isn’t actually visible in the default Control Panel view, so you have to click “View By” and select something other than “Category.” Once you’ve done this, you should see the Credential Manager.

In Credential Manager, go to “Add a Windows Credential” and enter the computer name, along with your login credentials for it. You’ll probably want to include the domain, so your username will be YOURDOMAIN\YOURUSERNAME. The domain isn’t strictly necessary if both logins are on the same domain, but a common scenario is that you’re logged in to the local machine and connecting to a TFS server on a domain somewhere.
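If you prefer the command line, the same stored credential can be added with the built-in cmdkey utility, which writes to the same Windows credential store that Credential Manager manages. This is just a sketch with placeholder server, domain, and account names — substitute your own:

```
:: Store a Windows credential for the TFS server.
:: /add is the TFS server's machine name; /user is DOMAIN\username.
cmdkey /add:YOURTFSSERVER /user:YOURDOMAIN\YOURUSERNAME /pass:YOURPASSWORD

:: Verify that the credential was stored.
cmdkey /list:YOURTFSSERVER
```

Omitting /pass will cause some versions of cmdkey to prompt for the password interactively, which avoids leaving it in your console history.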

Once you’re done, you might need to restart Visual Studio. (Truthfully, I don’t know because I had already closed it when I was doing this).

Richard Banks has posted this same process with screenshots (minus the bit about Credential Manager not showing up by default).

And, that’s it. Spend 30 seconds doing it and save yourself daily or even more frequent annoyance from here forward. Cheers!
