DaedTech

Stories about Software

Professional Code

About a year ago, I read this post in my feed reader and created a draft with a link to it and a little note to myself that said, “interesting subject.” Over the past weekend, I was going through old drafts that I’d never gotten around to finishing and looking to remedy the situation when I came across this one and decided to address it.

To be perfectly honest, I had no idea what I was going to write about a year ago. I can’t really even speculate. But I can talk a bit about what I think of now as professional code. Like Ayende and Trystan, I don’t think it’s a matter of following certain specific and abiding principles like SOLID as much as it is something else. They talk about professional code in terms of how quickly the code can be understood by maintainers since a professional should be able to understand what’s going on with the code and respond to the need to change. I like this assessment but generally categorize professionalism in code slightly differently. I think of it as the degree to which things that are rational for users to want or expect can be done easily.

To illustrate, I’ll start with a counter-example, lifted from my past and obfuscated a bit. A handful of people had written an application that centered around modifications to an XML file. The XML file and the business rules governing its contents were fairly complex, so it wasn’t a trivial application. The authors of this app had opted to prevent concurrent edits and race conditions by implementing an abstraction wherein the file was represented by a singleton class. Predictably, the design heavily depended on XmlFile.Instance.CallSomeMethod() style invocations.
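A minimal sketch of what that abstraction might have looked like (the names here are invented, though the XmlFile.Instance.CallSomeMethod() style is taken from the actual code base):

public class XmlFile
{
    private static readonly XmlFile _instance = new XmlFile();

    public static XmlFile Instance
    {
        get { return _instance; }
    }

    // Private constructor means there can never be a second instance,
    // which is exactly the assumption that causes trouble later.
    private XmlFile() { }

    public void CallSomeMethod()
    {
        // Operate on the one and only file...
    }
}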

One day, someone in the company expressed that it’d be a nice value-add to allow this application to show differences between incarnations of this XML file — a diff of changes, if you will. When this idea was presented to the lead/architect of this code base, he scoffed and actually became sort of angry. Evidently, this was a crazy request. Why would anyone ever want to do that? Inconceivable! And naturally, this was completely unfeasible without a rewrite of the application, and good luck getting that through.

If you’re looking for a nice ending to this story, you’re barking up the wrong tree. It was the person asking for the feature who walked away humbled, when it should have been the person with the inflexible design. As a neutral observer, I was amazed at this exchange — but then again, I knew what the code looked like. The requester went away feeling dumb because the scoffer had a lot of organizational clout, so everyone assumed the scoffing was appropriate. But I knew better.

What had really happened was that a questionable design decision (representing an XML file as a singleton instance) became calcified as a cornerstone assumption of the application. Then along came a user with a perfectly reasonable request, and the request was rebuffed because the system, as designed, simply couldn’t handle it. I think of this as equivalent to you calling up the contractor that built your house and asking him if he’d be able to paint your living room, and having him respond, “not the way I built your house.”

And that, to me, is unprofessional code. And I don’t mean it in the sense that you often hear it when people are talking about childish or inappropriate behavior — I mean that it actually seems like amateur hour. The more frequently you tell your users that things that seem easy are actually really difficult, the less professional your code is going to seem. The reasoning is the same as with the example of a contractor that somehow built your house so that the walls couldn’t be painted. It represents a failure to understand and anticipate the way the systems you design tend to evolve and change in the wild, which is indicative of a lack of relevant professional experience. Would a seasoned, professional contractor fail to realize that most people want to paint the rooms in their houses sooner or later? Would a seasoned, professional software developer fail to realize that someone might want multiple instances of a file type?

Don’t get me wrong. I’m not saying that you’re a hack if there’s something that a user dreams up and thinks will be easy that you can’t do. There are certainly cases where something that seems easy won’t be easy, and it doesn’t mean that your design is bad or unprofessional. I’m talking about what I perceive to be a general, overarching trend. If changes to the software seem like they should be easy, then they probably should be easy. If you’ve added 20 different customer types to your system, it’d be weird if adding a 21st was extremely hard. If you currently support storing data in a database or to a file, it’d be weird if there was a particular record type that you couldn’t put in a file. If you have some concept of security and roles in your system, it’d be weird if adding a user required a re-deployment of your software.

According to the Clean Code videos by Bob Martin, a defining characteristic of good architecture is that it allows decisions to be deferred as long as possible. If the architecture is well designed, for instance, you should be able to write a lot of the code without knowing if it’s going to be a web app or desktop app or without knowing whether you’d use MySQL or PostgreSQL or MongoDB. I’d carry this a bit further and say that being able to anticipate what users might want and what they might change their minds about and then designing accordingly is the calling card of a writer of professional code.

Intro to Unit Testing 8: Test Suite Management and Build Integration

It’s been over a month now since my last post in this series, and for that I sort of apologize. I think I’ve been channelling all of my instructive energy into my now-finished Pluralsight course, leaving the blog largely for opinions, screeds, and a random hiring announcement. So, let’s get back on track and wrap this thing up. I have this post and another one slated and then we can call it a day.

So far, I’ve talked quite a lot about how and when (and when not) to write unit tests. I’ve offered up some techniques for helping you isolate the classes that you want to test, including the use of test doubles. And finally, I offered some advice on how to get people to leave you alone and let you write tests. So now I’d like to turn and offer some advice beyond just writing the things. You need to live with them, manage them and leverage them over the course of time.

Managing the Suite

You’ve built them. So, now what? That’s a question that will sneak up on you at some point after you get started. For the first few or even few dozen classes you test, you’ll alternate between some exasperation at spending extra time doing something new and satisfaction at, well, doing something new. But then, at some point, you’ll be sitting around and notice that your test suite has like 400 tests and think, “wow, that’s a lot of code… do I really want all this?”

That feeling will hit you even harder when you go to change something under a tight deadline and your real quick change makes a test go red. You’re pretty sure the test is broken because it was testing the old way of doing things, so you really just want to comment out the test and you wonder why it’s such a pain to change the code. Why do you have to waste so much time to change one line of code?

The answer to these questions lies in practice but also effective test suite management. If you let the unit test suite become a boat anchor, it will drag you down. Your frustration will be real and reasonable, rather than just a temporary product of you being in a hurry and unfamiliar with working in a code base under test. You need to take care to prevent this from happening, and I’m going to tell you how in this section.

Name Your Tests Clearly and Be Wordy

When you’re writing a unit test, you’re looking at code. But when you’re running your test suite, most of the time you aren’t looking at code, and when you’re trying to understand why a run or a build failed, you’re never looking at code. When the test suite is failing, you don’t want to waste time figuring out why. Having to open the IDE, navigate to the test, read the code and figure out the problem is exactly that kind of waste.

Don’t give your test methods names like “Test24” or “CustomerTest” or something. Instead, give them names like “Customer_IsValid_Returns_False_When_Customer_SocialSecurityNumber_Is_Empty”. That method name may seem ridiculous, especially if you’re used to giving methods short names, but trust me, you’ll be thankful for it. When your build is failing, which of these method names would you rather see an X next to? Would you rather be saying “looks like test 24 is failing,” or would you rather be saying, “oh, I wonder why someone made it so that an empty SSN is now considered valid?” If you say the first one, you’re lying.
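To make that concrete, here’s a minimal sketch of such a test, assuming MSTest attributes and a hypothetical Customer class with a SocialSecurityNumber property and an IsValid() method:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerTests
{
    [TestMethod]
    public void Customer_IsValid_Returns_False_When_Customer_SocialSecurityNumber_Is_Empty()
    {
        // Arrange: a customer with an empty SSN (Customer is hypothetical here)
        var customer = new Customer { SocialSecurityNumber = string.Empty };

        // Act and assert: if this fails in a build report, the name alone
        // tells you what behavior changed. No IDE spelunking required.
        Assert.IsFalse(customer.IsValid());
    }
}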

This may seem unimportant in the scheme of things, but it’s the difference between associating frustration and confusion with your test suite and viewing it as a warning system for potentially undesirable changes. The test suite needs to be communicating clearly to you what’s wrong. Descriptive test names help do that and they help you identify whether it’s your code or the test itself that needs to be changed in the face of changing requirements.

Make Your Test Suite Fast

Ruthlessly delete and cull out slow tests. I can’t say it more plainly than that. A good test suite runs in seconds, max. If yours starts to take minutes, or, God forbid, hours, then it’s rotting and becoming useless to you. Think of it this way — if it takes several minutes to run the test suite, how often are you going to do it? Every time you make a change, or just when you check in? If it takes hours, will you ever run it voluntarily?

If your test suite takes a long time to run, nobody will run it. Short feedback loops are of paramount importance to developers, and we optimize for efficiency. If the unit test suite is inefficient, we’ll find other ways to get feedback. As such, it is incredibly important to ensure that your test suite always runs quickly. Treat it as if the rest of your team were waiting for any legitimate excuse not to use the test suite, and don’t let inefficiency be that excuse.

Test Code is First Class Code

A common mistake that I see among those relatively new to testing is test code that’s something of a mess. The code will be brittle, heavily duplicated, weird, and hard to read. In short, your tests and test classes will contain code that you wouldn’t be caught dead putting into production.

Don’t do that. Treat your test code as if it were any other code. Eliminate duplication. Factor common functionality out into methods. Be descriptive with naming and with the flow of the method. Keep that code clean. I get that there’s a desire when it comes to testing to make as much of a mess as possible in the “bug bash” sense of throwing chaos at the situation and proving that your code can handle it, but the chaos needs to be controlled, and you can control it by keeping your test code clean and maintainable. If the tests are clean and easy to maintain, people won’t mind going in periodically to make an adjustment. If they’re unruly, people will get annoyed and comment them out or stop running them.

Have a Single Assertion per Test

This is a subtle one, but it also goes toward maintainability. If you start writing tests that have 20 asserts in them, you may feel good that you’re exercising a whole section of the code, but really you’re making things hard for yourself later. If all 20 asserts pass (or at least the first 19), then all will be executed. But if the first one fails, none of the rest get executed. This means that in test methods with lots of asserts, it’s not always clear where they’re failing, which means it’s not always clear what’s going wrong.
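Here’s a hedged sketch of the difference, using a hypothetical Order type with numeric totals:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderTests
{
    // Harder to diagnose: if the first assert fails, the other two never
    // execute, so you discover problems one at a time.
    [TestMethod]
    public void New_Order_Has_Empty_Totals()
    {
        var order = new Order();
        Assert.AreEqual(0, order.Subtotal);
        Assert.AreEqual(0, order.Tax);
        Assert.AreEqual(0, order.Total);
    }

    // Easier to diagnose: one behavior, one pass/fail light per test.
    [TestMethod]
    public void New_Order_Has_Zero_Subtotal()
    {
        Assert.AreEqual(0, new Order().Subtotal);
    }

    [TestMethod]
    public void New_Order_Has_Zero_Tax()
    {
        Assert.AreEqual(0, new Order().Tax);
    }
}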

In order for your test suite to be an asset, it has to be a clear indicator of what’s going wrong. Which would you find more useful in your car: a series of many different lights with helpful diagrams that lit up to indicate a problem, or one unlabeled red light that came on whenever anything at all was wrong? If you had that latter light and it could mean anything from your gas being low to you being out of wiper fluid to imminent destruction of your transmission, I bet you’d just start ignoring it after a while.

Don’t Share State Between Your Tests

There is no more surefire way to drive yourself insane at some future date than by storing some kind of application state among unit tests being executed. What I mean is, if you have some test A that sets a global counter variable to 1, and then you have another test B that depends on the global counter being set to 1 in order to succeed, you are in for a world of hurt.
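A sketch of exactly that trap, in MSTest terms with invented names:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CounterTests
{
    // Shared mutable state across tests: the culprit.
    private static int _globalCounter;

    [TestMethod]
    public void TestA_SetsCounterToOne()
    {
        _globalCounter = 1;
        Assert.AreEqual(1, _globalCounter);
    }

    [TestMethod]
    public void TestB_AssumesTestARanFirst()
    {
        // Passes only if TestA happened to execute first. Since runners
        // guarantee no such ordering, this fails intermittently elsewhere.
        Assert.AreEqual(1, _globalCounter);
    }
}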

The problem is that there is no guarantee that the unit test runner will execute the tests in any particular order. What’s likely to happen is that your tests get executed in a particular order whenever you run them on your machine, so everything goes fine. But when the build machine runs them they fail. Weird. So you check them on your friend Bob’s machine, and they pass there. But on Alice’s machine, they fail. If you didn’t already know why this was happening because I just told you, can you imagine how much of your hair you’d pull out? You’d probably be checking the IDE version on those machines, compiler information, OS settings, and God only knows what else. It’d be a wild goose chase.

And imagine if it worked on everyone’s machine initially and then six months later started failing occasionally on the build machine. Machine isn’t the only failing dimension — there’s also time. So please, whatever you do, do not have your unit tests depend on the execution of a previous test. This practice, more than any other, is likely to lead to a rage-quitting of unit testing as a practice where you simply take all of them out of the build.

Encourage Others and Keep Them Invested

This sounds like a strange one to round out the section, but it’s important. If you’re the only one fighting the good fight with unit tests, it becomes daunting and exasperating. Everyone else’s reaction to failing tests is annoyance and they’re waiting for excuses just to stop altogether. You wind up feeling that you’re in an adversarial relationship with the team (I speak from experience here). But if you get others to buy in, you’re not shouldering the burden alone and you have help keeping the suite healthy and helpful.

Build Integration

When you first start out unit testing, the tests will be sort of disorganized and haphazard. You’ll write a few to get the hang of it and then maybe discard them. After a bit of that, you’ll start checking them into your solution (unless you’re an incorrigible weirdo or a liar). You do that, and the suite grows and, ideally, everyone is running it locally to keep things clean and be notified of potential breaking changes.

But you have to take it beyond that at some point if you want to realize the full value of the unit tests. They can’t just be a thing everyone remembers to do locally on pain of nagging emails or because someone will buy the team donuts or some other peer-pressure-oriented demerit system. Failing unit tests have to have real (read: automated) consequences. And the best way to do this is to make it so that failing unit tests mean a failing build.

If you’re in a shop that’s not as formal, this may be difficult at first. One handicap may be that you’re reading this and saying “what do you mean by ‘the build?'” If what you do is write code and take some kind of executable out of your project’s output directory on your machine and push it to a server or to your users, you’ve got some work to do before you think about integrating unit tests. You need a build.

A build is an automated process by which your source code is turned into a production-ready, deployable package. It’s automated in the sense that it doesn’t involve you hitting Ctrl-Shift-B or Ctrl-F6 or whatever you do manually in your IDE to build. The Build, with a capital B, is a process that checks your code out of source control, builds it, runs checks and whatever else is necessary, perhaps increments the versioning of the executables, and then spits out the final product that will be pushed to a server or burned onto a DVD or whatever. If you want to read more about build tools, you can google around for TeamCity, CruiseControl, TFS, FinalBuilder, Jenkins, etc. And you don’t have to use a product like that — you can create your own using shell scripts or code if you choose.

Because of all the different options when it comes to programming languages, unit test technologies and build tools, I’m not going to offer a tutorial on how to integrate unit tests into your build. To be comprehensive, I’d need to give dozens of such tutorials. But what I will say is that your integration is going to take the same basic format no matter what tools you’re using. The build is a series of steps that passes if everything goes smoothly and the deliverables are ultimately generated. If a step in the build fails, then the build itself fails. What you need to do is add a step that involves running the unit tests. With this in place, you’re creating a situation where any failing unit test means that the entire build fails.

Conceptually, this is pretty straightforward. Unit test runners can be run in command line fashion and they’ll generate a return value of some kind. So the build tool needs to examine the test runner’s output for an error code. If it finds one, it puts the brakes on the whole operation.
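As a rough sketch of that concept (not a tutorial for any particular tool), a custom build step could be as simple as shelling out to the runner and propagating its exit code. I’m assuming NUnit’s console runner here, but MSTest and the others behave similarly:

using System.Diagnostics;

class RunUnitTestsStep
{
    static int Main()
    {
        // Launch the test runner against the test assembly.
        var runner = Process.Start("nunit-console.exe", "MyApp.Tests.dll");
        runner.WaitForExit();

        // Command-line runners return nonzero when any test fails. Returning
        // that code makes the build tool fail the build, which is the point.
        return runner.ExitCode;
    }
}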

It may seem extreme at first to torpedo the whole build because of a failing unit test, but when you think about it, what else should possibly happen? Why would you want a process that allowed you to ship code knowing that it was defective in a way that it didn’t use to be? That’s amateur hour. What’s more, if your team starts understanding that failed unit tests mean a failed build, they’ll be sure to run the tests before check-in so that they don’t fail. It will become a natural part of your process, and the quality of your software will be dramatically improved for it.

Static Analysis, NDepend, and a Pluralsight Course

I absolutely love statistics. Not statistics as in the school subject — I don’t particularly love that branch of mathematics with its binomial distributions and standard deviations and whatnot. I once remarked to a friend in college that statistics-the-subject seemed like the ‘science’ of taking a guess and then rigorously figuring out how wrong you were. Flippant as that assessment may have been, statistics-the-subject has hardly the elegant smoothness of calculus or the relentlessly logical pursuit of discrete math. Not that it’s uninteresting — to a math geek like me, it’s all good — but it just isn’t really tops on my list.

But what is fascinating to me is tabulating outcomes and gamification. I love watching various sporting events on television and keeping track of odd things. When watching a basketball game, I always know the size of the “run” the teams are on before the announcers think to say something like “Chicago is on a 15-4 run over the last 6:33 this quarter.” I could have told you that. In football, if the quarterback is approaching a first-half passing record, I’m calculating the tally mentally after every play and keeping track. Heck, I regularly watch poker on television not because of the scintillating personalities at the tables but because I just like seeing what cards come out, what hands win, and whether the game is statistically normal or aberrant. This extends all the way back to my childhood, when things like my standardized test scores and my class rank were dramatically altered by my learning that someone was keeping score and ranking them.

I’m not sure what it is that drives this personality quirk of mine, but you can imagine what happened some years back when I discovered static analysis and then NDepend. I was hooked. Before I understood what the Henderson Sellers Lack of Cohesion in Methods score was, I knew that I wanted mine to be lower than other people’s. For those of you not familiar, static analysis, (over)simplified, is an activity that examines your source code without executing it and makes educated guesses about how it will behave at runtime and beyond (i.e., maintenance). NDepend is a tool that performs static analysis at a level and with an amount of detail that makes it, in my opinion, the best game in town.

After overcoming an initial pointless gamification impulse, I learned to harness it instead. I read up on every metric under the sun and started to understand what high and low scores correlated with in code bases. In other words, I studied properties of good code bases and bad code bases, as described by these metrics, and started to rely on my own extreme gamification tendencies in order to drive my work toward better code. It wasn’t just a matter of getting in the habit of limiting my methods to the absolute minimum in size or really thinking through the coupling in my code base. I started to learn when optimizing to improve one metric led to a decline in another — I learned lessons about design tradeoffs.

It was this behavior of seeking to prove myself via objective metrics that got me started, but it was the ability to ask and answer lots of questions about my code base that kept me coming back. I think that this is the real difference maker when it comes to NDepend, at least for me. I can ask questions, and then I can visualize, chart and track the answer in just about every conceivable way. I have a “Moneyball” approach to code, and NDepend is like my version of the Jonah Hill character in that movie.

Because of my high opinion of this tool and its importance in the lives of developers, I made a Pluralsight course about it. If you have a subscription and have any interest in this subject at all, I invite you to check it out. If you’re not familiar with the subject, I’d say that if your interest in programming breaks toward architecture — if you’re an architect or an aspiring architect — you should also check it out. Static analysis will give you a huge leg up on your competition for architect roles, and my course will provide an introduction for getting started. If you don’t have a Pluralsight subscription, I highly recommend trying one out and/or getting one. This isn’t just a plug for me to sell a course I’ve made, either. I was a Pluralsight subscriber and fan before I ever became an author.

If you get a chance to check it out, I hope you enjoy.

Module Boundaries and Demeter

I was doing a code review recently, and I saw something like this:

public class SomeService
{
    public void Update(Customer customer)
    {
        //Do update stuff
    }

    public void Delete(int customerId)
    {
        //Do delete stuff
    }
}

What would you say if you saw code like this? Do you see any problem in the vein of consistent abstraction or API writing? It’s subtle, but it’s there (at least as far as I’m concerned).

The problem that I had with this was the mixed abstraction. Why do you pass a Customer object to Update and an integer to Delete? That’s fairly confusing until you look at the names of the variables. The method bodies are elided because they shouldn’t matter, but to understand the reason for the mixed abstraction you’d need to examine them. You’d need to see that the Update method uses all of the fields of the customer object to construct a SQL query and that the corresponding Delete method needs only an ID for its SQL query. But if you need to examine the methods of a class to understand the API, that’s not a good abstraction.

A better abstraction would be one that had a series of methods that all had the same level of specificity. That is, you’d have some kind of “Get” method that would return a Customer or a collection of Customers and then a series of mutator methods that would take a Customer or Customers as arguments. In other words, the methods of this class would all be of the form “get me a customer” or “do something to this customer.”
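In code, that consistent abstraction might look something like this sketch, where every method deals in the Customer domain concept:

using System.Collections.Generic;

public interface ICustomerService
{
    // "Get me customers" -- no ints, no SQL details leaking through.
    IEnumerable<Customer> GetAll();

    // "Do something to this customer" -- the same level of specificity
    // throughout the API.
    void Update(Customer customer);
    void Delete(Customer customer);
}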

The only problem with this code review was that I had just explained the Law of Demeter to the person whose code I was reviewing. So this code:

public void DeleteCustomer(int customerId)
{
    string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customerId;
    //Do some sql stuff...
}

was preferable to this:

public void DeleteCustomer(Customer customer)
{
    string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customer.Id;
    //Do some sql stuff...
}

The reason is that you don’t want to accept an object as a method parameter if all you do with it is use one of its properties. You’re better off just asking for that property directly rather than taking a needless dependency on the containing object. So was I a hypocrite (or perhaps just indecisive)?

Well, the short answer is “yes.” I gave a general piece of advice one week and then gave another piece of advice that contradicted it the next. I didn’t do this, however, because of caprice. I did it because pithy phrases and rules fail to capture the nuance of architectural decisions. In this case, the Law of Demeter is at odds with providing a consistent abstraction, and I value the consistent abstraction more highly, particularly across a public seam between modules.

What I mean is, if SomeService were an implementation of a public interface called ICustomerService, what you’d have is a description of some methods that manipulate Customer. How do they do it? Who knows… not your problem. Is the customer in a database? Memory? A file? A web service? Again, as consumers of the API we don’t know and don’t care. So because we don’t know where and how the customers are stored, what sense would it make if the API demanded an integer ID? I mean, what if some implementations use a long? What if Customers are identified elsewhere by SSN for deletion purposes? The only way to be consistent across module boundaries (and thus generalities) is to deal exclusively in domain object concepts.

The Law of Demeter is also called the Principle of Least Knowledge. At its (over) simplest, it is a dot counting exercise to see if you’re taking more dependencies than is strictly necessary. This can usually be enforced by asking yourself if your methods are using any objects that they could get by without using. However, in the case of public facing APIs and module boundaries, we have to relax the standard. Sure, the SQL Server version of this method may not need to know about the Customer, but what about any scheme for deleting customers? A narrow application of the Law of Demeter would have you throw Customer away, but you’d be missing out by doing this. The real question to ask in this situation is not “what is the minimum that I need to know” but rather “what is the minimum that a general implementation of what I’m doing might need to know.”

Code Generation Seems Like a Failure of Vision

I think that I’m probably going to take a good bit of flack for this post, but you can’t win ’em all. I’m interested in contrary opinions and arguments because my mind could be changed. Nevertheless, I’ve been unable to shake the feeling for months that code generation is just a basic and fundamental design failure. I’ve tried. I’ve thought about it in the shower and on the drive to work. I’ve thought about it while considering design approaches and even while using it (in the form of Entity Framework). And it just feels fundamentally icky. I can’t shake the feeling.

Let me start out with a small example that everyone can probably agree on. Let’s say that you’re writing some kind of GUI application with a bunch of rather similar windows. And let’s say that mostly what you do is take all of the presentation logic for the previous window, copy, paste and adjust to taste for the next window. Oh noes! We’re violating the DRY principle with all of that repetition, right?

What we should be doing instead, obviously, is writing a program that duplicates the code more quickly. That way you can crank out more windows much faster and without the periodic fat-fingering that was happening when you did it manually. Duplication problem solved, right? Er, well, no. Duplication problem automated and made worse. After all, the problem with duplicate code is a problem of maintenance more than initial push. The thing that hurts is later when something about all of that duplicated code has to be changed and you have to go find and do it everywhere. I think most reading would agree that code generation is a poor solution to the problem of copy and paste programming. The good solution is a design that eliminates repetition and duplication of knowledge.

I feel as though a lot of code generation that I see is a prohibitive micro-optimization. The problem is “I have to do a lot of repetitive coding” and code generation solves this problem by saying, “we’ll automate that coding for you.” I’d rather see it solved by saying, “let’s step back and figure out a better approach — one in which repetition is unnecessary.” The automation approach puts a band-aid on the wound and charges ahead, risking infection.

For instance, take the concept of List in C#. List is conceptually similar to an array, but it automatically resizes, thus abstracting away an annoying detail of managing collections in languages from days gone by. I’m writing a program and I think I want an IntList, which is a list of integers. That’s going along swimmingly until I realize that I need to store some averages in there that might not be round numbers, so I copy the source code of IntList to DoubleList and I do a “Find-And-Replace” with Int and Double. Maybe later I also do that with string, and then I think, “geez — someone should write a program that you just tell it a type and it generates a list type for it.” Someone does, and then life is good. And then, later, someone comes along with the concept of generics/templates and everyone feels pretty sheepish about their “ListGenerator” programs. Why? Because someone actually solved the core problem instead of coming up with obtuse, brute-force ways to alleviate the symptoms.
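And that core solution is what we ended up with. One parameterized type replaces the entire family of copied lists and the generator program alike:

using System.Collections.Generic;

class ListExamples
{
    static void Main()
    {
        // One generic List<T> does what IntList, DoubleList, StringList --
        // and the "ListGenerator" that stamped them out -- used to do.
        var ages = new List<int> { 25, 32, 47 };
        var averages = new List<double> { 98.6, 72.1 };
        var names = new List<string> { "Alice", "Bob" };
    }
}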

And when you pull back and think about the whole idea of code generation, it’s fairly Rube-Goldbergian. Let’s write some code that writes code. It makes me think of some stoner ‘brainstorming’ a money making idea:

[Image: a scrawled “Inventions” notebook]

I realize that’s a touch of hyperbole, but think of what code generation involves. You’re going to feed code to a compiler and then run the compiled program which will generate code that you feed to the compiler, again, that will output a program. If you were to diagram that out with a flow chart and optimize it, what would you do? Would you get rid of the part where it went to the compiler twice and just write the program in the first place? (I should note that throughout this post I’ve been talking about this circular concept rather than, say, the way ASP or PHP generate HTML or the way Java compiles to bytecode — I’m talking about generating code at the same level of abstraction.)

The most obvious example I can think of is the aforementioned Entity Framework that I use right now. This is a framework utility that uses C# in conjunction with a markup language (T4) to generate text files that happen to be C# code. It does this because you have 100 tables in your database and you don’t want to write data transfer objects for all of them. So EF uses reflection and IQueryable with its EDMX to handle the querying aspect (which saves you from the fate we had for years of writing DAOs) while using code generation to give you OOP objects to represent your data tables. But really, isn’t this just another band-aid? Aren’t we really paying the price for not having a good solution to the Impedance Mismatch Problem?

I feel a whole host of code gen solutions is also born out of the desire to be more performant. We could write something that would look at a database table and generate, on the fly, using reflection, a CRUD form at runtime for that table. The performance would be poor, but we could do it. However, confronted with that performance, people often say, “if only there were a way to automate the stuff we want but to have the details sorted out at compile time rather than runtime.” At that point the battle is already won and the war already lost, because it’s only a matter of time until someone writes a program whose output is source code.

I’m not advocating a move away from code generation, nor am I impugning anyone for using it. This post is more in the same vein as ones that I’ve written before (about not using files for source code and avoiding casts in object oriented languages). Code generation isn’t going anywhere anytime soon, and I know that I’m not even in a position to quit my reliance on it. I just think it’s time to recognize it as an inherently flawed band-aid rather than to celebrate it as a feat of engineering ingenuity.

Notes on Job Hopping Part 4: Free Agency

I’ve been very busy of late, so I had let this series slip a bit by the wayside. But I was taking a break from recording my Pluralsight course to sort of mindlessly read tweets when I noticed this one:

[Embedded tweet]

Juxtapositions as Outrage Factories

There is a fine line in life when it comes to juxtapositions. On one side of the line is genuine profundity, and on the other side is schmaltzy non sequitur and/or demagoguery. And I’d argue that the line is not entirely subjective, as it might seem. There is a Tupac Shakur lyric that is sort of raw and powerful that says, “they’ve got money for wars but can’t feed the poor.” If you’re a bleeding heart type, you’ll probably agree wholeheartedly. If you’re more of a pragmatist, you might say, “well, there are degrees of poverty and need, and what good is feeding people if some foreign entity is lobbing cruise missiles at them?” But you have to admit that he’s comparing apples to apples, so to speak. There is some finite pool of money, and some of that money is being spent on blowing one another up instead of feeding people that are hungry.

On the other side of this line of juxtaposition lie tiresome canards like “they can send a man to the moon but they can’t make a computer that doesn’t crash?!?” Yeah, because those are the same engineering problem. Or how about “a country can’t run a fiscal deficit because I don’t run my own budget that way at home with the groceries and the car payments!” Right, because macroeconomics is like your budget spreadsheet. And you also have the ability to print money to pay off your debts.

Back to the tweet. I think it sits right on the juxtaposition line. On the one hand, it seems to be sort of garden-variety line-employee populism (“pointy-haired bosses bad, knowledge workers good”) that ignores an important difference between athlete-coach and employee-manager — namely, the inverted money-and-fame power dynamic of athletes and coaches as compared to line employees and managers. LeBron James, not Erik Spoelstra, puts butts in seats and gets paid a king’s ransom, even if Spoelstra does theoretically call the shots. But it does raise an interesting question that isn’t the one literally being asked (the answer to the tweet’s question being easy: managers make more money, have offices and get to boss people around).

Overvaluing Management

The interesting question that is raised is “why are line managers and other overhead personnel overvalued?” Let me say here that I’m not going to spend this post talking in much detail about that. I poured a lot of time and thought into it for the concluding part of my Expert Beginner E-Book, so I’ll summarize here by saying that there is a bit of a societal pyramid scheme that occurs when it comes to work. We collectively agree on a system where 20-somethings that are new to the workforce do all the grunt work for little pay and that, over the course of our careers, we accumulate vacation time, bigger salaries, larger desks and offices, and the ‘right’ to tell others what to do rather than doing it ourselves. We pat ourselves on the back for ‘earning’ it, but often that assessment is pretty questionable. It’s more a matter of waiting our turn.

Managers are overvalued in organizations because of a collective endorsement of the same kind of reasoning that drives social security — disproportionate dues paying now, followed by disproportionate dues receiving later. The entry level folks tolerate overvalued mid-level management either because they have no choice or because they someday want to be that overvalued mid-level management which is, of course, a fairly sweet deal once you get into the club.

But the overvaluation of management goes deeper than that and finds its real roots in the undervaluation of non-management. In other words, if the Product X Team, consisting of five knowledge workers and a line manager, delivers a spectacular and wildly profitable success, is this owed to the team or to the manager? I’d say that, in a very real way, this is comparable to sports teams in that a good manager will account for some variance and some wins here and there by managing egos, strategizing and motivating, but at the end of the day no one is going to manage a hopelessly underqualified team into serious contention. But, unlike sports teams, the spotlight in the corporate world often falls onto the manager because the manager is in a position to seize it and because it’s simply easier to give credit to the “leader” than to parcel it out in equal portions to the team.

So why do young athletes aspire to be LeBron James and not Erik Spoelstra, while young academics want to be the boss rather than the inventor? Because Steve Jobs. Because Warren Buffett. Because bosses, like athletes, represent the competitive pinnacle. It’s really the same thing in the end — a desire to dominate. Athletes, CEOs and, to a lesser extent, line managers have gotten to where they are by defeating foes in direct competition, and that’s appealing to children in an environment where peer competition is natural and amplified (grade school).

Positive Sum Makers

The overvaluation of managers (and athletes, actually) is a zero-sum outlook on the world. You become a success by competing against others and causing them to fail. There are winners and losers, and the glory lies in being a winner in this game. But there is another avenue: the one taken by what I’d call the Inventor or the Maker — the positive sum player. Makers (and I use this umbrella to describe the overwhelming majority of line-level programmers as well as engineers and other people who produce work product) shine by making the world a better place for all. They transmute their creativity and ambition into products and services that improve the standard of living and better people’s lives. There doesn’t need to be a loser in this game for the Maker to be a winner.

Make no mistake — there is certainly competition among Makers. But it is a healthy, positive sum competition. They compete against one another and themselves to invent great things, to do it quickly, and to do it well. I may want to be the best programmer on earth and that desire may drive me to some late nights hitting the books and cranking out code, but I can produce helpful things without “defeating” some other programmer in some kind of game or competition for position.

The interplay between Makers and what I’ll call Competitors has historically been an uneasy balance. In the time before widespread knowledge workers in corporations, you had the mad-scientist/inventor archetype as your Makers: the Edisons and Teslas of the world. Then with the technological growth of the 20th century, the Makers were kind of funneled into working as Organization Man where they went from being valued professionals in the 50s to eventually being Dilbert in more recent times — toiling under the inept stewardship of a pointy haired boss that also happened to be a marginally victorious Competitor.

Makers were in a difficult position, as they really just wanted to make. Working your way up the corporate ladder as a Competitor requires that you stop Making and engage in zero sum gamesmanship that doesn’t interest the Maker archetype, but not climbing meant obeying the micromanaging and often incompetent Competitors and having the fun sucked out of the act of Making. Finding organizations that get this right has become so rare that places like Valve and Github are the stuff of legends. Historically, Makers could try going off on their own, but that meant giving up a lot of the Making as well, since they’d then have to worry about their own sales, marketing, accounting, etc.

But the Makers are going to have the last laugh.

“Developernomics”

For Makers that happen to be software developers, times are pretty good, and they’ve been pretty good for years. Most developers complain about how annoying it is that recruiters won’t stop calling them to try to get them to go to interviews for jobs that will pay them more money. Let me repeat that. Developers complain that companies won’t leave them alone when it comes to offering them work. The reason this is happening is that the demand for programming is absolutely exploding as we transition into a world where the juggernaut of endless automation has finally lurched up to ramming speed and is methodically wiping out other types of jobs at an unbelievable rate. Quite simply, the days of any company not being a “software company” are drawing to a close, as described in an article entitled “Developernomics.”

Flush with opportunities, developer job hopping is accelerating rapidly. Companies are ceasing to ask developers why their resumes feature so much job hopping — they’re too hard up for programmers to quibble about it. More and more, developers are seizing on any excuse or any annoyance to fly the coop and go work on something else. The trigger might be organizational stupidities, but it also might simply be something like “I’m tired of C# and want to try out Ruby for a while.” The mobility within the job market for developers is pretty much unprecedented for a Maker position.

In fact, it’s starting to look a lot more like another type of profession that is also not zero sum: high skill jobs for which there is virtually inelastic demand. Two that come to mind are doctors and lawyers. As long as the world has sick people it needs doctors, and as long as there are disputes (and lawyers making laws requiring lawyers for things), it needs lawyers (at least until the automation juggernaut automates both of these professions — I put an upper bound of 50 years on the full automation of the medical profession making human doctors obsolete). Both doctors and lawyers have a different sort of work model than most Competitors and Makers. They get a lot of education and then they start their own practices, band together in small groups, or join a large existing practice. But whichever option they choose, their affiliations tend to be very fluid and their ability to work for themselves almost a given, if they so choose.

Let’s stop for a second now. These high skill knowledge workers are highly in demand, have a great deal of fluidity in their working relationships and association, and often work for themselves. Doesn’t that sound familiar? Doesn’t it sound like software these days where some people freelance, some people bounce around among startups, and others just job hop between larger corporations to get themselves promoted and paid quickly? And doesn’t it seem like more and more developers are working at shops that are just software development consultancies? Isn’t it starting to seem like the question shouldn’t be “is job hopping okay,” but rather be “for how much longer are we going to bother with typical corporate jobs from which to hop?”

I think the handwriting is on the wall for the future of software development. I think that we’re careening toward a future where developers working for corporations for any length of time is such an anachronism that it isn’t considered a serious possibility. Developers aren’t going to job hop at all because they won’t have traditional corporate jobs. Due to increased globalization, networking, and interconnectivity, developers have sort of a de facto guild — their association with the global network of developers. Promotion, marketing, sales, and even some other aspects of managing one’s own consultancies are collectivized to a degree as developers have a network of friends to help them with such things. They often become moot points because who needs to bother with sales and marketing when business is banging down your door already?

So to return to the sports world and metaphor that started all of this, I’d say the future of developers doesn’t involve an ebb in “job hopping” but rather the opposite: a codified establishment of extreme job hopping as the status quo. Developers are going to become free agents like athletes who drift from team to team as their title aspirations and salary negotiations dictate. Like mercenary athletes, developers are not especially interested in things like culture, domain knowledge, corporate slogans and mission statements and all of that company man, corporate identity stuff. They’re interested in challenging projects, learning, making a few bucks and heading out to the next gig. In a previous post in this series, I had said the answer to the question “should I job hop” is “probably.” In the future, I think the answer to this question for developers will be another question: “uh, as opposed to what?”

I’m Hiring, So Here’s a Job Description

Let me start off by saying that I actually want to hire someone. This isn’t a drill nor is it some kind of hypothetical exercise. I have a real, actual need for .NET software developers and if you are interested in such a job and think that it could be a good fit, please let me know (more details to follow). I think that the pool of people that keeps up with software blogs is the best pool of people there is from which to draw candidates, and I’d go ahead and extend this to the people who run in your circles as well. If you care enough to follow blogs about software, you’re probably the kind of person with whom I’d like to work.

So, the first thing I did was to go and see what was going on at CareerBuilder, Dice, et al. so that I could get some ideas for writing up a job description. I was literally on the second one when I remembered how awkward and silly I find most job descriptions. This bad boy was what confronted me:

[Screenshot of a typical job listing]

I was reminded of Blazing Saddles.

Hedley: Qualifications?
Company: NoSQL, Communication Skills, Scala, Communication Skills
Hedley: You said “communication skills” twice.
Company: I like communication skills.

But notwithstanding the double-ups and rambling nature of this and the fact that probably only Horse Recruiter could write it, the ideal candidate has, by my quick count, 27 line items of 1 or more properties. That’s quite a wish list. I’d rather describe what I need like a human being talking to another human being. And, because I’d rather talk like a human being to other human beings, what I’ll start off by doing is describing what we have to offer first, and then what I’m asking in exchange. I think “what we offer” is far too frequently overlooked, and omitting it creates the impression that I think I’m on the Princeton admissions panel for undergraduates, rather than a guy on a team that needs to grow to handle the amount of work we have.

What We Are and What We Offer

Given that this is my personal blog and not affiliated with my 9-5 work, I’m not going to go into a lot of detail about the company, per se. I’ll supply that detail if requested, and you can always look at my LinkedIn profile — it’s no secret what I do or for whom. Instead, I’ll talk about the group and list out what I think makes it appealing.

We’re a small, agile team and a close knit group. We work exclusively in the .NET space, with a mix of database, application and client side code. Working with us, you would definitely touch Winforms, ASP Webforms, and ASP MVC. We are actively porting Winforms/Webforms functionality to an MVC application. Here are some benefits to working here, as I see them:

  • We follow Scrum (not Scrum-but — actual Scrum)
  • We have MSDN licenses, and we upgrade to the latest developer tools as they are released.
  • You will have creative freedom — we’re too busy for micromanagement.
  • There are definitely growth opportunities for those of you looking to go from developer to senior developer or senior developer to architect.
  • We have nice tooling for Visual Studio development, including NCrunch and CodeRush.
  • Everyone on the team gets a Pluralsight subscription because we believe in the value of personal growth and development.
  • Along the same lines, we have bi-weekly lunch and learns.
  • We have core hours and you can flex your schedule around them.
  • There is no bureaucracy here.  You will never have to write 6 design documents to be approved by 4 committees before you start writing useful code.
  • We practice continuous integration and deployment (the latter to a staging environment)
  • We are at a 10.5 on the Joel Test and climbing.
  • Not software related, but we’re located in Chicago within walking distance from a train stop.

Who Would Be a Good Fit?

Let me preface this with the important distinction that there is no “ideal software engineer” and there is no concept of “good enough” or anything silly like that. We need to ship code and do it quickly and well, and I’m going to look at a bunch of resumes and talk to a bunch of people and then I’m going to take a leap of faith on a person or couple of people that I think will best complement the team in its goal of shipping code. That’s really all there is to it. You may be the best developer of all time, and I might get this process wrong and whiff on you as a candidate, and it’s no shortcoming of yours if that happens. I will certainly try my best not to do that, however.

Senior Software Engineer

So, I will now describe what I think would make someone the best fit for our group. First of all, I’ll describe what we’re looking for in terms of a more experienced, “senior level” candidate, and I’ll describe it quite simply in terms of what I’d be thrilled to have someone reporting to me doing and doing well.

  • Contribute immediately to a set of different .NET solutions.
  • Can explain sophisticated concepts to less experienced team members, such as generics, lambda expressions, reflection, closures, etc.
  • Lead by example in terms of writing clean code and explaining principles of writing clean code to other developers.
  • Understand Software Craftsmanship principles like SOLID, unit testing/TDD, good abstractions, DRY, etc., well enough to explain them to less experienced developers.
  • Versed in different architectural patterns, such as layered and “onion” architectures, as well as Domain Driven Design (DDD).
  • Taking responsibility for and improving our build, ALM setup, source control policies (TFS), and deployment procedures.  We have a good start, but there’s so much more we can do!
  • Can write ASP and the C# to support it in Webforms or MVC, but prefers MVC philosophically.
  • Understands client-server architecture and how to work within that paradigm well enough to explain it to developers that don’t.
  • Is comfortable with or can quickly get up to speed with REST web services and SOAP services, if the latter is necessary.
  • Is comfortable with SQL Server tables, schema, views, and stored procedures.
  • Knows and likes Entity Framework or is willing to pick it up quickly.

Software Engineer (Web)

The other profile of a candidate that I’ll describe is a software engineer with a web background. Again, same description context — if I had someone doing these things for me, I’d be very happy.

  • Come in and immediately start on work that’s piled up with CSS and client-side scripting (jQuery, Javascript) that improves the site experience for some of our sites.
  • Understands C# and server side coding well enough to implement run of the mill features with normal language constructs without asking for help.
  • Good at hunting down and isolating issues that may crop up when there are lots of moving parts.
  • Is aware of or is willing to learn about and explain various model binding paradigms both on the server (MVC) and client (various javascript frameworks).
  • Has a flair for design and could help occasionally with custom branding and the UX of websites.
  • Not a huge fan of Winforms, but willing to roll up your sleeves and get dirty if the team needs help with legacy desktop applications on some sprints.
  • Up for picking up and running with a more specialized skill to help out, such as working with T4 templates, customizing TFS workflows, experimenting with and leveraging something like SignalR, etc.
  • Some experience with or else interest in learning unit testing/TDD as well as clean coding/decoupling practices in general.

Logistical Details and Whatnot

So therein are the details of what I’m looking for. It seems like I’m flying in the face of billions of job ads and thousands of horse recruiters with this approach, and far better minds than mine have probably dedicated a lot of consideration to how to solicit candidates to come interview and how to put together the right set of questions about O notation runtime of quicksort and whatnot. But still, I feel like there’s something humanizing to this approach: this is what we have to offer, this is what would benefit our group, and perhaps we could work together.

In terms of location, I prefer to consider people local to the Chicagoland area or those who are willing to relocate here, but telecommute situations are not ipso facto dealbreakers. If you’re interested in one of these positions or someone you know is interested, please send me or have them send me an email at erik at daedtech, and we’ll take it from there. I’m not exactly sure when I’ll start interviewing, as there are still some internal details to hammer out, but I definitely wanted to give the blog-reading set of developers and people I know the first bite of the apple before I started going through the more common and frustrating channels.

And, also, in a more broad philosophical sense, I wanted to try to put my money where my mouth is a bit. After taking potshots in previous posts at job descriptions and interview processes, I thought it’d be the least I could do to put my own approach out there so as not to be the negative guy who identifies problems and offers no solutions.

Good Magic and Bad Magic

Not too long ago, I was working with a web development framework that I had inherited on a project, and I was struggling mightily with it to get it to work. The functionality was not discoverable at all, and it was provided almost exclusively via inheritance rather than composition. Runtime debugging was similarly fruitless, as a great deal of functionality was obfuscated via heavy use of reflection and a lot of “squishing” of the application to fit into a forms-over-data paradigm (binding the GUI right to data sources, for instance). Generally, you would find some kind of prototype class/form to look at, try to adapt it to what you were doing and struggle for a few hours before reverse engineering the fact that you weren’t setting some random property defined in an ancestor class properly. Until you set this string property to “asdffdsa,” absolutely nothing would work. When you finally figured out the answer, the reaction wasn’t one of triumph but indignation. “Really?!? That’s what just ate the last two hours of my life?!?”

I remember a different sort of experience when I started working with Web API. With that technology, I frequently found myself thinking things like “this seems like it should work,” and then, lo and behold, it did. In other words, I’d write a bit of code or give something a name that I figured would make sense in context, and things just kind of magically worked. It was a pretty heady feeling, and comparing these two experiences is a study in contrast.

One might say that this is a matter of convention versus configuration. After all, having to set some weird, non-discoverable string property is really configuration, and a lot of the newer web frameworks, Web API included, rely heavily on convention. But I think it goes beyond that and into the concepts that I’ll call “good and bad magic.” And the reason I say it’s not the same is that one could pretty easily establish non-intuitive conventions and have extremely clear, logical configurations.

When I talk about “magic,” I’m talking about things that happen behind the scenes in a code base or application. This is “magic” in the sense that you can’t spell “automagically” without “magic.” In an MVC or Web API application, the routing conventions and the ways that views and controllers are selected are magic. You create FooController and FooView in the right places, and suddenly, when you navigate to app/Foo, things just work. If you want to customize and change things, you can, but it’s not a battle. By default, it does what it seems like it ought to do. The dark side of magic is the application I described in the first paragraph — the one in which all of the other classes were working because of some obscure setting of a cryptically named property defined in a base class. When you define your own class, everything blows up because you’re not setting this property. It seems like a class should just work, but due to some hidden hocus-pocus, it actually doesn’t.
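To ground the good-magic side, here’s roughly what that convention looks like in Web API (a sketch; the exact URL depends on the default route template):

using System.Web.Http;

// Name the class FooController, name the method Get, and GET requests to
// /api/foo route here automatically. No explicit wiring required.
public class FooController : ApiController
{
    public string Get()
    {
        return "Hello from Foo";
    }
}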

The reason I mention all this is to offer some advice based on my own experience. When you’re writing code for other developers (and you almost always are because, sooner or later, someone besides you will be maintaining your code or invoking it), think about whether your code hides some magic from others. This will most likely be the case if you’re writing framework or utility code, but it can always apply. If your code is, in fact, doing things that will seem magical to others, ask yourself if it’s good magic or bad magic. A good way to answer this question for yourself is to ask yourself how likely you think it will be that you’ll need to explain what’s going on to people that use your code. If you find yourself thinking, “oh, yeah, they’ll need some help — maybe even an instruction manual,” you’re forcing bad magic on them. If you find yourself thinking, “if they do industry standard things, they’ll figure it out,” you might be in good shape.

I say you might be in good shape because it’s possible you think others will understand it, but they won’t. This comes with practice, and good magic is hard. Bad magic is depressingly easy. So if you’re going to do some magic for your collaborators, make sure it’s good magic because no magic at all is better than bad magic.
