DaedTech

Stories about Software


How to De-Brilliant Your Code

Three weeks, three reader questions.  I daresay that I’m on a roll.  This week’s question asks about what I’ll refer to as “how to de-brilliant your code.”  It was a response to this post, in which I offered the idea of a distinction between maintainable code and common code.  The lead-in premise was that of supposedly “brilliant” code, which was code that was written by an ostensible genius and that no one but said genius could understand.  (My argument was/is that this is usually just bad code written by a self-important expert beginner).

The question is as follows, verbatim.

In your opinion, what is the best approach to identify que “brilliant” ones with hard code, to later work on turn brilliant to common?

Would be code review the best? Pair programming (seniors could felt challenged…)?

Now, please forgive me if I get this wrong, but because of the use of “que” where an English speaker might say “which,” I’m going to infer that the question submitter speaks Spanish as a first language.  I believe the question is asking after the best way to identify and remediate pockets of ‘brilliant’ code.  But, because of the ambiguity of “ones,” it could also be asking about identifying humans who tend to write ‘brilliant’ code.  Because of this, I’ll just go ahead and address both.

Einstein Thinking Public Static Void

Find the Brilliant Code

First up is identifying brilliant code, which shouldn’t be terribly hard.  You could gather a quorum of your team together and see if there are pockets of code that no one really understands, or else you could remove the anchoring bias of being in a group by having everyone assess the code independently.  In either case, a bunch of “uh, I don’t get it” probably indicates ‘brilliant’ code.  The group aspect also (probably) guards against a false alarm from an individual who fails to understand simply by virtue of being too much of a language novice (“what’s that ++ after the i variable?”, for instance, indicates that the problem lies with the beholder rather than with the original developer).

But, even better, ask people to take turns explaining what methods do.  If people flounder or disagree, then they obviously don’t get it, self-reporting notwithstanding.  And team members not understanding pockets of code is, ipso facto, a problem.
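To make the “explain it aloud” test concrete, here is a hedged, hypothetical illustration (both functions and their names are invented for this sketch, not drawn from any real codebase).  The first version compiles and works but fails the “tell me what this does” exercise; the second says the same thing plainly.

```python
# A hypothetical 'brilliant' method: technically correct, but try narrating it aloud.
def f(x):
    return [v for i, v in enumerate(x) if not i % 2] and sorted(
        {v for i, v in enumerate(x) if not i % 2}
    )[::-1] or []


# The de-brillianted version: same behavior, but a teammate can explain it in a sentence.
def distinct_even_indexed_descending(values):
    """Return the distinct values at even indexes, sorted in descending order."""
    even_indexed = {value for index, value in enumerate(values) if index % 2 == 0}
    return sorted(even_indexed, reverse=True)
```

Both return the same results; only the second survives a round of “take turns explaining what methods do.”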

An interesting side note at this point: whether this illegible code is “brilliant” or “utter spaghetti” is going to depend a lot more on knowledge of who wrote it than on anything else.  “Oh, Alice wrote that — it’s probably just too sophisticated for our dull brains.  Oh, wait, you were reading the wrong commit, and it’s actually Bob’s code?  Bob’s an idiot — that’s just bad code.”

De-Brilliant The Codebase

Having identified the target code for de-brillianting, flag it somewhere for refactoring: Jira, TFS, that spreadsheet your team uses, whatever.  Just make a note and queue it as work — don’t just dive in and start mucking around in production code, unless that’s a team norm and you have heavy test coverage.  Absent those things, you’re creating risk without buy-in.

Leave these things in the backlog for prioritization on the “eventually” pile, but with one caveat.  If you need to be touching that code for some other reason, employ the boy-scout rule and de-brilliant it, as long as you’re already in there.  First, though, put some characterization tests around it, so that you have a chance of knowing if you’re breaking anything.  Then, do what you need to and make the code easy to read; when you’re done, the same “tell me what this does” question should be easy for your teammates to answer.
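As a sketch of the characterization-test step, assume a hypothetical ‘brilliant’ legacy function (invented here purely for illustration).  Characterization tests pin down what the code currently does, not what it should do, so the later refactoring has a safety net:

```python
# Hypothetical legacy code, invented for illustration -- dense, but currently in production.
def legacy_discount(total, tier):
    return total - total * (0.05, 0.1, 0.2)[min(tier, 2)] if tier else total


# Characterization tests: assert the behavior the code HAS, whatever that turns out to be.
def test_characterize_legacy_discount():
    assert legacy_discount(100.0, 0) == 100.0  # tier 0: no discount, apparently
    assert legacy_discount(100.0, 1) == 90.0   # tier 1: 10% off
    assert legacy_discount(100.0, 2) == 80.0   # tier 2: 20% off
    assert legacy_discount(100.0, 5) == 80.0   # tiers seem to cap at 2
```

With those in place, you can rewrite the function into something readable and know immediately whether you’ve changed its observable behavior.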

De-brillianting the codebase is something that you’ll have to chip away at over the course of time.

De-Brilliant The Humans

I would include a blurb on how to find the humans, but that should be pretty straightforward — find the brilliant code and look at the commit history.  You might even be able to tell simply from behavior.  People who talk about using 4 design patterns on a feature or cramming 12 statements into a loop condition are prime candidates.

The trick isn’t in finding these folks, but in convincing them to stop it.  And that is both simple to understand and hard to do.

During my undergrad CS major many years ago, I took an intro course in C++.  At one point, we had to do a series of pretty mundane review exercises that would be graded automatically by a program (easy things like “write a for loop”).  Not exactly the stuff dreams are made of, so some people got creative.  One kid removed literally every piece of whitespace from his program, and another made some kind of art with indentations.  When people are bored, they seek clever things to do, and the result is ‘brilliant’ code.

The key to de-brillianting thus lies in presenting them with the right challenge, often via constraints or restrictions of some kind.  They do it to themselves otherwise — “I’ll write this feature without using the if keyword anywhere!”
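As a hypothetical example of that self-imposed challenge (the tiers, prices, and names here are invented for illustration), compare an ‘if-free’ version of a calculation with its common counterpart:

```python
# The self-imposed challenge: compute tiered shipping with no 'if' keyword anywhere.
def shipping_cost_clever(weight_kg):
    # Booleans coerce to 0 or 1, so each threshold crossed adds a surcharge.
    return 5 + 10 * (weight_kg > 10) + 20 * (weight_kg > 50)


# The common version: same tiers, spelled out for the next reader.
def shipping_cost_plain(weight_kg):
    if weight_kg > 50:
        return 35
    if weight_kg > 10:
        return 15
    return 5
```

The clever one “wins” the no-if game; the plain one is what a maintainer six months from now will thank you for.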

The Right Motivation

Like I said, a simple solution does not necessarily imply an easy solution.  How does one challenge others into writing the kind of straightforward code that is readable and maintainable?

Code review/pairing presents a possible solution.  Given the earlier “can others articulate what this does” metric, the team can challenge programmer-Einstein to channel that towering intellect toward this purpose.  That may work for some, but other brilliant programmers might not consider that a worthwhile or interesting challenge.

In that case, automated feedback through static analysis might do the trick.  FxCop, NDepend, SonarQube, and others can be installed and configured to steer things in the general direction of readability.  Writing code that complies with all of the warning thresholds of such tools actually presents quite a challenge, since so much of programming is about trade-offs.  Now, a sufficiently determined clever coder could still invent ways to write hard-to-read code, but that would be a much more difficult task when he’d get slapped by the tool for chaining 20 LINQ operations onto a single line or whatever.
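FxCop and NDepend live in the .NET world; as an illustrative analogue from the Python ecosystem (the specific threshold values below are my invention, not tool defaults), a flake8 configuration can cap exactly the kinds of cleverness described above:

```ini
# Illustrative flake8 settings -- threshold values chosen for this sketch, not defaults.
[flake8]
# Discourages cramming a long call chain onto one line.
max-line-length = 100
# Enables the bundled mccabe complexity check; flags densely branched 'clever' methods.
max-complexity = 10
```

The point isn’t the particular numbers; it’s that the feedback is automatic, impartial, and arrives before a human reviewer has to argue about taste.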

Of course, probably the best solution is to work with the sort of people who recognize that demonstrating their cleverness takes a backseat to being a professional.  They can do that in their spare time.

If you have a question you’d like to hear my opinion on, please feel free to submit.


How Collaboration Humanizes the Enterprise

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original, here, at their site.  While you’re there, take a look at their product offering and blog posts by their various authors.

I’ve spent enough time walking the halls of large-ish to massive organizations to have formed some opinions and made some observations. If I had to characterize the motor that drives these beasts, I’d say it is composed of two main components: intense risk aversion and an obsession with waste elimination that sits somewhere on a spectrum between neurotic and quixotic.

Battleship

Let me explain.

Large organizations all started as small organizations, and all of them survived organizational accretion, besting competitors, dodging bad breaks, and successfully scaling to the point where they have a whole lot to lose. It is this last concern that drives the risk aversion; upstarts are constantly trying to unseat them and rent-seekers trying to sue them because they’re sitting on a pretty nice pot of gold. It is the scaling that motivates the waste elimination. Nothing scales perfectly and few things scale well, so organizations have to become insanely efficient in a number of ways to combat the natural downturns in efficiency caused by scale.

Risk Minimizing and Process Efficiency

With risk minimizing and waste elimination as near-universal goals, organizations tend to do some fairly predictable and recognizable things. Risk elimination takes the form of checks and balances, with pockets of “need to know” being created to isolate problems. For instance, a “security compliance” group may be created to review work product independently from the people producing the work, and it may even go so far as to seek outside certification. This sort of orthogonality and redundancy make it less likely the organization will be sued or compromised.

Unfortunately, though, redundancy exacerbates the other struggle, which is to eliminate waste in the name of efficiency. It’s not exactly efficient to have two separate groups spend time going over the same product and doing the same things, but with slightly altered focus. The organization compensates for this with hyper-specialty and process.

Read More


The Case for the NDepend Dashboard Feature

Editorial Note: I originally wrote this post for the NDepend blog.  Check out the original post here, at the site.  While you’re there, check out NDepend — it’s my go-to tool for the codebase assessments that I do as part of my consulting practice.

If you hang around agile circles long enough, you’re quite likely to hear the terms “big, visible chart” and “information radiator.”  I think both of these loosely originate from the general management concept that, to demonstrably improve something, you must first measure and track it.  A “big, visible chart” is information that an individual or team displays in, well, big and visible fashion.  An information radiator is more or less the same concept (I’m sure it’s possible for someone who is an 8th degree agile black belt to sharp-shoot this, but I think you’d be hard pressed to argue that this isn’t the gist).

Big, Visible Information Radiators

As perhaps the most ubiquitous example imaginable, consider the factory sign that proudly displays, “____ days since the last accident,” where, hopefully, the blank contains a large number.  A sign like this is far from a feel-good vanity metric; it actually alters behavior.  Imagine a factory where lots of accidents happen.  Line managers can call meetings and harp on the importance of safety, but probably to limited effect.  After all, the prospect of a floor incident is abstract, particularly for someone to whom it hasn’t ever happened.

But if you start putting a number on it, the concept becomes less abstract.  “Currently we have an incident every day, but we want to try to make it happen only once per month, and we’re going to keep track.”  Now, each incident means that the entire factory fails each and every day, and it does so visibly.  Incidents go from “someone else’s problem that you hear about anecdotally from time to time” to “the thing that’s making us fail visibly.”  And then you’ll find that doing nothing but making the number very visible will actually serve to alter behavior — people will be more careful so as not to be responsible for tanking the team’s metrics.

0 Days Since Last Accident

In the world of agile, the earliest and most common bit of information to see was the team’s card wall: which features were in progress, which were being tested, which were complete, and who was working on what.  This served double duty of creating public visibility/accountability and providing an answer to the project manager’s “whatcha doin?” without interruptions or mind-numbing status meetings.  But times and technologies progressed, resulting in other information being visible to the team at all times.

These days, it’s common to see a big television or monitor located near a team, displaying the status of the team’s code on the build machine.  Jenkins is a tool very commonly used for this, and it will show you projects in red for failing and green for all good.  If you want to get creative, you can use home automation tech to have red or green lamps turn on and off.  For the team, this is a way of exposing broken builds as a deficiency and incentivizing team members to keep the build in a consistently passing state.
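A minimal radiator along these lines can be a small script polling Jenkins’ JSON API (the /lastBuild/api/json endpoint is real Jenkins; the server URL and job name below are hypothetical placeholders for your own instance):

```python
import json
import urllib.request


def status_color(result):
    """Map a Jenkins build result string to the color a radiator lamp would show."""
    # Jenkins reports None for an in-progress build, hence the gray fallback.
    return {"SUCCESS": "green", "FAILURE": "red", "UNSTABLE": "yellow"}.get(result, "gray")


def fetch_last_build_result(base_url, job_name):
    """Ask Jenkins for the result of a job's most recent build."""
    url = f"{base_url}/job/{job_name}/lastBuild/api/json"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["result"]  # e.g. "SUCCESS", "FAILURE", or None


if __name__ == "__main__":
    # Hypothetical server and job name; point these at your own Jenkins instance.
    result = fetch_last_build_result("http://jenkins.example.com:8080", "team-build")
    print(status_color(result))
```

From there it’s a short hop to driving an actual lamp with the home automation gear of your choice.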

Read More


Surviving The Dreaded Company Framework

This week, I’m making it two in a row with a reader question.  Today’s question is about an internal company framework and how to survive it.  Well, it’s actually about how to work effectively with it, but the options there are sort of limited to “get in there and fix it yourself” and “other” when the first option is not valid (which it often is not).

How does one work effectively with a medium to large sized company-internal framework? The framework has one or two killer features, but overall is poorly abstracted and [released] with little QA.

I think the best way for me to tackle this question is first to complain a little about Comcast, my erstwhile cable company, and then come around to the point.  They recently explained, in response to one of my occasional Comcast rage-tweets, “[The] promotional pricing is intended to offer you the services at a reduced rate, in the hopes that you enjoy them enough to keep them.”

This is, in a nutshell, the Comcast business model — it was an honest and forthright tweet (unlike the nature of the business model itself).  What Comcast does is reminiscent of what grocery stores do when they flood the local shopping magazines with coupons: differential pricing.  Differential pricing is a tactic wherein you charge different rates for the same product to different customers, generally on the basis of charging more to people who are not price-averse.

The trouble is that outright differential pricing is usually illegal, so companies have to get creative.  With the grocery store, you can pay full price if you’re generally too busy for coupons, but if you load up with serious couponing, you can get that same grocery trip for half the price or less.  They can’t say, “you look rich so that’ll be $10.00 instead of $5.00” but they can make the thing $10.00 and serially discount it to $5.00 for people willing to jump through hoops.

Comcast does this same thing, but in a sneakier way.  They advertise “promotions” and “bundles” to such an extent that these appear to be the normal prices for things, which encourages people to sign up.  Then, after some time you’ll never keep track of and never be reminded of, the “regular” price kicks in.  For me, recently, this turned out to be the comical price of $139 per month for 24 Mbps of internet.


When you call them to ask why your most recent bill was for $10,928 instead of $59.99, they’ll say “oh noes, too bad, it looks like your bundle expired!”  And this is where things take a turn for the farcical.  You can ask them for another bundle, and they’ll offer to knock your monthly bill down to $10,925.  If you want to secure the real savings, you have to pretend for a while to be canceling your service, get transferred over to the “retentions department,” and then and only then will you be offered to have your service returned to a price that isn’t absolutely insane.  I suspect that Comcast makes a lot of hay on the month or two that you get billed before you call up and do that again, because the ‘normal’ prices are equal parts prohibitive and laughable.

Why am I mentioning all this?  Well, it’s because when the time came for my most recent annual Comcast gouge ‘n’ threaten, things got a little philosophical.  I wound up on the phone with an agent to whom I confessed I was sick of this stupid charade.  Instead of arguing with me, he said something along the lines of, “yeah, it’s pretty ridiculous.  Before I started working here, I used to hate calling up and threatening them every year, but the thing is, all of the other companies do it too.”  This was either a guy being refreshingly honest, or a really shrewd customer service tactic or, perhaps, both.

But the interesting message came through loud and clear.  In my area, if you want TV and internet, it’s Comcast or it’s AT&T.  And if both of them behave this way, it goes to show you the power of a monopoly (or a nominally competing cartel).  Their motto is, in essence, “Comcast: it’s not like you have a choice.”

Read More


Does GitHub Enhance the Need for Code Review?

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  Take a look around while you’re there and check out their products and other posts.

In 1999, a man named Eric S. Raymond published a book called, “The Cathedral and the Bazaar.”  In this book, he introduced a pithy phrase, “given enough eyeballs, all bugs are shallow,” that he named Linus’ Law after Linux creator Linus Torvalds.  Raymond was calling out a dichotomy that existed in the software world of the 1990s, and he was throwing his lot in with the heavy underdog at the time, the bazaar.  That dichotomy still exists today, after a fashion, but Raymond and his bazaar are no longer underdogs.  They are decisive victors, thanks in no small part to a website called GitHub.  And the only people still duking it out in this battle are those who have yet to look up and realize that it’s over and they have lost.

Cathedral

Cathedrals and Bazaars in the 1990s

Raymond’s cathedral was heavily-planned, jealously-guarded, proprietary software.  In the 1990s, this was virtually synonymous with Microsoft, but it certainly included large software companies, relational database vendors, shrink-wrap software makers, and just about anyone doing it for profit.  There was a centrally created architecture, and it was executed in top-down fashion by all of the developer cogs in the for-profit machine.  The software would ship maybe every year, and in the run-up to that time, the comparatively few developers with access to the source code would hunt down as many bugs as they could ahead of shipping.  Users would then find the rest, and they’d wait until the next yearly release to see fixes (or, maybe, they’d see a patch after some months).  The name “cathedral” refers to the irreducible nature of a medieval cathedral — everything is intricately crafted in all-or-nothing fashion before the public is admitted.

The bazaar, on the other hand, was open source software, represented largely at the time by Linux and Apache.  The source code for these projects was, obviously, free for all to look at and modify over the nascent internet.  Releases there happened frequently, and the work was crowd-sourced as much as possible.  When bugs were found following a release, the users could and did hunt them down, fix them, and push the fix back to the main branch of the source code very quickly.  The cycle time between discovery and correction was much, much smaller.  This model was called the bazaar because of the comparatively bustling, freewheeling nature of the cooperation; it resembled a loud, spontaneously organized marketplace that was surprisingly effective at regulating commerce.

Read More