DaedTech

Stories about Software


Is There a Correct Way to Comment Your Code?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look at all of the visualizations and metrics that you can get about your codebase.

Given that I both consult and do a number of public things (like blogging), I field a lot of questions.  As a result, the subject of code comments comes up from time to time.  I’ll offer my take on the correct way to comment code.  But remember that I am a consultant, so I always have a knee-jerk response to say that it depends.

Before we get to my take, though, let’s go watch programmers do what we love to do on subjects like this: argue angrily.  On the subject of comments, programmers seem to fall roughly into two camps.  These include the “clean code needs no comments” camp and the “professionalism means commenting” camp.  To wit:

Chances are, if you need to comment then something needs to be refactored. If that which needs to be refactored is not under your control then the comment is warranted.

And then, on the other side:

If you’re seriously questioning the value of writing comments, then I’d have to include you in the group of “junior programmers,” too.  Comments are absolutely crucial.

Things would probably go downhill from there fast, except that people curate Stack Overflow against overt squabbling.

Splitting the Difference on Commenting

Whenever two sides entrench on a matter, diplomats of the community seek to find common ground.  When it comes to code comments, this generally takes the form of adages about expressing the why in comments.  For example, consider this pithy rule of thumb from the Stack Overflow thread.

Good programmers comment their code.

Great programmers tell you why a particular implementation was chosen.

Master programmers tell you why other implementations were not chosen.

Jeff Atwood has addressed this subject a few different times.

When you’ve rewritten, refactored, and rearchitected your code a dozen times to make it easy for your fellow developers to read and understand — when you can’t possibly imagine any conceivable way your code could be changed to become more straightforward and obvious — then, and only then, should you feel compelled to add a comment explaining what your code does.

Junior developers rely on comments to tell the story when they should be relying on the code to tell the story.

And so, as with any middle-ground compromise, both entrenched sides have something to like (and hate).  Thus, you might say that whatever consensus exists among programmers, it leans toward a “correct way” that involves commenting about why.
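To make that distinction concrete, consider a quick sketch.  The class and the business rules in it are invented purely for illustration, but they show the difference between a comment that restates what the code does and one that captures why the implementation looks the way it does.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical example: the discount rate and the billing-system constraint
// are invented for illustration.
public class InvoiceCalculator {

    // "What" comment: it just restates the code.
    // Multiply the total by 0.98.
    public double applyEarlyPaymentDiscount(double total) {
        return total * 0.98;
    }

    // "Why" comment: it records reasoning the code cannot express.
    // We round HALF_EVEN because the downstream billing system does the same,
    // and mismatched rounding caused reconciliation errors in the past.
    // Don't "simplify" this to Math.round() without checking with billing.
    public double roundForBilling(double amount) {
        return BigDecimal.valueOf(amount)
                .setScale(2, RoundingMode.HALF_EVEN)
                .doubleValue();
    }
}
```

The first comment will rot the moment someone changes the multiplier.  The second records reasoning that no amount of refactoring could make the code express on its own.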

Read More


How to Evaluate Software Quality from Source Code

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their Retrace offering that gives you everything you need to track down production issues.

I’ll understand if you read the title of this post and smirked.  I probably would have done so, opening it up only to see what profound wisdom awaited me.  Review the code, Captain Obvious.  

So yes, rest assured, I understand the easy assumption that one can ascertain a codebase’s quality by opening it up and starting to review it.  But what does this really tell you?  What comes out of this activity?  Simply put, your opinion of the codebase’s quality comes out of this activity.

I actually have a consulting practice doing custom static analysis on client codebases.  I help managers and executives make strategic decisions about their applications.  And I do this largely by treating their code as data and building numerically based cases.

Initially, the idea for this practice arose out of some observations I’d made a while back.  I watched consultants tasked with critical evaluations of codebases, and I found that they did exactly what I mentioned in the first paragraph.  They reviewed it, operating from the premise, I’m an expert, so I’ll record my anecdotal impressions and offer them as evidence.  That put a metaphorical pebble in my shoe and bothered me.  So I decided to chase a more empirical concept of code quality with my practice.

Don’t get me wrong.  The proprietary nature of source code and outcome data in the industry makes truly scientific experiments difficult.  But I can still automate the inquiries and use actual, relative data to compare properties.  So from that perspective, I’ll offer you more data-driven ways to evaluate software quality from source code.
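To give you a feel for what “treating code as data” means in practice, here’s a deliberately simplistic sketch.  The 500-line threshold and the use of file length as a quality proxy are my own stand-ins for illustration, not what a real assessment would rely on, but even this toy version produces a relative number you can compare across modules or codebases instead of an anecdotal impression.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Toy "code as data" scan: counts the lines in each .java file under a root
// directory and flags the big ones.  The 500-line threshold is arbitrary.
public class SourceSizeScan {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(p -> p.toString().endsWith(".java"))
                 .forEach(SourceSizeScan::reportIfLarge);
        }
    }

    private static void reportIfLarge(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            long count = lines.count();
            if (count > 500) {
                System.out.printf("%5d lines  %s%n", count, file);
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file);
        }
    }
}
```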

Read More


How to Evaluate Software Quality from the Outside In

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at Prefix and Retrace, for all of your prod support needs.

In a sense, application code serves as the great organizational equalizer.  Large or small, complex or simple, enterprise or startup, all organizations with in-house software wrangle with similar issues.  Oh, don’t misunderstand.  I realize that some shops write web apps, others mobile apps, and still others a hodgepodge of line-of-business software.  But when you peel away domain, interaction, and delivery mechanism, software is, well, software.

And so I find myself having similar conversations with a wide variety of people, in a wide variety of organizations.  I should probably explain just a bit about myself.  I work as an independent IT management consultant.  But these days, I have a very specific specialty.  Companies call me to do custom static analysis on their codebases or application portfolios, and then to present the findings to leadership as the basis for important strategic decisions.

As a simple example, imagine a CIO contemplating the fate of a 10-year-old Java codebase.  She might call me and ask, “Should I evolve this to meet our upcoming needs, or would starting from scratch prove more cost-effective in the long run?”  I would then do an assessment where I treated the code as data and quantified things like dependence on outdated libraries (as an over-simplified example).  From there, I’d present a quantitatively-driven recommendation.
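As an equally over-simplified sketch of that kind of quantification, imagine counting how many source files import a library you’ve flagged as outdated.  The package prefix below is just a placeholder, and a real assessment would lean on proper dependency tooling, but the output hands the CIO a percentage rather than a gut feeling.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Rough illustration: what fraction of source files import a library that
// has been flagged as outdated?  The package prefix is a placeholder.
public class OutdatedDependencyCount {
    private static final String OUTDATED_IMPORT = "import org.example.legacylib";

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        List<Path> javaFiles;
        try (Stream<Path> paths = Files.walk(root)) {
            javaFiles = paths.filter(p -> p.toString().endsWith(".java"))
                             .collect(Collectors.toList());
        }

        long affected = javaFiles.stream()
                                 .filter(OutdatedDependencyCount::usesOutdatedLibrary)
                                 .count();

        double percent = javaFiles.isEmpty() ? 0.0 : 100.0 * affected / javaFiles.size();
        System.out.printf("%d of %d files depend on the outdated library (%.1f%%)%n",
                affected, javaFiles.size(), percent);
    }

    private static boolean usesOutdatedLibrary(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.anyMatch(line -> line.trim().startsWith(OUTDATED_IMPORT));
        } catch (IOException e) {
            return false;
        }
    }
}
```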

So you can probably imagine how I might call code a great equalizer.  It may do different things, but it has common structural underpinnings that I can quantify.  When I show up, it also has another commonality.  Something about it prompts the business to feel dissatisfied.  I only get calls when the business has experienced bad outcomes as measured by software quality from the outside in.

Read More


If You Automate Your Tests, Automate Your Code Review

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right.

For years, I can remember fighting the good fight for unit testing.  When I started that fight, I understood a simple premise.  We, as programmers, automate things.  So, why not automate testing?

Of all things, a grad school course in software engineering introduced me to the concept back in 2005.  It hooked me immediately, and I began applying the lessons to my work at the time.  A few years and a new job later, I came to a group that had not yet discovered the wonders of automated testing.  No worries, I figured, I can introduce the concept!

Except, it turns out that people stuck in their ways kind of like those ways.  Imagine my surprise to discover that people turned up their noses at the practice.  Over the course of time, I learned to plead my case, both in technical and business terms.  But it often felt like wading upstream against a fast-moving current.

Years later, I have fought that fight over and over again.  In fact, I’ve produced training materials, courses, videos, blog posts, and books on the subject.  I’ve brought people around to see the benefits and then subsequently realize those benefits following adoption.  This has brought me satisfaction.

But I don’t do this in a vacuum.  The industry as a whole has followed the same trajectory, using the same logic.  I count myself just another advocate among a chorus of voices.  And so our profession has generally come to accept unit testing as a vital tool.

Widespread Acceptance of Automated Regression Tests

In fact, I might go so far as to call acceptance and adoption quite widespread.  That number only increases if you include shops that totally mean to and will definitely get around to it, like, sometime in the next six months or something.  In other words, if you count both shops that have adopted the practice and shops that feel as though they should, acceptance certainly reaches at least a plurality.

Major enterprises bring me in to help them teach their developers to do it.  Other companies consult with me, asking questions about it.  Just about everyone wants to understand how to realize the unit testing value proposition of higher quality, more stability, and fewer bugs.

Those conversations take a simple form.  We talk about unit testing and other forms of testing, and sometimes the lines blur.  But let’s get specific here.  A holistic testing strategy includes tests at a variety of granularities.  These comprise what some call “the test pyramid.”  Unit tests address individual components (e.g. classes), while service tests drive at the way the components of your application work together.  GUI tests, the least granular of all, exercise the whole thing.

Taken together, these comprise your regression test suite.  It stands guard against the category of bugs known as “regressions,” or defects where something that used to work stops working.  For a parallel example in the “real world,” think of the warning lights on your car’s dashboard.  The “low battery” light comes on because the battery, which used to work, has stopped working.
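To ground the bottom layer of that pyramid, here’s a minimal sketch of a unit test, assuming JUnit 5 and an invented Calculator class.  The point is only that it exercises a single component in isolation, so if someone later breaks that component, the suite flags the regression long before a service test, GUI test, or customer would.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical component under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// A unit test sits at the base of the test pyramid: it exercises one class
// in isolation.  If a later change breaks add(), this test catches the
// regression immediately.
class CalculatorTest {
    @Test
    void addReturnsSumOfTwoIntegers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}
```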

Read More


Characteristics of Good APIs

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look at their monitoring solutions.

The term “API” seems to present something of a Rorschach test for software developers.  Web developers think of APIs as REST endpoints and WSDLs.  In contrast, desktop developers may think of APIs as “library code” written by developers from other organizations.  Folks writing low-level code, like drivers?  An API provides the hooks into OS system calls.

I understand this mild myopia, but the bigger picture matters.  The term “API” invites a much more generic way of thinking.  I personally like the definition from “tech terms.”

An API is a set of commands, functions, protocols, and objects that programmers can use to create software or interact with an external system. It provides developers with standard commands for performing common operations so they do not have to write the code from scratch.

An API defines how other programmers interact with your software.  Thus, each of the personas I mentioned has the right answer, but only a small slice of it.  As you write code, ask yourself about the user of that code.  If that user interacts with your code via code of their own, you’re building an API.  And before you ask, yes, that even applies to other developers in your company or even your team who use your code.
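To put that in concrete terms, here’s a small hypothetical example.  The names are invented, but even a modest class like this one, consumed by a teammate, constitutes an API, and its public surface is the contract they program against.

```java
import java.util.Optional;

// Hypothetical in-house API: the public surface is the contract that other
// developers (on your team or elsewhere) program against.
public class CustomerDirectory {

    // Public data type that callers consume.
    public record Customer(String id, String name, String email) { }

    /**
     * Looks up a customer by email.  Returns an empty Optional rather than
     * null when nothing matches, so callers never need a defensive null check.
     */
    public Optional<Customer> findByEmail(String email) {
        // Implementation detail; free to change without breaking callers.
        return Optional.empty();
    }
}
```

A teammate consuming this class sees only the signatures and the Customer type; the lookup logic behind findByEmail can change freely as long as that surface holds.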

Consequently, understanding properties of good APIs carries vital importance for our industry.  It makes the difference between others building on your work or avoiding it.  And, let’s be honest — there’s a certain amount of professional pride at stake.  So what makes APIs good?  Let’s consider some characteristics of good APIs.

Read More