DaedTech

Stories about Software


Transitioning from Manual to Automated Code Review

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right.

I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  “There is no substitute for peer review.”

Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

Today I want to talk about how one could replace manual code review entirely with automated code review, should the need arise.

Why Would The Need for This Arise?

You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise, in particular, probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, “we’ve decided we don’t value the activity.”

First and most egregiously, a team’s manager might come along with an eye toward cost savings.  “I need you to spend less time reading code and more time writing it!”  In that case, you’ll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn’t micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

But let’s consider less unfortunate situations.  Perhaps you currently work on a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or, perhaps you work for a larger team, but everyone gradually becomes so busy and fragmented in responsibility that no one has time for much manual peer review.

In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.
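
If you do head down that road, the key lies in making the automated review a real gate rather than a suggestion.  Here’s a minimal sketch of what that can look like in a build pipeline, assuming a hypothetical command-line analyzer that emits JSON (the “analyzer” command and its output format are assumptions for illustration, not any real tool’s interface):

```python
# Hypothetical CI gate: run a command-line static analyzer and fail the
# build if it reports violations.  The "analyzer" command and its JSON
# output format are assumptions for illustration, not a real tool's API.
import json
import subprocess
import sys

MAX_VIOLATIONS = 0  # with no human reviewers, let the machine be strict


def main() -> int:
    # Assume the analyzer prints a JSON array of violation objects.
    result = subprocess.run(
        ["analyzer", "--format", "json", "src/"],
        capture_output=True,
        text=True,
    )
    violations = json.loads(result.stdout or "[]")
    for v in violations:
        print(f"{v['file']}:{v['line']}: {v['message']}")
    if len(violations) > MAX_VIOLATIONS:
        print(f"Build failed: {len(violations)} violation(s) found.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The important design choice here: the script returns a non-zero exit code, so the build server treats unaddressed violations exactly the way a human reviewer would: as a blocker.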



Things Everyone Forgets Before Committing Code

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look around at some of the other posts as well.

Committing code involves, in a dramatic sense, two universes colliding.  Firstly, you have the universe of your own work and metaphorical workbench.  You’ve worked for some amount of time on your code, hopefully in a state of flow.  And secondly, you have the universe of the team’s communal work product.  And so when you commit, you force these universes together by foisting your recent work on the team.

In bygone years, this created far more heartburn for the average team than it does today.  Barbaric as it may seem, I can actually remember a time when some professional software developers didn’t use source control.  A “commit” thus involved literally overwriting a file on a shared drive, obliterating all trace of the previous version.  (Sometimes, you might create a backup copy of the folder).  Here, your universe actually kind of ate the team’s communal universe.

More Frequent Commits, Fewer Problems

But, even in the earliest days of my career, lack of source control represented sloppy process.  I remember installing the practice in situations that lacked it.  But even with source control in place, people tended to go off and code in their own world for weeks or even months during feature development.  Only when release time neared did they start to have what the industry affectionately calls “merge parties,” wherein the team would spend days or weeks sorting out all of the instances where their changes trampled one another’s.

In the intervening years, the industry has learned the wisdom of continuous integration (CI).  CI builds on the premise, “if it hurts, do it more,” by encouraging frequent, lower stakes commits.  These days, most teams commit on the order of hours, rather than weeks or months.  This significantly lowers the pain of universes colliding.

But it doesn’t eliminate the problem altogether, even in teams that live the CI dream.  No matter how frequently you do it and how sophisticated the workflows around modern source control become, you still have the basic problem of putting your stuff into the team’s universe.  And this comes with the metaphorical risk of leaving your tools lying around where someone can trip over them.

So today, let’s take a look at some of the most common things everyone forgets before committing code.  And, for the purposes of this post, I’ll remain source control agnostic, with “commit” meaning, generally, to sync your files with the team’s.
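
Many of these forgets lend themselves to automation.  As a minimal sketch, here’s a git pre-commit hook in Python that scans staged files for a few suspect markers.  The specific markers are illustrative assumptions; substitute whatever your own team forgets most often.

```python
#!/usr/bin/env python3
# Minimal sketch of a git pre-commit hook (save as .git/hooks/pre-commit
# and mark it executable).  The markers below are illustrative; pick the
# things your own team most often forgets to clean up.
import subprocess
import sys

# Leftovers that usually mean "I forgot to tidy up before committing."
SUSPECT_MARKERS = ["<<<<<<<", ">>>>>>>", "TODO: remove", 'Console.WriteLine("debug']


def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]


def main() -> int:
    problems = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue  # unreadable or gone; let git sort it out
        for marker in SUSPECT_MARKERS:
            if marker in text:
                problems.append(f"{path}: contains '{marker}'")
    if problems:
        print("Commit blocked:")
        print("\n".join(problems))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```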



Static Analysis Issue Management Gets a Boost

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, have a look at the features in NDepend’s latest version.

Years ago, I led a team of software developers.  We owned an eclectic portfolio of software real estate.  It included some Winforms, Webforms, MVC, and even a bit of WPF sprinkled into the mix.  And, as with any eclectic neighborhood, the properties came in a variety of ages and states of repair.

Some of this code depended on a SQL Server database that had a, let’s just say, casual relationship with normalization.  Predictably, this caused maintenance struggles.  But, beyond that, it caused a credibility gap when we spoke to non-technical stakeholders.  “What do you mean you can’t give a definitive answer to how many sales we made last year?”  “Well,” I’d try to explain, “I can’t say for sure because the database doesn’t explicitly define the concept of a sale.”

Flummoxed by the mutual frustration, I tried something a bit different.  Since I couldn’t easily explain the casual, implied relationships in the database, I decided to do a show-and-tell.  First, I went out and found a static analyzer for database schemas.  Then, I brought in some representative stakeholders and said, “watch this.”  With a flourish (okay, not really), I turned the analyzer loose on the schema.

While they didn’t grok my analogies, the tens of thousands of warnings and errors made an impression.  In fact, it sort of terrified them.  But this did bridge the credibility gap and show them that we all had some work to do.  Mission accomplished.
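
For a flavor of what terrified them, consider one of the simplest findings such an analyzer reports: tables with no primary key.  Here’s a toy version of that single check, assuming SQL Server and the pyodbc library (the connection string is a placeholder, and this sketch is illustrative, not the actual tool I used):

```python
# Toy schema check: flag every table that lacks a primary key, a classic
# normalization smell.  Assumes SQL Server; the connection string below
# is a placeholder, and this sketch is an illustration, not the tool I used.
import pyodbc

QUERY = """
SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES AS t
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS c
    ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
   AND c.TABLE_NAME = t.TABLE_NAME
   AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL
"""

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=your-server;DATABASE=your-db;"
                      "Trusted_Connection=yes;")
for schema, table in conn.cursor().execute(QUERY):
    print(f"WARNING: {schema}.{table} has no primary key")
```

Multiply that by every check in the tool’s arsenal, and you can see where tens of thousands of warnings come from.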

Static Analyzer Issues

I engaged in something of a relationship hack with my little ploy.  You see, I knew how this static analyzer would behave because I know how all of them tend to behave.  They earn their keep by carpet-bombing your codebase with violations and warnings.  Out of the box, they overwhelm, and then they leave it to you to dial it back.  Truly, you can take this behavior to the bank.

So I knew that this creaky database would trigger thousands upon thousands of violations.  And then I just sat back waiting for the “magic” to happen.

I mention all of this to paint a picture of how static analyzers typically regard the concept of “issue.”  All categories of severity and priority generally roll up into this catch-all term, and it then refers to the itemized list of everything.  Your codebase has issues, and it has lots of them.  This is how the tool earns its mind share and keep: by proving how much it can surface, and then doing so.

Thus you might define the concept simply as “all that stuff the static analyzer finds.”
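
If you modeled that concept in code, a minimal sketch might look like the following.  The severity names and the notion of a “floor” for filtering are illustrative assumptions; every tool defines its own taxonomy, but the shape stays the same: one big list, and dialing it back means filtering that list down.

```python
# Sketch of "issue" as a catch-all: every severity rolls up into one big
# list, and dialing the tool back means filtering that list.  The severity
# names here are illustrative; each tool defines its own taxonomy.
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    INFO = 1
    WARNING = 2
    ERROR = 3
    CRITICAL = 4


@dataclass
class Issue:
    rule: str
    location: str
    severity: Severity


def dial_back(issues: list[Issue], floor: Severity) -> list[Issue]:
    """Keep only the issues at or above a severity floor."""
    return [i for i in issues if i.severity >= floor]


issues = [
    Issue("NamingConvention", "Customer.cs:12", Severity.INFO),
    Issue("UnusedVariable", "Order.cs:40", Severity.WARNING),
    Issue("NullDereference", "Billing.cs:88", Severity.CRITICAL),
]

# Out of the box, you see all three.  After dialing back, only the one
# that genuinely demands attention remains.
for issue in dial_back(issues, Severity.CRITICAL):
    print(issue)
```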



How to Evaluate Software Quality from the Outside In

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at Prefix and Retrace, for all of your prod support needs.

In a sense, application code serves as the great organizational equalizer.  Large or small, complex or simple, enterprise or startup, all organizations with in-house software wrangle with similar issues.  Oh, don’t misunderstand.  I realize that some shops write web apps, others mobile apps, and still others a hodgepodge of line-of-business software.  But when you peel away domain, interaction, and delivery mechanism, software is, well, software.

And so I find myself having similar conversations with a wide variety of people, in a wide variety of organizations.  I should probably explain just a bit about myself.  I work as an independent IT management consultant.  But these days, I have a very specific specialty.  Companies call me to do custom static analysis on their codebases or application portfolios, and then to present the findings to leadership as the basis for important strategic decisions.

As a simple example, imagine a CIO contemplating the fate of a 10-year-old Java codebase.  She might call me and ask, “should I evolve this to meet our upcoming needs, or would starting from scratch prove more cost-effective in the long run?”  I would then do an assessment where I treated the code as data and quantified things like dependence on outdated libraries (as an over-simplified example).  From there, I’d present a quantitatively-driven recommendation.
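
To make “treating the code as data” slightly more concrete, here’s an over-simplified sketch of quantifying dependence on outdated libraries.  The manifest format and the version numbers are invented for illustration; a real assessment would read the actual package manager’s files and a maintained version feed.

```python
# Over-simplified sketch of "code as data": what fraction of a codebase's
# dependencies sit behind the latest known version?  The manifest format
# and the version data below are invented for illustration only.
KNOWN_LATEST = {"log4j": "2.24", "commons-lang": "3.17", "guava": "33.0"}


def parse_manifest(lines: list[str]) -> dict[str, str]:
    """Parse hypothetical 'name=version' lines into a dict."""
    deps = {}
    for line in lines:
        name, _, version = line.strip().partition("=")
        if name and version:
            deps[name] = version
    return deps


def outdated_ratio(deps: dict[str, str]) -> float:
    """Fraction of recognized dependencies pinned behind the known latest."""
    known = {n: v for n, v in deps.items() if n in KNOWN_LATEST}
    if not known:
        return 0.0
    stale = sum(1 for n, v in known.items() if v != KNOWN_LATEST[n])
    return stale / len(known)


manifest = ["log4j=1.2", "commons-lang=3.17", "guava=19.0"]
ratio = outdated_ratio(parse_manifest(manifest))
print(f"{ratio:.0%} of recognized dependencies are outdated")  # -> 67%
```

A number like that, tracked across an application portfolio, turns a vague sense of “this codebase feels old” into something leadership can actually weigh.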

So you can probably imagine how I might call code a great equalizer.  It may do different things, but it has common structural underpinnings that I can quantify.  When I show up, it also has another commonality.  Something about it prompts the business to feel dissatisfied.  I only get calls when the business has experienced bad outcomes as measured by software quality from the outside in.



What is Real User Monitoring?

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, have a look around at their assortment of monitoring solutions.

Perhaps you’ve heard the term “real user monitoring” in passing.  Our industry generates no shortage of buzzwords, many of them vague and context-dependent.  So you could be forgiven for scratching your head at this term.

Let’s go through it in some detail, in order to provide clarity.  But to do that, I’m going to walk you through the evolution of circumstance that created a demand for real user monitoring.  You can most easily understand a solution by first understanding the problem that it solves.

A Budding Entrepreneur’s Website

Let’s say that the entrepreneurial bug bites you.  You decide to build some kind of software as a service (SaaS) product.  Obviously, you need some time to build it and make it production ready, so you pick a target go-live date months from now.

But you know enough about marketing to know that you should start building hype now.  So, you put together a WordPress site and start a blog, looking to build a following.  Then, excited to get going, you make a series of posts.

And then, nothing.  I mean, you didn’t expect to hit the top of Hacker News, but you expected… something.  No one comments on social media or emails you to congratulate you or anything at all.

Frustrated, you decide to add a commenting plugin and some social media share buttons.  This, you reason, will provide a lower friction means of offering feedback.  And still, absolutely nothing.  Now you begin to wonder whether your hosting provider has played a cruel trick on you, in which it serves the site only when you visit.

The Deafening Lack of Feedback

If it sounds like I empathize, that’s because I sincerely do.  Years and years ago, when I started my first blog, I posted into the ether.  I had no idea if anyone read those early posts.  Of course, I was just having a good time putting my opinions out there and not trying to make a living, so I didn’t worry.  Nevertheless, I eventually felt frustrated.

The frustration arises from a lack of feedback.  You take some actions and then have no ability to see what effect they have.  Sure, you can see the post go live in your browser, but are you reaching anyone?  Has a single person read the post?  Or have thousands read the post and found it boring?  It’s like writing some code, but you’re required to hand it off to someone else to compile, run, and observe.  You feel blind.
