DaedTech

Stories about Software

How to Actually Reduce Software Defects

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  Have a look around while you’re there and see what some of the other authors have written.

As an IT management consultant, I find that the most frequent question I hear is some variant of “how can we get our defect count down?”  Developers may want this as a matter of professional pride, but it’s the managers and project managers who truly burn to improve on this metric.  The sentiment runs something like this: our software does thousands of undesirable things in production, and we’d like to get that down to hundreds.

Almost invariably, they’re looking for a percentage reduction, presumably because there is some sort of performance incentive tied to the defect count metric.  And so they want strategies for reducing defects by some percentage, in the same way that the president of the United States might challenge his cabinet to shave two points off the unemployment rate in the coming years.  The trouble, though, is that this attitude toward defects is actually part of the problem.

The Right Attitude toward Defects

The president sets a goal of reducing unemployment, but not of eliminating it. Why is that? Well, because having nobody in the country unemployed is simply impossible outside of a planned economy – people will quit and take time off between jobs or get laid off and have to spend time searching for new ones. Some unemployment is inevitable.

Management, particularly in traditional, ‘waterfall’ shops, tends to view defects in the same light: “We clearly can’t avoid defects, but if we worked really hard, we could reduce them by half.”  This attitude is a core part of the problem.

What I tell these clients is often met with initial skepticism: they should shoot for having no escaped defects (defects that make it to production, as opposed to ones caught by the team during testing).  In other words, don’t shoot for a 20% or 50% reduction – shoot for not having defects.

It’s not that shooting for 100% will stretch teams further than shooting for 20% or 50%. There’s no psychological gimmickry to it. Instead, it’s about ceasing to view defects as “just part of writing software.” Defects are not inevitable, and coming to view them as preventable mistakes rather than facts of life is important because it leads to a reaction of “oh, wow, a defect – that’s bad, let’s figure out how that happened and fix it” instead of a reaction of “yeah, defects, what are you gonna do?”

When teams realize and accept this, they turn an important corner on the road to defect reduction.

The Power of CQLinq for Developers

Editorial Note: I originally wrote this post for the NDepend blog. Check out the original here, at their site.  While you’re there, have a look around at some of the other posts and subscribe to the RSS feed if you’d like a weekly post about static analysis.  

I can still remember my reaction to Linq when I was first exposed to it.  And I mean my very first reaction.  You’d think, as a connoisseur of the programming profession, it would have been, “wow, groundbreaking!”  But, really, it was, “wait, what?  Why?!”  I couldn’t fathom why we’d want to merge SQL queries with application languages.

Up until that point, a little after .NET 3.5 shipped, I’d done most of my programming in PHP, C++ and Java (and, if I’m being totally honest, a good bit of VB6 and VBA that I could never seem to escape).  I was new to C#, and, at that time, it didn’t seem much different from Java.  And, in all of these languages, there was a nice, established pattern.  Application languages were where you wrote loops and business logic and such, and parameterized SQL strings were where you defined how you’d query the database.  I’d just gotten to the point where ORMs were second nature.  And now, here was something weird.

But, I would quickly realize, here was something powerful.

The object-oriented languages that I mentioned (and whatever PHP is) are imperative languages.  This means that you’re giving the compiler/interpreter a step-by-step series of instructions on how to do something.  “For an integer i, start at zero, increment by one, continue if less than 10, and for each integer…”   SQL, on the other hand, is a declarative language.  You describe what you want, and let something else (e.g. the RDBMS server) sort out the details.  “I want all of the customer records where the customer’s city is ‘Chicago’ and the customer is less than 40 years old — you figure out how to do that and just give me the results.”

And now, all of a sudden, an object-oriented language could be declarative.  I didn’t have to write loop boilerplate anymore!
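
To make that contrast concrete, here is a minimal sketch in C# of the same query written both ways; the Customer type and method names are hypothetical, invented purely for illustration.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string City { get; set; }
    public int Age { get; set; }
}

public static class CustomerQueries
{
    // Imperative: spell out the loop, the index, the condition, and the
    // accumulation, step by step.
    public static List<Customer> YoungChicagoansImperative(List<Customer> customers)
    {
        var results = new List<Customer>();
        for (int i = 0; i < customers.Count; i++)
        {
            if (customers[i].City == "Chicago" && customers[i].Age < 40)
            {
                results.Add(customers[i]);
            }
        }
        return results;
    }

    // Declarative: describe the result you want and let Linq sort out the details.
    public static List<Customer> YoungChicagoansDeclarative(List<Customer> customers)
    {
        return customers.Where(c => c.City == "Chicago" && c.Age < 40).ToList();
    }
}
```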

Static Analysis and The Other Kind of False Positives

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at the NDepend site.  If you’re a fan of static analysis tools, do yourself a favor and take a look at the NDepend offering while you’re there.

A common complaint and source of resistance to the adoption of static analysis is the idea of false positives.  And I can understand this.  It requires only one case of running a tool on your codebase and seeing 27,834 warnings to color your view on such things forever.

There are any number of ways to react to such a state of affairs, though there are two that I commonly see.  These are as follows.

  1. Sheepish, rueful acknowledgement: “yeah, we’re pretty hopeless…”
  2. Defensive, indignant resistance: “look, we’re not perfect, but any tool that says our code is this bad is nitpicking to an insane degree.”

In either case, the idea of false positives carries the day.  With the first reaction, the team assumes that the tool’s results are, by and large, too advanced to be of interest.  In the second case, the team assumes that the results are too fussy.  In both of these, and in the case of other varying reactions as well, the tool is providing more information than the team wants or can use at the moment.  “False positive” becomes less a matter of “the tool detected what it calls a violation but the tool is wrong” and more a matter of “the tool accurately detected a violation, but I don’t care right now.”  (Of course, the former can happen as well, but the latter seems more frequently to serve as a barrier to adoption, and it’s what I’m interested in discussing today.)

Is this a reason to skip the tool altogether?  Of course not.  And, when put that way, I doubt many people would argue that it is.  But that doesn’t stop people from hesitating or procrastinating when it comes to adoption.  After all, no one is thinking, “I want to add 27,834 things to the team’s to do list.”  Nor should they — that’s clearly not a good outcome.

With that in mind, let’s take a look at some common sources of false positives and the ways to address them.  How can you ratchet up the signal-to-noise ratio of a static analysis tool so that it is valuable, rather than daunting?
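
To give a flavor of the kind of lever I mean, here is a sketch of a CQLinq-style rule (CQLinq being NDepend’s code query language) scoped so that it only fires for substantial methods; the specific metrics and thresholds here are illustrative assumptions, not recommendations.

```csharp
// <Name>Complex methods, scoped to substantial code only</Name>
// Hypothetical rule: flag complexity, but only in methods big enough
// for the warning to be worth a developer's attention.
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 10   // illustrative threshold
   && m.NbLinesOfCode > 30          // skip trivial methods entirely
select new { m, m.CyclomaticComplexity, m.NbLinesOfCode }
```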

Rethinking Assert with Shouldly

I was doing a bit of work with TweetDeck open when I noticed this tweet.

I’ve been using Assert.IsTrue() and its friends for years, so you might think I would take offense.  But instead, this struck me as an interesting and provocative statement.  I scanned through the conversation this started and it got me to thinking.

Over the years, I’ve evolved my unit tests heavily in the name of readability.  I’ve come to favor mocking frameworks with fluent APIs and readable setup.  On a pointer from Steve Smith, I’ve adopted his philosophy and approach to naming unit test classes and tests.  The arrange and act sections of my tests have become highly readable and optimized for comprehension.

But then, there’s Assert.AreEqual.  Same as it ever was.
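
To see why that tweet resonated, consider a minimal sketch of the same assertion written both ways; the test subject is hypothetical, but ShouldBe() is Shouldly’s bread-and-butter assertion.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Shouldly;

[TestClass]
public class PriceCalculatorTests  // hypothetical subject, for illustration
{
    [TestMethod]
    public void Total_Is_Sum_Of_Line_Items()
    {
        var total = 19.99m + 5.01m;

        // Classic form: the (expected, actual) argument order is easy to
        // transpose, and a failure speaks in terms of bare values.
        Assert.AreEqual(25.00m, total);

        // Shouldly form: reads left to right like a sentence, and the
        // failure message echoes the source expression.
        total.ShouldBe(25.00m);
    }
}
```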

Logging for Continuous Integration

Editorial Note: I originally wrote this post for the LogEntries blog.  Check out the original here, at their site.  While you’re there, take a look at the product offering, which includes storage, aggregation, and sophisticated search of your log information.

If you look at the title of this post, you’re probably thinking to yourself, “huh, that’s never really come up.”  Of course, it’s possible that you’re not.  But, in my travels as a consultant helping dev teams with practice and gap analysis, I’ve never had anyone ask me, “what do you recommend in terms of a logging solution for continuous integration?”

But hey, this is an easily solved problem, right?  After all, continuous integration means Jenkins, and Jenkins has an application log.  Perhaps that’s why no one is asking about it!  Now, all that’s left is to sit back and bask in the glow of every compiler warning your application has ever generated since the dawn of time.

What Actually Is Continuous Integration?

Now, I know what you’re thinking.  TeamCity is another continuous integration tool, and it also has logs.  Or what about TFS or Bamboo?  Jenkins doesn’t have sole possession of the continuous integration mind share.  There are any number of products designed for this purpose.

And thus we arrive at a popular misconception.

Continuous integration is not Jenkins.  It’s not TeamCity.  It’s not TFS or Bamboo.  And it’s also not the non-empty set that results from choosing one of those tools.  Continuous integration is a practice, not a tool.  And it’s actually a simple practice at that.

If you go back to basics via Wikipedia, you’ll find this definition.

Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day.

Notice it does not say, “CI is where you hook your Github account up to Jenkins.”  There is no mention of any particular tool; it just describes the idea of developers’ source code never getting very far out of sync.  Cringe (appropriately), but you could just as easily achieve this by having developers collaborate using Notepad to edit source files housed on a shared Dropbox account.
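
To underline that the practice is tool-agnostic, here is what it boils down to for each developer, sketched with ordinary git commands and an assumed ‘master’ mainline branch (any version control tool would do just as well):

```bash
# One developer's integration loop, repeated several times a day:
git pull --rebase origin master   # fold the shared mainline into your working copy
# ... build and run the tests locally ...
git push origin master            # publish your working copy back to the mainline
```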
