DaedTech

Stories about Software


Software Grows Too Quickly for Manual Review Only

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right, which can help you with automated code reviews.

How many development shops do you know that complain about having too much time on their hands?  Man, if only we had more to do.  Then we wouldn’t feel bored between completing the perfect design and shipping to production … said no software shop, ever.  Software proliferates far too quickly for that attitude ever to take root.

This happens in all sorts of ways.  Commonly, the business or the market exerts pressure to ship.  When you fall behind, your competitors step in.  Other times, you have the careers and reputations of managers, directors, and executives on the line.  They’ve promised something to someone and they rely on the team to deliver.  Or perhaps the software developers apply this drive and pressure themselves.  They get into a rhythm and want to deliver new features and capabilities at a frantic pace.

Whatever the exact mechanism, software tends to balloon outward at a breakneck pace.  And then quality scrambles to keep up.

Software Grows via Predictable Mechanisms

While the motivation for growth may remain nebulous, the mechanisms for that growth do not.  Let’s take a look at how a codebase accumulates change.  I’ll order these by pace, if you will.

  • Pure maintenance mode, in SDLC parlance.
  • Feature addition to existing products.
  • Major development initiatives going as planned.
  • Crunches (death marches).
  • Copy/paste programming.
  • Code generation.

Of course, you could offer variants on these themes, and they are not mutually exclusive.  Nevertheless, the idea remains.  Loosely speaking, you add code sparingly to legacy codebases in support mode.  And then the pace increases until you get so fast that you literally write programs to write your programs.

The Quality Conundrum

Now, think of this in another way.  As you go through the list above, consider what quality control measures tend to look like.  Specifically, they tend to vary inversely with the pace of growth.

Even in a legacy codebase, fixes tend to involve a good bit of testing, for fear of breaking things for customers in production.  We treat things in production carefully.  But during major or greenfield projects, we might let that slip a little, in the throes of productivity.  Don’t worry — we’ll totally do it later.

But during a death march?  Pff.  Forget it.  When you slog along like that, tons of defects in production qualify as a good problem to have.  Hey, you’re in production!

And it gets even worse with the last two items on my bulleted list.  I’ve observed that the sorts of shops and devs that value copy/paste programming don’t tend to worry a lot about verification and quality.  Does it compile?  Ship it.  And by the time you get to code generation, the problem becomes simply daunting.  You’ll assume that the tool knows what it’s doing and move on to other things.

As we go faster, we tend to spare fewer thoughts for quality.  Usually this happens because of time pressure.  So ironically, when software grows the fastest, we tend to check it the least.

Read More


A Look at Some Unit Test Framework Options for .NET

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their offerings, Prefix and Retrace.

If you enjoy the subject of human cognitive biases, you should check out the curse of knowledge.  When dealing with others, we tend to assume they know what we know.  And we do this when no justification for the assumption exists.

Do you fancy a more concrete example?  Take a new job and count how many people bombard you with company jargon and acronyms, knowing full well you just started a few hours ago.  This happens because these folks cannot imagine not knowing these things without expending considerable mental effort.

Why do I lead with this in a post about unit test frameworks?  Well, it seems entirely appropriate to me.  I earn my living as an IT management and strategy consultant, causing me to spend time at many companies helping them improve software development practice.  Because of this, I have occasion to see an awful lot of introductions to unit testing.  And these introductions usually subconsciously assume knowledge of unit testing.

“It’s easy!  Just pick a unit test runner and a coverage tool, and get those set up.  Oh, you’ll also probably want to pick a mocking framework, and here are some helpful NuGet packages.  Anyway, now just write a test.  We’ll start with a calculator class…”

Today, I will do my best to spare you that.  I have some practice with this, since I write a lot, publish courses, and train developers.  So let’s take a look at test frameworks.

What Are Unit Tests?

Thought you’d caught me there, didn’t you?  Don’t worry.  I won’t just assume you know these things.

Let’s start with unit testing in its most basic form, leaving all other subjects aside.  You want to focus on a piece of functionality in your code and test it in isolation.  For example, let’s say that we had the aforementioned Calculator class and that it contained an Add(int, int) method.  Let’s say that you want to write some code to test that method.

No magic there.  I just create a class called “CalculatorTester” and then write a method that instantiates and exercises Calculator.Add().  You could write this knowing nothing about unit testing practice at all.  And, if someone had told you to automate the testing of Calculator.Add(), you might have done this exact thing.
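To make that concrete, here’s roughly what such a hand-rolled test might look like, sketched in plain C# with no test framework at all (the Calculator implementation below is a hypothetical stand-in):

```csharp
using System;

// A hypothetical stand-in for the Calculator class under test.
public class Calculator
{
    public int Add(int x, int y) => x + y;
}

// A hand-rolled tester: no framework, no attributes, no test runner.
// It just instantiates Calculator, exercises Add(), and reports the outcome.
public class CalculatorTester
{
    public void TestAdd()
    {
        var calculator = new Calculator();
        int result = calculator.Add(2, 3);

        Console.WriteLine(result == 5 ? "TestAdd passed" : "TestAdd FAILED");
    }

    public static void Main()
    {
        new CalculatorTester().TestAdd();
    }
}
```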

Congratulations.  You have written a unit test.   I say this because it focuses on a method and tests it in isolation.

Read More


The Merits and Ethics of Learning on the Job in App Dev

Another Monday, another successful reader question Monday.  Today’s question is about learning on the job.  Let’s take a look.

This is more of an app dev shop question than a consulting question, but here goes.

Say you’re consulting for a client and the client has a specific need for some app dev work which they ask you to do. That work requires you to use a technology that’s either completely new to you, not quite in your line of specialty, or you haven’t used in several years, but you’re constrained in some way to use this technology and to do the project you would need to “ramp-up”.

The question is: do you bill the client for this “ramp-up” time or do you take the hit against your own personal time to come up to speed? Does it make a difference in the response if it’s a legacy technology that you may never use again (FORTRAN) vs. a “hot” language that it would benefit you to learn (Python)?

First of all, for everyone else, here’s what the reader is referring to about consulting vs. app dev.  I’ve written a good bit about how writing software for someone else isn’t “consulting” no matter what people might call it.  So the question concerns either app dev shops or app dev freelancers.


Learning on the Job: The Quick Answer

I’ll start by answering the question at the tactical level.  Should you bill for this ramp-up time?  Absolutely, as long as you can negotiate it and you do so in good faith.

The last time I agreed to app dev work, I dealt with something like this.  I’d worked with the client before, and they liked my work.  I had domain knowledge and we had a good rapport, so they engaged me for some work.  I told them that I had to learn an important technology as I went, and they were fine with this added cost.  They viewed it as worth it.

Contrast this with a different situation.  Imagine you’re a Ruby on Rails developer and you need some work.  So you answer an RFP for help with an ASP MVC site, knowing nothing about ASP MVC but claiming you’re competent.  That’s pretty much the opposite end of the spectrum, and obviously problematic.

So start with the requirement that you’ll be frank about anything you need to learn to execute on an engagement.  Who bears the cost of that learning then becomes a point of negotiation as part of the contract.  And that should make sense, since you’ll always have to learn something to help a client.  If you need the gig more than they need you, you might need to spend unpaid evenings learning a framework.  If they really need you and you’re on the fence, you can require that they include your onboarding as part of the deal.

Read More


The Importance of Performance on the Development Side

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look at their offering for monitoring your software in production.

In the software development world, we pay a ton of attention to the performance of our code in production.  And rightfully so.  A slow site can make you hemorrhage money in a variety of ways.

To combat this, the industry has brought some of its most impressive thinking to bear.  The entire DevOps movement has focused on bringing software to production more efficiently and then managing it more efficiently once it gets there.  Modern software development emphasizes this, staffs for it, and invests in it.

But looking beyond that, we leverage powerful tools.  Monitoring solutions let us get out in front of problems before users can discover them.  Alerting solutions let us configure those monitoring tools to notify us more effectively.  Entire organizations and industries have sprung up around creating seamless SaaS experiences for end users.

But in spite of all this, I’ve found that we have a curious blind spot as an industry.

A Tale of Misery

Not too long ago, I sat in a pub having a beer, waiting on my food to arrive.  With me sat a colleague who works for a custom app dev agency.  In this capacity, he visits clients and works on custom software solutions in their environments.

Over our beers, he described their environment and work.  They had some impressive development automation.  This included a continuous integration setup, gated builds, and push-button or automated deployments to various environments.  And once things were in production, they had impressive instrumentation for logging, tracking, and alerting about potential issues.

I told him that he was lucky.  He countered in a way that caught me a bit off guard.  “Yeah, it’s impressive, but actually working there as a developer is pretty rough.”

I asked him to clarify, so he did.  He told me about having to use slow, old development machines.  He talked about unit test suite runs taking forever locally and even longer in the build environments.  They had a lot of best practices in place, but actually getting anything done was really hard.  Even the source control server was flaky, sometimes kicking back attempted commits and creating issues.

This struck me as a fascinating contrast.

Read More


Should You Aim for 100 Percent Test Coverage?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, check out all of the different code metrics and rules that NDepend offers.

Test coverage serves as one of the great lightning rods in the world of software development.  First, people ask whether it makes for a good metric at all.  Then they ask, if you want to use it as a metric, should you go for 100 percent coverage?  If not, what percentage should you go for? Maybe 42 percent, since that’s the meaning of life?

I don’t mean to trivialize an important discussion.  But sometimes it strikes me that this one could use some trivializing.  People dig in and draw battle lines over it, and counterproductive arguments often ensue.  It’s strange how fixated people get on this.

I’ll provide my take on the matter later in the post.  But first, I’d like to offer a somewhat more philosophical look at the issue (hopefully without delving into overly abstract navel-gazing along the lines of “What even is a test, anyway, in the greater scheme of life?”).

What Does “Test Coverage” Measure?

First of all, let’s be very clear about what this metric measures.  Many in the debate — particularly those on the “less is more” side of it — quickly point out that test coverage does not measure the quality of the tests.  “You can have 100 percent coverage with completely worthless tests,” they’ll point out.  And they’ll be completely right.

To someone casually consuming this metric, the percentage can easily mislead.  After all, 100 percent coverage sounds an awful lot like 100 percent certainty.  If you hired me to do some work on your car and I told you that I’d done my work “with 100 percent coverage,” what would you assume?  I’m guessing you’d assume that I was 100 percent certain nothing would go wrong and that I invited you to be equally certain.  Critics of the total coverage school of thought point to this misunderstanding as a reason not to pursue that level of test coverage.  But personally, I just think it’s a reason to clarify definitions.
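To see why the critics have a point about test quality, consider this small illustrative sketch in plain C# (the DiscountCalculator class and its test are hypothetical, and no particular test framework or coverage tool is assumed):

```csharp
// A hypothetical class whose single method has two branches.
public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal price, bool isPreferredCustomer)
    {
        if (isPreferredCustomer)
            return price * 0.9m;
        return price;
    }
}

public class DiscountCalculatorTests
{
    // This "test" executes both branches of ApplyDiscount, so a coverage
    // tool will report 100 percent coverage for the method.  But it asserts
    // nothing, so it would pass even if the discount math were completely wrong.
    public void Exercise_ApplyDiscount()
    {
        var calculator = new DiscountCalculator();
        calculator.ApplyDiscount(100m, true);
        calculator.ApplyDiscount(100m, false);
    }
}
```

A coverage tool can only report that those lines executed during the test run; whether executing them proved anything depends entirely on the assertions you write.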

Read More