DaedTech

Stories about Software

Fundamentals of Web Application Performance Testing

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their offering that can help you with your own performance testing.

Software development, as a profession, has evolved in fits and starts over the years.  When I think back a couple of decades, I find myself a little amazed.  During the infancy of the web, hand-coding PHP (or Perl) live on a production machine seemed perfectly fine.

At first blush, that might just seem like sloppiness.  But don’t forget that stakes were much lower at the time.  Messing up a site that displayed song lyrics for a few minutes didn’t matter very much.  Web developers of the time had much less incentive to install pre-production verification processes.  Just make the changes and see if anything breaks.  Anything you don’t catch, your users will.

The Evolution of Web Application Testing

Of course, that attitude couldn’t survive much beyond the early days of dynamic web content.  As soon as e-commerce gained steam in the web development world, the stakes went up.  Amateurs walked the tightrope of production edits while professional shops started to create and test in development or sandbox environments.

As I said initially, this didn’t happen in some kind of uniform move.  Instead, it happened in fits and starts.  Some lagged behind the curve, continuing to rely on their users for testing.  Others moved testing into sandbox environments and pushed the envelope besides.  They began to automate.

Web development then took another step forward as automation worked its way into the testing strategy.  Sophisticated shops had their QA environments as a check on production releases.  But their developers also began to build automated test suites.  They then used these to guard against regressions and to ensure proper application behavior.

Eventually, testing matured to a point where it spread out beyond straightforward unit test suites and record-playback-style integration tests.  Organizations got to know the so-called test pyramid.  They built increasingly sophisticated, nuanced test suites.

Web Application Testing Today

Building upon all of this backstory, we’ve seen the rise of the DevOps movement in recent years.  This movement emphasizes automating the entire delivery pipeline, from the code being written to the code functioning in production.  So the stakes for automated testing are higher than ever.  The only way to automate the whole thing is to have bulletproof verification.

This new dynamic shines a light on an oft-ignored element of the testing strategy.  I’m talking specifically about performance testing for your web application.  Automated unit and acceptance testing has long since become a de facto standard.  But now automated performance testing is getting to that point.

Think about it.  We got burned by hand-editing code on the production server.  So we set up sandboxes and tested manually.  Our applications grew too complex for manual testing to handle.  So we built test suites and automated these checks.  We needed production rollouts more frequently.  So we automated the deployment process.  Now, we push code efficiently through build, test, and deployment.  But we don’t know how it will behave in the wild.

Web application performance testing fixes that.  If you don’t yet have such a strategy, you need one.  So let’s take a look at the fundamentals for adding this to your testing approach.  And I’ll keep this general enough to apply to your tech stack, whatever it may be.
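
To make that concrete, here's a minimal sketch of what an automated performance check can look like when it runs as one more gate in your pipeline.  It's written in Python against the widely used requests library, just as one illustration; the URL, sample count, and threshold are hypothetical placeholders rather than recommendations for your stack.

    # Minimal sketch of an automated performance smoke test.
    # Assumes the "requests" package is installed; the URL and the
    # threshold below are hypothetical placeholders for your own values.
    import statistics
    import time

    import requests

    TARGET_URL = "https://example.com/api/health"  # hypothetical endpoint
    SAMPLES = 20                                   # number of timed requests
    THRESHOLD_MS = 500                             # worst acceptable response time

    def measure_once(url):
        """Time a single GET request and return the elapsed milliseconds."""
        start = time.perf_counter()
        response = requests.get(url, timeout=10)
        elapsed_ms = (time.perf_counter() - start) * 1000
        response.raise_for_status()
        return elapsed_ms

    def run_check():
        timings = [measure_once(TARGET_URL) for _ in range(SAMPLES)]
        print(f"median={statistics.median(timings):.0f}ms worst={max(timings):.0f}ms")
        # Fail the build if any response exceeded the agreed threshold.
        if max(timings) > THRESHOLD_MS:
            raise SystemExit(1)

    if __name__ == "__main__":
        run_check()

A check like this rides along with the unit and acceptance suites you already automate, which is exactly the spirit of treating performance as a first-class part of the pipeline.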

Read More

Software Grows Too Quickly for Manual Review Only

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right, which can help you with automated code reviews.

How many development shops do you know that complain about having too much time on their hands?  Man, if only we had more to do.  Then we wouldn’t feel bored between completing the perfect design and shipping to production … said no software shop, ever.  Software proliferates far too quickly for that attitude ever to take root.

This happens in all sorts of ways.  Commonly, the business or the market exerts pressure to ship.  When you fall behind, your competitors step in.  Other times, you have the careers and reputations of managers, directors, and executives on the line.  They’ve promised something to someone and they rely on the team to deliver.  Or perhaps the software developers apply this drive and pressure themselves.  They get into a rhythm and want to deliver new features and capabilities at a frantic pace.

Whatever the exact mechanism, software tends to balloon outward at a breakneck pace.  And then quality scrambles to keep up.

Software Grows via Predictable Mechanisms

While the motivation for growth may remain nebulous, the mechanisms for that growth do not.  Let’s take a look at how a codebase accumulates change.  I’ll order these by pace, if you will.

  • Pure maintenance mode, in SDLC parlance.
  • Feature addition to existing products.
  • Major development initiatives going as planned.
  • Crunches (death marches).
  • Copy/paste programming.
  • Code generation.

Of course, you could offer variants on these themes, and they aren’t mutually exclusive.  But nevertheless, the idea remains.  Loosely speaking, you add code sparingly to legacy codebases in support mode.  And then the pace increases until you get so fast that you literally write programs to write your programs.

The Quality Conundrum

Now, think of this in another way.  As you go through the list above, consider what quality control measures tend to look like.  Specifically, they tend to vary inversely with the speed of growth.

Even in a legacy codebase, fixes tend to involve a good bit of testing for fear of breaking production customers.  We treat things in production carefully.  But during major or greenfield projects, we might let that slip a little, in the throes of productivity.  Don’t worry — we’ll totally do it later.

But during a death march?  Pff.  Forget it.  When you slog along like that, tons of defects in production qualifies as a good problem to have.  Hey, you’re in production!

And it gets even worse with the last two items on my bulleted list.  I’ve observed that the sorts of shops and devs that value copy/paste programming don’t tend to worry a lot about verification and quality.  Does it compile?  Ship it.  And by the time you get to code generation, the problem becomes simply daunting.  You’ll assume that the tool knows what it’s doing and move on to other things.

As we go faster, we tend to spare fewer thoughts for quality.  Usually this happens because of time pressure.  So ironically, when software grows the fastest, we tend to check it the least.

Read More

The Importance of Performance on the Development Side

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look at their offering for monitoring your software in production.

In the software development world, we pay a ton of attention to the performance of our code in production.  And rightfully so.  A slow site can make you hemorrhage money in a variety of ways.

To combat this, the industry has brought some of its most impressive thinking to bear.  The entire DevOps movement focuses on bringing software to production more efficiently and then managing it more efficiently once it gets there.  Modern software development emphasizes this, staffs for it, and invests in it.

But looking beyond that, we leverage powerful tools.  Monitoring solutions let us get out in front of problems before users can discover them.  Alerting solutions make sure the right people hear about those problems in time to act.  Entire organizations and industries have sprung up around creating seamless SaaS experiences for end users.

But in spite of all this, I’ve found that we have a curious blind spot as an industry.

A Tale of Misery

Not too long ago, I sat in a pub having a beer, waiting on my food to arrive.  With me sat a colleague who works for a custom app dev agency.  In this capacity, he visits clients and works on custom software solutions in their environments.

Over our beers, he described their environment and work.  They had some impressive development automation.  This included a continuous integration setup, gated builds, and push-button or automated deployments to various environments.  And once things were in production, they had impressive instrumentation for logging, tracking, and alerting about potential issues.

I told him that he was lucky.  He countered in a way that caught me a bit off guard.  “Yeah, it’s impressive, but actually working there as a developer is pretty rough.”

I asked him to clarify, so he did.  He told me about having to use slow, old development machines.  He talked about unit test suite runs taking forever locally and even longer in the build environments.  They had a lot of best practices in place, but actually getting anything done was really hard.  Even the source control server was flaky, sometimes kicking back attempted commits and creating issues.

This struck me as a fascinating contrast.

Read More

Should You Aim for 100 Percent Test Coverage?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, check out all of the different code metrics and rules that NDepend offers.

Test coverage serves as one of the great lightning rods in the world of software development.  First, people ask whether it makes for a good metric at all.  Then they ask, if you want to use it as a metric, should you go for 100 percent coverage?  If not, what percentage should you go for? Maybe 42 percent, since that’s the meaning of life?

I don’t mean to trivialize an important discussion.  But sometimes it strikes me that this one could use some trivializing.  People dig in and draw battle lines over it, and counterproductive arguments often ensue.  It’s strange how fixated people get on this.

I’ll provide my take on the matter here, after a while.  But first, I’d like to offer a somewhat more philosophical look at the issue (hopefully without delving into overly abstract navel-gazing along the lines of “What even is a test, anyway, in the greater scheme of life?”).

What Does “Test Coverage” Measure?

First of all, let’s be very clear about what this metric measures.  Many in the debate — particularly those on the “less is more” side of it — quickly point out that test coverage does not measure the quality of the tests.  “You can have 100 percent coverage with completely worthless tests,” they’ll point out.  And they’ll be completely right.

To someone casually consuming this metric, the percentage can easily mislead.  After all, 100 percent coverage sounds an awful lot like 100 percent certainty.  If you hired me to do some work on your car and I told you that I’d done my work “with 100 percent coverage,” what would you assume?  I’m guessing you’d assume that I was 100 percent certain nothing would go wrong and that I invited you to be equally certain.  Critics of the total coverage school of thought point to this misunderstanding as a reason not to pursue that level of test coverage.  But personally, I just think it’s a reason to clarify definitions.
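
To make the point concrete, consider a hedged sketch in Python with entirely hypothetical names.  The first test below executes every line of the function, so a coverage tool happily reports 100 percent, yet it would still pass if the function returned garbage.

    # Hypothetical production code under test.
    def apply_discount(price, percent):
        """Return the price reduced by the given percentage."""
        discount = price * (percent / 100)
        return price - discount

    # This "test" runs every line of apply_discount, so coverage tools
    # report the function as 100 percent covered. But it asserts nothing,
    # so it passes no matter what the function actually returns.
    def test_apply_discount_runs():
        apply_discount(100, 25)

    # A meaningful test pins down the behavior we actually care about.
    def test_apply_discount_returns_reduced_price():
        assert apply_discount(100, 25) == 75

Coverage tells you that the second kind of test could exist for a given line; it can't tell you whether it does.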

Read More

The ROI for Security Training

Editorial note: I originally wrote this post for the ASPE blog.  You can check out the original here, at their site.  While you’re there, check out their catalog of online and in-person training courses.

When it comes to IT’s relationship with “the business,” the two tend to experience a healthy tension over budget.  At the risk of generalizing, IT tends to chase promising technologies, and the business tends to rein that in.  And so it should go, I think.

The IT industry moves quickly and demands constant innovation.  For IT pros to enjoy success, they must keep up, making sure to constantly understand a shifting landscape.  And they also operate under a constant directive to improve efficiency, which, almost by definition, requires availing themselves of tools.  They may write these tools or they might purchase them, but they need them either way.  In a sense, you can think of IT as more investment-thirsty than most facets of business.

The business’s leadership then assumes the responsibility of tempering this innovation push.  This isn’t to say that the business stifles innovation.  Rather, it aims to discern between flights of fancy and responsible investments in tech.  As a software developer at heart, I understand the impulse to throw time and money at a cool technology first and figure out whether that made sense second.  The business, on the other hand, considers the latter sensibility first, and rightfully so.

A Tale of IT and the Business

Perhaps a story will serve as a tangible example to drive home the point.  As I mentioned, my career background involved software development first.  But eventually, I worked my way into leadership positions of increasing authority, ending up in a CIO role, running an IT department.

One day while serving in that capacity, the guy in charge of IT support came to me and suggested we switch data centers.  I made a snap judgement that we should probably do as he suggested, but doing so meant changing the budget enough that it required a conversation with the CFO and other members of the leadership team.

Anticipating their questions and likely pushback, I asked the IT support guy to put together a business case for making the switch.  “Explain it in terms of costs and benefits such that a non-technical person could understand,” I advised.

This proved surprisingly difficult for him.  He put together documentation talking about the relative rates of power failures, circuit redundancy, and other comparative data center statistics.  His argument in essence boiled down to one data center having better specs than the other, plus vague proclamations about best practices.

I asked him to rework this argument, suggesting he articulate the business case using sort of a mad lib: “If we don’t make this change, we have a ______% chance of experiencing problem _______, which would cost $_______.”

This proved much more fruitful. We made the case to the CFO and then made the switch.
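
For what it's worth, the arithmetic behind that mad lib is just an expected-cost comparison.  Here's a tiny sketch with purely hypothetical numbers; none of these figures come from the actual data center decision, they just show the shape of the calculation.

    # Hypothetical figures for illustration only.
    outage_probability_per_year = 0.10   # 10% chance of a serious outage per year
    cost_of_outage = 200_000             # estimated cost if that outage happens
    cost_of_switching = 15_000           # one-time cost of changing data centers

    expected_annual_loss = outage_probability_per_year * cost_of_outage  # 20,000

    # If the expected annual loss clearly exceeds the cost of the change,
    # the business case more or less writes itself.
    if expected_annual_loss > cost_of_switching:
        print(f"Expected loss ${expected_annual_loss:,.0f}/yr vs. ${cost_of_switching:,.0f} to switch")

Numbers like these give a CFO something to weigh, which is precisely what the spec-sheet comparison failed to do.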

Read More