DaedTech

Stories about Software


What Is Performance Testing? An Explanation for Business People

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their comprehensive solution for gaining insight into your application’s performance.

The world of enterprise IT neatly divides concerns between two camps: IT and the business.  Technical?  Then you belong in the IT camp.  Otherwise, you belong in the business camp.

Because of this division, an entire series of positions exists to help these groups communicate.  But since I don’t have any business analysts at my disposal to interview, I’ll just bridge the gap myself today.  From the perspective of technical folks, I’ll explain performance testing in language that matters to the business.  And, don’t worry.  I’ve spent enough years doing IT management consulting to have mostly overcome the curse of knowledge.  I can empathize with anyone who understands performance testing only in the vaguest of terms.

A Tale of Vague Woe

To prove it, let me conjure up a maddening hypothetical.  You’ve coordinated with the software organization for months.  This has included capturing the voice of the customer and working with architects and developers to create a vision for the product.  You’ve seen the project through conception, beta testing, and eventual production release.  And you’re pretty proud of the way this has gone.

But now you find yourself more than a little exasperated.  A few scant months after the production launch, weird and embarrassing things continue to go wrong.  It started out as a trickle and then grew into a disturbing, steady stream.  Some users report agonizingly slow experiences while others don’t complain.  Some report seeing weird error messages and screens, while others have a normal experience.  And, of course, when you try to corroborate these accounts, you don’t see them.

You take these reports to the software developers and their teams, but you can tell they don’t quite take them at face value.  “Hmmm, that shouldn’t happen,” they tell you.  Then they concede that maybe it could, but they shrug and say there’s not much they can do unless you help them reproduce the issue.  Besides, you know users, amirite?  Always reporting false issues because they have unrealistic expectations.

Sometimes you wonder if the developers don’t have the right of it.  But you know you’re not imagining the exasperated phone calls and negative social media interactions.  Worse, paying users are leaving, and fewer new ones sign up.  Whether perception or reality, that hits you in the pocketbook.
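This scenario is exactly what performance testing exists to prevent.  As a minimal illustration (Python; the URL and user count are hypothetical assumptions, not from the original post), even the crudest load test turns “it feels slow for some users” into reproducible numbers:

```python
# A minimal, hypothetical load-test sketch: hit one URL with many concurrent
# "users" and report response times and failures. The URL and user count are
# illustrative assumptions only.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com"  # hypothetical endpoint under test
CONCURRENT_USERS = 50

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start
    except OSError:
        return None  # treat timeouts and connection errors as failures

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(one_request, range(CONCURRENT_USERS)))
    timings = sorted(t for t in results if t is not None)
    failures = results.count(None)
    print(f"failures: {failures}/{CONCURRENT_USERS}")
    if timings:
        print(f"median: {timings[len(timings) // 2]:.2f}s, worst: {timings[-1]:.2f}s")
```

Run that against a staging environment before launch, and the “agonizingly slow for some users” reports stop being anecdotes you can’t corroborate.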



How to Evaluate Your Static Analysis Process

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download a trial of NDepend and take a look at its static analysis capabilities.

I often get inquiries from clients and prospects about setting up and operationalizing static analysis.  This makes sense.  After all, we live in a world short on time and with software developers in great demand.  These clients always seem to have more to do than bandwidth allows.  And static analysis effectively automates subtle but important considerations in software development.

Specifically, it automates peer review to a certain extent.  The static analyzer acts as a non-judging, mute reviewer of sorts.  It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment.  And, finally, it helps you out by acting as architect.  Team members can learn from the tool’s guidance.
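To make that concrete, here’s a minimal sketch, not from either original post, of the kind of thing the tool catches with no reviewer in the room.  I’m assuming Python and a typical linter such as pylint; the functions are invented for illustration.

```python
# Two defects a typical static analyzer flags automatically.

def append_item(item, bucket=[]):  # a linter like pylint flags this mutable default
    # The default list is created once and shared across calls, so "bucket"
    # silently accumulates items from every caller that omits the argument.
    bucket.append(item)
    return bucket

def total_price(prices):
    total = 0
    for price in prices:
        total += price
    count = len(prices)  # flagged as an unused variable: dead code a human might miss
    return total

if __name__ == "__main__":
    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2] -- surprising shared state
    print(total_price([2.5, 3.0]))  # 5.5
```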

So, as I’ve said, receiving setup inquiries doesn’t surprise me.  And I applaud these clients for pursuing this path of improvement.

What does surprise me, however, is how few organizations seem to ask another, related question.  They rarely ask for feedback about the efficacy of their currently implemented process.  Many organizations seem to consider static analysis implementation a checkbox kind of activity.  Have you done it?  Check.  Good.

So today, I’ll talk about checking in on an existing static analysis implementation.  How should you evaluate your static analysis process?



How Much Unit Testing Is Enough?

Editorial note: I originally wrote this post for the TechTown blog.  You can check out the original here, at their site.  While you’re there, have a look at their training and course offerings.

I work as an independent consultant, and I also happen to have created a book and some courses about unit testing.  So I field a lot of questions about the subject.  In fact, companies sometimes bring me in specifically to help their developers adopt the practice.  And those efforts usually fall into a predictable pattern.

Up until the moment that I arrive, most of the developers have spent zero time unit testing.  But nevertheless, they have all found something to occupy them full time at the office.  So they must now choose: do less of something else or start working overtime for the sake of unit testing.

This choice obviously inspires little enthusiasm.  Most developers I meet in my travels are conscientious folks.  They’ve been giving it their all, and so they interpret this new thing that takes them away from their duties as a hindrance.  And, in spite of signing a contract to bring me in, management harbors the same worry.  While they buy into the long-term benefit, they fret over the short-term cost.

All of this leads to a very predictable question.  How much unit testing is enough?

Clients invariably ask me this, and usually they ask it almost immediately.  They want to understand how to spend the minimum amount of time required to realize benefits, but not a second longer.  This way they can return to the pressing matters they’ve delayed for the sake of learning this new skill.
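For anyone who hasn’t seen one, here is a minimal sketch of a single unit test, assuming Python and the pytest framework; the apply_discount function is invented purely for illustration.

```python
# A minimal, hypothetical unit-test sketch. The production function and the
# tests are illustrative only; pytest discovers and runs any function named test_*.
import pytest

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_apply_discount_reduces_price():
    assert apply_discount(100, 25) == 75

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100, 150)
```

Running pytest from the command line executes both tests in well under a second, which is part of why the time-cost question turns out to be subtler than it first appears.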



Are You Aggressively Trying to Automate Code Review?

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at their automated analysis and documentation tooling.

Before I talk in detail about trying to automate code review, do a mental exercise.  Close your eyes and picture the epitome of a soul-crushing code review.

You probably sit in a stuffy conference room with several other people.  With your IDE open and laptop plugged into the projector via VGA cable, you begin.  Or, rather, they begin.  After all, your shop does code review more like thesis defense than collaboration.  So the other participants commence grilling you about your code as if it oozed your incompetence from every line.

This likely goes on for hours.  No nit remains unpicked, however trivial.  You’ve even taken to keeping a spreadsheet full of things to check ahead of code reviews so as not to make the same mistake twice.  That spreadsheet now has hundreds of lines.  And some of those lines directly contradict one another.

When the criticism-a-thon ends, you feel tired, depressed, and hungry.  But, looking on the bright side, you realize that this is the furthest you’ll ever be from the next code review.

It probably sounds like I speak from experience because I do.  I’ve seen this play out in software development shops and even written a blog post about it in the past.  But let’s look past the depressing human element of this and understand how it proves bad for business.
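Automating the mechanical checks is the way out of that conference room.  As a minimal sketch (not from the original post; I’m assuming Python tooling like pycodestyle and pylint, and “mypackage” is an invented placeholder for your codebase), a pre-commit hook can enforce every rule on that spreadsheet that a tool can check:

```python
#!/usr/bin/env python3
# A hypothetical git pre-commit hook: run automated checks so human reviewers
# never have to argue about them again. Assumes pycodestyle and pylint are
# installed; "mypackage" is a placeholder for your own code.
import subprocess
import sys

CHECKS = [
    ["pycodestyle", "--max-line-length=100", "mypackage"],  # formatting nits
    ["pylint", "--errors-only", "mypackage"],               # likely defects
]

def main() -> int:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"Commit blocked by automated review: {' '.join(cmd)}")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Dropped into .git/hooks/pre-commit, a script like this makes the contradictory spreadsheet obsolete: the rules live in one tool configuration, and humans save their review time for design.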



Addressing Malware Detection from the Outside In

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, have a look around at the different types of production monitoring that they offer.

I worked as a software engineer for almost the entire decade of the 2000s.  While I was earning a living this way, computers were making their way from CS student dorm rooms to Grandma’s den.  Like so many other programmers of the time, I thus acquired the role of unofficial tech support for computer-illiterate friends and family.  I do not miss those days, when malware detection became an involuntary hobby.

Back in 2005, everyone had computers with Windows XP or Windows 98.  And every computer with Windows XP or Windows 98 seemed to attract malware like flies to flypaper.  So I found myself sitting in front of CRT monitors next to dusty towers, figuring out why nothing worked.

I still kind of remember the playbook.  For instance, Lavasoft had a product called Ad-Aware.  I also seem to recall something called Malwarebytes.  I favored these because I could download them for free.  At least, I could download them for free, assuming the victim’s computer was even capable of that much.  With those tools in place, and with a heaping helping of googling from my own laptop, I would painstakingly scan, sweep, remove, tweak, and repeat.  Eventually, I won.  Usually.

It seems strange to think about now.  Ten or twelve years ago, consumers compared brands of antivirus software the way we compare music apps on our phones.  Malware detection dominated our computing consciousness, even for casual users.  But today?  I can’t remember the last time I ran an antivirus scan on my laptops.  I suspect you can’t either.  So what happened to all of this malware?  Did it simply disappear?

The Silencing of Malware

Well, no.  Malware didn’t disappear.  The criminals and spammers of the world didn’t just one day decide to do something better with their lives.  In fact, you might argue that they became more effective.

Most of the pieces of malware from my younger days had lots of bark and little bite.  They’d install themselves on your computer and hijack your browser with obnoxious graphics or spew error messages until your machine crashed.  Some came from would-be vandals, while others tried unsuccessfully to do things sneakily.

But causing some computer neophyte to say “this doesn’t seem right” and call up a young me to fix things — well, it hardly constitutes successful sneaking.  It always seemed, in those days, that malware authors sought mainly to annoy.  And malware detection and removal sought to fix the inconvenience.

In more recent years, however, the consumer annoyance factor has mostly disappeared.  Why?  Because there’s no profit in it.  Today’s malware instead aims to help its authors and users make money.  It does this by quietly gathering data, sending out spam, gaming search engines, and stealing information.  And it does all of this under the radar.
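That shift is what makes detection from the outside in sensible.  As a minimal, hypothetical sketch (Python; the URL and patterns are illustrative assumptions, not from the original post), you can monitor a site the way a visitor sees it and flag the signatures that quiet, profit-driven malware tends to inject:

```python
# A hypothetical outside-in scan: fetch a page exactly as a visitor would and
# flag content that quiet malware tends to inject. The URL and the pattern
# list are illustrative assumptions only.
import re
import urllib.request

SUSPICIOUS_PATTERNS = [
    r"<iframe[^>]*(hidden|display:\s*none)",  # hidden iframes pulling third-party code
    r"(viagra|casino|payday\s+loan)",         # injected spam keywords
]

def scan(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, html, re.IGNORECASE)]

if __name__ == "__main__":
    hits = scan("https://example.com")  # hypothetical site being monitored
    if hits:
        print("Possible compromise; matched patterns:", hits)
    else:
        print("No known-bad patterns found.")
```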
