DaedTech

Stories about Software


In Defense of Using Your Users as Testers

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download a trial of NDepend and take it for a spin; you can try it for free.

In most shops of any size, you’ll find a person who’s just a little too cynical.  I’m a little cynical myself, and we programmers tend to skew that way.  But this guy takes it one step further, often disparaging the company in ways that you think must be career-limiting.  And they probably are, but that’s his problem.

Think hard, and some man or woman you’ve worked with will come to mind.  Picture the person.  Let’s call him Cynical Chad. Now, imagine Chad saying, “Testing? That’s what our users are for!”  You’ve definitely heard someone say this at least once in your career.

This is an oh-so-clever way to imply that the company serially skimps on quality.  Maybe they’re always running behind a too-ambitious schedule.  Or perhaps they don’t like to spend the money on testing.  I’m sure Chad would be happy to regale you with tales of project manager and QA incompetence.  He’ll probably tell you about your own incompetence too, if you get a couple of beers in him.

But behind Chad’s casual maligning of your company lies a real phenomenon.  With their backs against the wall, companies will toss things into production, hope for the best, and rely on users to find defects.  If this didn’t happen with some regularity in the industry, it wouldn’t be fodder for Chad’s predictable jokes and complaints.

The Height of Unprofessionalism

Let’s now forget Chad.  He’s probably off somewhere telling everyone how clueless the VPs are, anyway.

Most of the groups that you’ll work with as a software pro would recoil in horror at a deliberate strategy of using your users as testers.  They work for months or years implementing the initial release and then subsequent features.  The company spends millions on their salaries and on the software.  So to toss it to the users and say “you find our mistakes” marks the height of unprofessionalism.  It’s sloppy.

Your pride and your organization’s professional reputation call for something else.  You build the software carefully, testing as you go.  You put it through its paces, not just with unit and acceptance tests, but with a whole suite of smoke tests, load tests, stress tests, and endurance tests.  QA does exploratory testing.  And then, with all of that complete, you test it all again.
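To make that testing-as-you-go idea a bit more concrete, here is a minimal sketch of the kind of automated check that belongs in such a suite: a unit test written in Python with the standard unittest module.  The checkout_total function is purely hypothetical, invented here for illustration; the point is only that the machine, rather than your users, catches the regression.

import unittest

# Hypothetical function under test; it stands in for any small unit of business logic.
def checkout_total(prices, tax_rate):
    """Return the total cost of a cart, including tax, rounded to cents."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

class CheckoutTotalTests(unittest.TestCase):
    def test_empty_cart_costs_nothing(self):
        self.assertEqual(checkout_total([], 0.08), 0.0)

    def test_tax_is_applied_to_the_subtotal(self):
        self.assertEqual(checkout_total([10.00, 5.00], 0.10), 16.50)

if __name__ == "__main__":
    unittest.main()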

Only after all of this do you release it to the wild, hoping that defects will be rare.  The users receive a polished product of which you can be proud — not a rough draft to help you sort through.

Users as Testers Reconsidered

But before we simply accept that as the right answer and move on, let’s revisit the nature of these groups.  As I mentioned, the company spends millions of dollars building this software.  This involves hiring a team of experienced and proud professionals, among other things.  Significant time, money, and company stake go into this effort.

If you earn a living as a salaried software developer, your career will involve moving from one group like this to another.   In each of these situations, anything short of shipping a polished product smacks of failure.  And in each of these situations, you’ll encounter a Chad, accusing the company of just such a failure.

But what about other situations?  Should enlisting users as testers always mean a failure of due diligence?  Well, no, I would argue.  Sometimes it’s a perfectly sound business or life decision.

Read More


Deploying Guerrilla Tactics to Combat Stupid Tech Interviews

I’ve realized something about my situation.  I work for myself, building businesses and still, occasionally, consulting.  But of course that’s not news to me.  Nor is the fact that I’ve moved out to a quiet, remote place where I wear T-shirts exclusively, fish a lot, work when I feel like it from a room in my house, and often cook dinners over a fire in my backyard.  The realization came from marinating in that lifestyle for a while and then noticing that I have absolutely no reason to pull any punches with my opinions.  No affiliations, no politics, no optics to manage.  So why not have some fun expressing those opinions, provocative or not, as DaedTech posts?

Today, I’d like to take on the subject of tech interviews.  Of course, talking about the deeply flawed hiring process isn’t new for this blog.  But I’m going to take it a step further by suggesting how we, as individuals, can try to fight back against Big Tech Interview.

The seed for this came from an idle internet clicking sequence that brought me to a blog.  The company to which the blog belongs, Byte by Byte, offers the motto, “your one stop shop for acing your coding interview.”  Below that, it says, “master the coding interview game” (emphasis mine).  It struck me then.  Yes, of course.  It really, truly is a game, and a stupid one at that.  But let me come back to that cottage industry, a sort of Princeton Review for tech companies, later.

The History of the Job Interview

For this history, I’ll offer an excerpt from my book, Developer Hegemony, describing the history of the job interview in general.

In 1921, tired of hiring college graduates that didn’t know as much as he did, Thomas Edison made up a giant trivia questionnaire to administer to inbound applicants. According to Mental Floss, questions included “Who invented logarithms?” and “Why is cast iron called Pig Iron?” If you look at the sorts of questions that modern day tech companies seem to think they’re cute for asking, courtesy of cio.com, they include such profundities as “Why is the Earth round?” and “How much do you charge to wash every window in Seattle?” If you mixed Edison’s and tech companies’ questions together, you’d be hard pressed to tell the difference.

To summarize, almost 100 years ago, an aging, eccentric, and incredibly brilliant inventor decided one day that he didn’t like hiring kids that weren’t his equals in knowledge. He devised a scheme off the cuff to indulge his preference and we’re still doing that exact thing about a century later. But was it at least effective in Edison’s day? Evidently not. According to the Albert Einstein archives, Albert Einstein would not have made the cut. So the biggest, trendiest, most forward thinking tech companies are using a scheme that was dreamed up on a whim and was dead on arrival in terms of effectiveness.

But surely it’s evolved somehow. Right? Well, no, at least not in any meaningful way. In this piece from Business Insider about the “evolution” of the job interview, we can see that what’s actually changed is the media for asking dumb trivia questions. In Edison’s day, interviewers had to get cute face to face. Now they can do it over the phone, through a computer screen or even via a mobile app. Who knows what the future will hold for the job interview; they may be able to beam the stupid directly into your cerebral cortex!

Google Looks Critically at Tech Interviews

In the book, I cover a lot more ground than I can or will here.  I lay out a case for how uniquely pernicious this interview process is for tech.  It artificially depresses software developers’ wages and manufactures job scarcity in a market where demand for our labor is absolutely incredible.  But let’s seize on a different point for this particular post.

I have specific styles of modern tech interviews in my sights as worse than others.  Specifically, the whiteboard interview, the trivia/brain-teaser interview, and the “Knuth Fanatic,” algorithm-obsessed interview.  These serve mainly to make the interviewer feel smart, rather than to reveal anything about candidates.  But don’t take it from me.  Laszlo Bock, former head of Google HR, said this:

On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

And also this:

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess.

Read More


Pair Programming Benefits: The Business Rationale

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their Retrace product that consolidates all of your production monitoring needs into one tool.

During the course of my work as a consultant, I wind up working with many companies adopting agile practices, most commonly following Scrum.  Some of these practices they embrace easily, such as continuous integration.  Others cause some consternation.  But perhaps no practice furrows more brows in management than pair programming.  Whatever pair programming benefits they can imagine, they always harbor a predictable objection.

Why would I pay two people to do one job?

Of course, they may not state it quite this bluntly (though many do).  They may talk more generally in terms of waste and inefficiency.  Or perhaps they offer tepid objections related to logistical concerns.  Doesn’t each requirement need one and only one owner?  But in almost all cases, it amounts to the same essential source of discomfort.

I believe this has its roots in early management theories, such as scientific management.  These gave rise to the notion of workplaces as complex systems, wherein managers deployed workers as resources intended to perform tasks repetitively and efficiently.  Classic management theory wants individual workers at full utilization.  Give them a task, have them specialize in it, and let them realize efficiency through that specialty.

Knowledge Work as a Wrinkle

Historically, this made sense.  And it made particular sense for manufacturing operations with a global focus.  These organizations took advantage of hyper-specialty to realize economies of scale, which they parlayed into a competitive advantage.

But fast forward to 2017 and think of workers writing software instead of assembling cars.  Software developers do something called knowledge work, which has a much different efficiency profile than manual labor.  While you wouldn’t reasonably pay two people to pair up operating one shovel to dig a ditch, you might pay them to pair up and solve a mental puzzle.

So while the atavistic aversion to pairing makes sense given our history, we should move past that in modern software development.

To convince reluctant managers to at least hear me out, I ask them to engage in a thought exercise.  Do they hire software developers based on how many words per minute they can type?  What about how many lines of code per hour they can crank out?  Neither of these things?

These questions have obvious answers.  After I hear those answers, I ask them to concede that software development involves more thinking than typing.  Once they concede that point, the entrenched idea of attacking a problem with two people as wasteful becomes a little less entrenched.  And that’s a start.

Read More


Why Production Monitoring Can Come Too Late

Editorial Note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look around at how their offering can help you hunt down issues from development to production.

I’ve spent a number of years, now, writing software.  At the risk of dating myself, I worked on software in the early 2000s.  Back then, you couldn’t take quite as much for granted.  For example, while organizations considered source control a good practice, forgoing it wouldn’t have constituted lunacy the way it does today.

As a result of the difference in standards, my life shipping software looked different back then.  Only avant-garde organizations adopted agile methodologies, so software releases happened on the order of months or years.  We thus reasoned about the life of software in discrete phases.  But I’m not talking about the regimented phases of the so-called “waterfall” methodology.  Rather, I generalize it to these phases: build, prep, run.

During build, you mainly solved the problem of cranking through the requirements as quickly as possible.  Next up, during prep, you took this gigantic sprawl of code that only worked on dev machines, and started to package it into some kind of deployable product.  This might have meant early web servers or even CDs at the time.  And, finally, came run.  During run phase, you’d maintain vigilance, waiting for customer issues to come streaming in.

Bear in mind that we would, of course, work to minimize bugs and issues during all of these phases.  But at that time with most organizations, having issues during the “run phase” constituted a good problem to have.  After all, it meant you had reached the run phase.  A shocking amount of software never made it that far.

Monitoring and Software Maturity

We’ve come a long way.  As I alluded to earlier, you’d get some pretty incredulous looks these days for not using source control.  And you would likewise receive incredulous looks for a release cycle spanning years, divided into completely disjoint phases.  Relatively few shops view their applications’ production behavior as a hypothetical problem for a far-off date anymore.

We’ve arrived at this point via some gradual, hard-won victories over the years.  These have addressed the phases I mentioned and merged them together.  Organizations have increasingly tightened the feedback loop with the adoption of agile methodologies.  Alongside that, vastly improved build and deployment tooling has transformed “the build” from “that thing we do for weeks at the end” to “that thing that happens with every commit.”  And, of course, we’ve gotten much, much better at supporting software in production.

Back in the days of shrink-wrap software and shipping CDs, users reported problems via phone call.  For a solution, they developed workarounds and waited for a patch CD in the mail.  These days, always-connected devices allow for patches with arbitrary quickness.  And we have software that gets out in front of production issues, often finding them even before users do.

Specifically, we now have sophisticated production monitoring software.  In some cases, this means simply watching for outages and supplying alerts.  But we also have sophisticated application performance monitoring (APM) capabilities.  As I said, we’ve come a long way.
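As a deliberately naive illustration of the “watching for outages and supplying alerts” end of that spectrum, here is a sketch in Python using only the standard library.  The health-check URL and polling interval are hypothetical, and a real APM product does vastly more than flag an up/down signal, but the shape of the loop is the same.

import time
import urllib.error
import urllib.request

HEALTH_URL = "https://example.com/health"   # hypothetical endpoint
CHECK_INTERVAL_SECONDS = 60

def endpoint_is_healthy(url):
    """Return True if the endpoint answers with HTTP 200 within five seconds."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

def monitor(url):
    """Poll the endpoint forever, emitting an alert whenever it looks down."""
    while True:
        if not endpoint_is_healthy(url):
            print(f"ALERT: {url} did not return HTTP 200")
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor(HEALTH_URL)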

Read More


Transitioning from Manual to Automated Code Review

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right.

I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  “There is no substitute for peer review.”

Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

Today I want to talk about how one could replace manual code review with automated code review only, should the need arise.

Why Would The Need for This Arise?

You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise in particular probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, “we’ve decided we don’t value the activity.”

First and most egregiously, a team’s manager might come along with an eye toward cost savings.  “I need you to spend less time reading code and more time writing it!”  In that case, you’ll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn’t micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

But let’s consider less unfortunate situations.  Perhaps you currently work on a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or perhaps you work on a larger team, but everyone gradually becomes so busy and fragmented in responsibility that no one has time for much manual peer review.

In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.
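For a flavor of what “an automated version” can mean in practice, here is a toy rule sketched in Python with the standard library’s ast module: flag any function that grows past an arbitrary length limit.  This is not how CodeIt.Right or any particular product works internally; it is just a self-contained example of the kind of check a machine can run on every commit without anyone scheduling a review meeting.

import ast
import sys

MAX_FUNCTION_LINES = 40  # arbitrary threshold, chosen purely for illustration

def long_functions(path):
    """Yield (name, line count) for each function in the file that exceeds the limit."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                yield node.name, length

if __name__ == "__main__":
    violations = False
    for path in sys.argv[1:]:
        for name, length in long_functions(path):
            violations = True
            print(f"{path}: '{name}' is {length} lines (limit {MAX_FUNCTION_LINES})")
    sys.exit(1 if violations else 0)

Wire a script like this into a pre-commit hook or a CI step and it nags tirelessly, which is precisely the property you want when no human reviewer has the time.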

Read More