DaedTech

Stories about Software

The Different Pair Programming Styles

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, check out their products that help you wrangle production issues in a hurry.

The world of professional programming produces some pretty intense debates.  For example, take a look at discussions about whether and how to comment code.  We have a hard time settling such debates because studying professional programming scientifically is hard.  We can’t really ask major companies to build the same software twice, using one control group and one experimental group.  So we muddle through with lots of anecdotes and opinions and relatively scant empirical data.  Because of this conundrum, I want to talk today about pair programming styles rather than taking a stance on whether you should pair program or not.

I’ve talked previously about the benefits of pair programming from the business’s perspective.  But I concluded that post the same way that I’m introducing this one.  You can realize benefits, but you have to evaluate whether it makes sense for you or not.  To make a good evaluation, you should understand the different pair programming styles and how they work.

That’s right.  Pair programming involves more than just throwing two people together and telling them to go nuts.  Over the years, practitioners have developed techniques to employ in different situations.  Through practice and experimentation, they have improved upon and refined these techniques.

The Effect of Proficiency on Pair Programming Styles

Before looking at the actual protocols, let’s take a brief detour through the idea of varied developer skill levels.  Although we have a seemingly unique penchant for expressing our skill granularly, I’ll offer just two developer skill levels: novice and expert.  I know, I know.  But those two will keep complexity to a minimum and serve well for explaining the different pairing models.

With our two skill levels in mind, consider the three possible pairing combinations:

  • Expert-Expert
  • Expert-Novice
  • Novice-Novice

Now when I talk about expertise here, bear in mind that this accounts for context and not just general industry experience.  Tech stack, codebase familiarity, and even domain knowledge matter here.  I have two CS degrees and years of experience in several OOP languages.  But if I onboarded with your GoLang team tomorrow, you could put me safely in the novice camp until I got my bearings.

Each of these pairing models has its advantages and disadvantages.  Sometimes, however, fate may force your hand, depending on who is available.  Understanding the different pairing models will help you be effective when it does.  It also bears mentioning that novice-novice pairings offer a great deal of learning for both novices, but with risk.  Therefore, the suitability of such a pairing depends more on your appetite for risk than on the pairing model itself.

In Defense of Using Your Users as Testers

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download a trial of NDepend and take it for a spin; you can try it for free.

In most shops of any size, you’ll find a person who’s just a little too cynical.  I’m a little cynical myself, and we programmers tend to skew that way.  But this guy takes it one step further, often disparaging the company in ways that you’d think must be career-limiting.  And they probably are, but that’s his problem.

Think hard, and some man or woman you’ve worked with will come to mind.  Picture the person.  Let’s call him Cynical Chad. Now, imagine Chad saying, “Testing? That’s what our users are for!”  You’ve definitely heard someone say this at least once in your career.

This is an oh-so-clever way to imply that the company serially skimps on quality.  Maybe they’re always running behind a too-ambitious schedule.  Or perhaps they don’t like to spend the money on testing.  I’m sure Chad would be happy to regale you with tales of project manager and QA incompetence.  He’ll probably tell you about your own incompetence too, if you get a couple of beers in him.

But behind Chad’s casual maligning of your company lies a real phenomenon.  With their backs against the wall, companies will toss things into production, hope for the best, and rely on users to find defects.  If this didn’t happen with some regularity in the industry, it wouldn’t be fodder for Chad’s predictable jokes and complaints.

The Height of Unprofessionalism

Let’s now forget Chad.  He’s probably off somewhere telling everyone how clueless the VPs are, anyway.

Most of the groups that you’ll work with as a software pro would recoil in horror at a deliberate strategy of using your users as testers.  They work for months or years implementing the initial release and then subsequent features.  The company spends millions on their salaries and on the software.  So to toss it to the users and say “you find our mistakes” marks the height of unprofessionalism.  It’s sloppy.

Your pride and your organization’s professional reputation call for something else.  You build the software carefully, testing as you go.  You put it through its paces, not just with unit and acceptance tests, but with a whole suite of smoke tests, load tests, stress tests, and endurance tests.  QA does exploratory testing.  And then, with all of that complete, you test it all again.
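
To make that concrete, here is a minimal sketch of what one of those post-build smoke tests might look like.  The base URL, endpoint paths, and pass/fail criteria are hypothetical, purely for illustration.

# Hypothetical post-deployment smoke test: hit a few critical endpoints
# and fail loudly if any of them misbehaves.  Paths and URL are made up.
import sys

import requests

BASE_URL = "https://staging.example.com"  # assumed staging environment
CRITICAL_PATHS = ["/health", "/login", "/api/orders"]  # illustrative paths


def smoke_test() -> bool:
    """Return True only if every critical endpoint responds with HTTP 200."""
    for path in CRITICAL_PATHS:
        try:
            response = requests.get(BASE_URL + path, timeout=5)
        except requests.RequestException as error:
            print(f"FAIL {path}: {error}")
            return False
        if response.status_code != 200:
            print(f"FAIL {path}: HTTP {response.status_code}")
            return False
        print(f"OK   {path}")
    return True


if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)

A script like this runs in seconds, so it can gate every release candidate before the heavier load, stress, and endurance testing even begins.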

Only after all of this do you release it to the wild, hoping that defects will be rare.  The users receive a polished product of which you can be proud, not a rough draft that you need their help sorting through.

Users as Testers Reconsidered

But before we simply accept that as the right answer and move on, let’s revisit the nature of these groups.  As I mentioned, the company spends millions of dollars building this software.  This involves hiring a team of experienced and proud professionals, among other things.  Significant time and money go into this effort, and the company has a great deal at stake.

If you earn a living as a salaried software developer, your career will involve moving from one group like this to another.  In each of these situations, anything short of shipping a polished product smacks of failure.  And in each of these situations, you’ll encounter a Chad, accusing the company of just such a failure.

But what about other situations?  Should enlisting users as testers always mean a failure of due diligence?  Well, no, I would argue.  Sometimes it’s a perfectly sound business or life decision.

Deploying Guerrilla Tactics to Combat Stupid Tech Interviews

I’ve realized something about my situation.  I work for myself, building businesses and still consulting occasionally.  But of course that’s not news to me.  Nor is the fact that I’ve moved out to a quiet, remote place where I wear T-shirts exclusively, fish a lot, work when I feel like it from a room in my house, and often cook dinners over a fire in my backyard.  The realization came from marinating in that lifestyle for a while and then noticing that I have absolutely no reason to pull any punches with my opinions.  No affiliations, no politics, no optics to manage.  So why not have some fun expressing those opinions, provocative or not, as DaedTech posts?

Today, I’d like to take on the subject of tech interviews.  Of course, talking about the deeply flawed hiring process isn’t new for this blog.  But I’m going to take it a step further by suggesting how we, as individuals, can try to fight back against Big Tech Interview.

The seed for this came from an idle internet clicking sequence that brought me to a blog.  The company to which the blog belongs, Byte by Byte, offers the motto, “your one stop shop for acing your coding interview.”  Below that, it says, “master the coding interview game” (emphasis mine).  It struck me then.  Yes, of course.  It really, truly is a game, and a stupid one at that.  But let me come back to this cottage industry, a sort of Princeton Review for tech companies, later.

The History of the Job Interview

For this history, I’ll offer an excerpt from my book, Developer Hegemony, describing the history of the job interview in general.

In 1921, tired of hiring college graduates that didn’t know as much as he did, Thomas Edison made up a giant trivia questionnaire to administer to inbound applicants. According to Mental Floss, questions included “Who invented logarithms?” and “Why is cast iron called Pig Iron?” If you look at the sorts of questions that modern day tech companies seem to think they’re cute for asking, courtesy of cio.com, they include such profundities as “Why is the Earth round?” and “How much do you charge to wash every window in Seattle?” If you mixed Edison’s and tech companies’ questions together, you’d be hard pressed to tell the difference.

To summarize, almost 100 years ago, an aging, eccentric, and incredibly brilliant inventor decided one day that he didn’t like hiring kids that weren’t his equals in knowledge. He devised a scheme off the cuff to indulge his preference and we’re still doing that exact thing about a century later. But was it at least effective in Edison’s day? Evidently not. According to the Albert Einstein archives, Albert Einstein would not have made the cut. So the biggest, trendiest, most forward thinking tech companies are using a scheme that was dreamed up on a whim and was dead on arrival in terms of effectiveness.

But surely it’s evolved somehow. Right? Well, no, at least not in any meaningful way. In this piece from Business Insider about the “evolution” of the job interview, we can see that what’s actually changed is the media for asking dumb trivia questions. In Edison’s day, interviewers had to get cute face to face. Now they can do it over the phone, through a computer screen or even via a mobile app. Who knows what the future will hold for the job interview; they may be able to beam the stupid directly into your cerebral cortex!

Google Looks Critically at Tech Interviews

In the book, I cover a lot more ground than I can or will here.  I lay out a case for how uniquely pernicious this interview process is for tech.  It artificially depresses software developers’ wages and manufactures job scarcity in a market where demand for our labor is absolutely incredible.  But let’s seize on a different point for this particular post.

I have specific styles of modern tech interviews in my sights as worse than others.  Specifically, the whiteboard interview, the trivia/brain-teaser interview, and the “Knuth Fanatic,” algorithm-obsessed interview.  These serve mainly to make the interviewer feel smart, rather than to reveal anything about candidates.  But don’t take it from me.  Laszlo Bock, former head of Google HR, said this:

On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

And also this:

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess.

Pair Programming Benefits: The Business Rationale

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their Retrace product that consolidates all of your production monitoring needs into one tool.

In the course of my work as a consultant, I wind up working with many companies adopting agile practices, most commonly Scrum.  Some of these practices they embrace easily, such as continuous integration.  Others cause some consternation.  But perhaps no practice furrows more brows in management than pair programming.  Whatever pair programming benefits they can imagine, they always harbor a predictable objection.

Why would I pay two people to do one job?

Of course, they may not state it quite this bluntly (though many do).  They may talk more generally in terms of waste and inefficiency.  Or perhaps they offer tepid objections related to logistical concerns.  Doesn’t each requirement need one and only one owner?  But in almost all cases, it amounts to the same essential source of discomfort.

I believe this has its roots in early management theories, such as scientific management.  These gave rise to the notion of workplaces as complex systems, wherein managers deployed workers as resources intended to perform tasks repetitively and efficiently.  Classic management theory wants individual workers at full utilization.  Give them a task, have them specialize in it, and let them realize efficiency through that specialty.

Knowledge Work as a Wrinkle

Historically, this made sense.  And it made particular sense for manufacturing operations with a global focus.  These organizations took advantage of hyper-specialization to realize economies of scale, which they parlayed into a competitive advantage.

But fast forward to 2017 and think of workers writing software instead of assembling cars.  Software developers do something called knowledge work, which has a much different efficiency profile than manual labor.  While you wouldn’t reasonably pay two people to pair up operating one shovel to dig a ditch, you might pay them to pair up and solve a mental puzzle.

So while the atavistic aversion to pairing makes sense given our history, we should move past that in modern software development.

To convince reticent managers to at least hear me out, I ask them to engage in a thought exercise.  Do they hire software developers based on how many words per minute they can type?  What about how many lines of code per hour they can crank out?  Neither of these things?

These questions have obvious answers.  After I hear those answers, I ask them to concede that software development involves more thinking than typing.  Once they concede that point, the entrenched idea that attacking a problem with two people is wasteful becomes a little less entrenched.  And that’s a start.

Why Production Monitoring Can Come Too Late

Editorial Note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look around at how their offering can help you hunt down issues from development to production.

I’ve spent a number of years, now, writing software.  At the risk of dating myself, I worked on software in the early 2000s.  Back then, you couldn’t take quite as much for granted.  For example, while organizations considered source control a good practice, forgoing it wouldn’t have constituted lunacy the way it does today.

As a result of the difference in standards, my life shipping software looked different back then.  Only avant-garde organizations adopted agile methodologies, so software releases happened on the order of months or years.  We thus reasoned about the life of software in discrete phases.  But I’m not talking about the regimented phases of the so-called “waterfall” methodology.  Rather, I generalize it to these phases: build, prep, run.

During build, you mainly solved the problem of cranking through the requirements as quickly as possible.  Next up, during prep, you took this gigantic sprawl of code that only worked on dev machines and started to package it into some kind of deployable product.  This might have meant early web servers or even CDs at the time.  And, finally, came run.  During the run phase, you’d maintain vigilance, waiting for customer issues to come streaming in.

Bear in mind that we would, of course, work to minimize bugs and issues during all of these phases.  But at that time, in most organizations, having issues during the “run phase” constituted a good problem to have.  After all, it meant you had reached the run phase.  A shocking amount of software never made it that far.

Monitoring and Software Maturity

We’ve come a long way.  As I alluded to earlier, you’d get some pretty incredulous looks these days for not using source control.  And you would likewise receive incredulous looks for a release cycle spanning years, divided into completely disjoint phases.  Relatively few shops view their applications’ production behavior as a hypothetical problem for a far-off date anymore.

We’ve arrived at this point via some gradual, hard-won victories over the years.  These have addressed the phases I mentioned and merged them together.  Organizations have increasingly tightened the feedback loop with the adoption of agile methodologies.  Alongside that, vastly improved build and deployment tooling has transformed “the build” from “that thing we do for weeks at the end” to “that thing that happens with every commit.”  And, of course, we’ve gotten much, much better at supporting software in production.

Back in the days of shrink-wrap software and shipping CDs, users reported problems via phone call.  For a solution, they developed workarounds and waited for a patch CD in the mail.  These days, always-connected devices allow for patches with arbitrary quickness.  And we have software that gets out in front of production issues, often finding them even before users do.

Specifically, we now have sophisticated production monitoring software.  In some cases, this means simply watching for outages and supplying alerts.  But we also have sophisticated application performance monitoring (APM) capabilities.  As I said, we’ve come a long way.
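
For illustration, here is a minimal sketch of the simplest form of that idea, watching for outages and supplying alerts.  The URL, polling interval, and alert mechanism are hypothetical placeholders rather than any particular monitoring product.

# Naive uptime monitor (illustrative sketch only).  A real monitoring or
# APM tool does far more: metrics, tracing, error grouping, dashboards.
import time

import requests

URL = "https://app.example.com/health"  # assumed health-check endpoint
CHECK_INTERVAL_SECONDS = 60
FAILURE_THRESHOLD = 3  # alert only after several consecutive failures


def send_alert(message: str) -> None:
    # Placeholder: a real system would page someone or post to team chat.
    print(f"ALERT: {message}")


def monitor() -> None:
    consecutive_failures = 0
    while True:
        try:
            response = requests.get(URL, timeout=10)
            healthy = response.status_code == 200
        except requests.RequestException:
            healthy = False

        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == FAILURE_THRESHOLD:
                send_alert(f"{URL} failed {FAILURE_THRESHOLD} checks in a row")

        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()

Real APM products go far beyond this, capturing performance data and error details rather than just up/down status, but the underlying feedback loop is the same: check continuously and alert before the users have to call you.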
