The Importance of Performance on the Development Side
Editorial note: I originally wrote this post for the Monitis blog. You can check out the original here, at their site. While you’re there, take a look at their offering for monitoring your software in production.
In the software development world, we pay a ton of attention to the performance of our code in production. And rightfully so. A slow site can make you hemorrhage money in a variety of ways.
To combat this, the industry has brought some of its most impressive thinking to bear. The entire DevOps movement has focused on getting things to production more efficiently and then managing them more efficiently once there. Modern software development emphasizes this, staffs for it, and invests in it.
But looking beyond that, we leverage powerful tools. Monitoring solutions let us get out in front of problems before users discover them. Alerting solutions let us configure those monitors to notify the right people at the right time. Entire organizations and industries have sprung up around creating seamless SaaS experiences for end users.
But in spite of all this, I’ve found that we have a curious blind spot as an industry.
A Tale of Misery
Not too long ago, I sat in a pub having a beer, waiting on my food to arrive. With me sat a colleague who works for a custom app dev agency. In this capacity, he visits clients and works on custom software solutions in their environments.
Over our beers, he described their environment and work. They had some impressive development automation. This included a continuous integration setup, gated builds, and push-button or automated deployments to various environments. And once things were in production, they had impressive instrumentation for logging, tracking, and alerting about potential issues.
I told him that he was lucky. He countered in a way that caught me a bit off guard. “Yeah, it’s impressive, but actually working there as a developer is pretty rough.”
I asked him to clarify, so he did. He told me about having to use slow, old development machines. He talked about unit test suite runs taking forever locally and even longer in the build environments. They had a lot of best practices in place, but actually getting anything done was really hard. Even the source control server was flaky, sometimes kicking back attempted commits and creating issues.
This struck me as a fascinating contrast.
Beware Development Costs
When we talk about production and deployment automation, we think in terms of operational concerns and costs. In other words, we think about the operation of our software in production.
But we can zoom out a little and think of operational cost from the whole business perspective. Shopify defines business operations as, “everything that happens within a company to keep it running and earning money.” If the company writes software, this includes writing software.
So, in a very real sense, my colleague was living an operations problem. While the software might deploy effectively and run well in production, the operational piece of creating that software had become a nightmare. Development was woefully inefficient, creating a lot of unneeded cost.
In my travels consulting, I actually see this ironic situation more than you’d think. Let’s take a look at some ways that software development outfits bleed money (and developer morale) without necessarily realizing it.
Inefficient Source Control
If you work somewhere that uses GitHub Enterprise to manage your source control workflow, count yourself lucky. You may struggle to understand the plight of the typical enterprise software developer.
Not all enterprises struggle the same way, but most of them seem to struggle, and mightily. Walk into one of these places, and you might see their source control of choice and think, “Really? That’s still a thing in 2017?”
You get lightning-fast commits, conflict-minimizing branch/merge strategies, and the ability to work easily through periods of non-connectivity. Not everyone else does, though. Some people still wrestle with multi-hour checkins and multi-day "merge parties," and they may hoard untracked changes just to avoid the tool.
Not surprisingly, this has a devastating effect on team productivity. Lost work, frustration, and slowness take their toll.
Development Environments on the Cheap
This one has boggled my mind for years. The developers want two or three monitors, but get only a small, standard-issue one. They want 16 gigs of memory, but instead get only eight or even four with which to run the company-mandated, heavyweight, customized IDE. They want a slick diff tool, but nope. I could go on and on.
Developers, as a population, tend to love gadgets, tech, and toys. I get why you might not hand them a blank check and tell them to go nuts. But at the same time, I cannot fathom why anyone would skimp on things that let them develop software more efficiently.
If you assume a going market rate of $100 per hour for custom app dev, the folly of this becomes apparent. An extra monitor need only save an hour of work over the lifetime of a project to pay for itself. The same goes for a nice diff tool or editor. For the extra memory, maybe you're talking about two or three hours per developer. And if you invest in something that cuts build time significantly, you'll probably recoup the cost in a day or two.
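The back-of-envelope math here is simple enough to sketch. The hardware prices below are illustrative assumptions of mine, not figures from any real procurement:

```python
# Back-of-envelope payback calculation for developer tooling upgrades.
# The $100/hour rate comes from the text; the item costs are assumptions.

HOURLY_RATE = 100  # going market rate for custom app dev, $/hour


def payback_hours(cost_dollars, rate=HOURLY_RATE):
    """Hours of developer time an upgrade must save to pay for itself."""
    return cost_dollars / rate


# A $300 second monitor pays for itself after 3 saved hours.
print(payback_hours(300))  # -> 3.0

# A $200 RAM upgrade pays for itself after 2 saved hours.
print(payback_hours(200))  # -> 2.0
```

Against the hundreds of hours a developer spends on a project, those break-even points are trivially small.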
Please, please invest in development environments. Dollar for dollar, this will be one of your best investments ever.
Slow Builds and Environments
The last inefficiency I’ll mention follows a similar trajectory, but with higher stakes. I’m talking about development build machines and environments for development, test, and sandbox.
I’ve encountered a tendency in cost-focused thinking to “save” money here. I believe the line of reasoning is, “Let’s spend the big money on prod and our users, and we’ll be fine with second tier stuff.” This happens most readily in environments where leadership doesn’t recognize the value in having this type of promotion automation in the first place. “Just use the old Exchange server that we retired last month.”
When that happens, they don’t necessarily see all of the idle time that ensues. Developers commit code, which prompts the build machine to wheeze along for minutes or hours. It’s hard to do anything productive during that time without feedback about whether or not you broke the build. So they wait, in the way that XKCD once made famous.
In aggregate, slow development and testing infrastructure becomes a massive bottleneck for the team. The amount of waste that you can incur here is staggering.
Fixing the Situation
How can organizations fix this situation? How can they make it so that people like my colleague can do their work more efficiently and without getting depressed?
Well, it starts first with a mindset shift. Recognize that development costs can be streamlined and minimized, just like any other operational costs. Do some basic ROI calculations on the money invested in software development, and let the developers help you with this.
But you should also define and measure metrics just as you would with production software. How long do builds take, both on the build machine and locally? How much time goes toward resolving merge conflicts? Find a way to capture this data and then to keep an eye on it. And then, if you notice troubling trends, make sure to address them quickly, just as you would with a production app.
Measuring, monitoring, processing, and adapting define our increasingly data-driven approach to business. And that’s a great trend. Just make sure you apply the wisdom to all of your business operations.