DaedTech

Stories about Software


Why Production Monitoring Can Come Too Late

Editorial Note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look around at how their offering can help you hunt down issues from development to production.

I’ve spent a number of years, now, writing software.  At the risk of dating myself, I worked on software in the early 2000s.  Back then, you couldn’t take quite as much for granted.  For example, while organizations considered source control a good practice, forgoing it wouldn’t have constituted lunacy the way it does today.

As a result of the difference in standards, my life shipping software looked different back then.  Only avant-garde organizations adopted agile methodologies, so software releases happened on the order of months or years.  We thus reasoned about the life of software in discrete phases.  But I’m not talking about the regimented phases of the so-called “waterfall” methodology.  Rather, I generalize it to these phases: build, prep, run.

During build, you mainly solved the problem of cranking through the requirements as quickly as possible.  Next up, during prep, you took this gigantic sprawl of code that only worked on dev machines, and started to package it into some kind of deployable product.  This might have meant early web servers or even CDs at the time.  And, finally, came run.  During run phase, you’d maintain vigilance, waiting for customer issues to come streaming in.

Bear in mind that we would, of course, work to minimize bugs and issues during all of these phases.  But at that time, for most organizations, having issues during the “run phase” constituted a good problem to have.  After all, it meant you had reached the run phase.  A shocking amount of software never made it that far.

Monitoring and Software Maturity

We’ve come a long way.  As I alluded to earlier, you’d get some pretty incredulous looks these days for not using source control.  And you would likewise receive incredulous looks for a release cycle spanning years, divided into completely disjoint phases.  Relatively few shops view their applications’ production behavior as a hypothetical problem for a far-off date anymore.

We’ve arrived at this point via some gradual, hard-won victories over the years.  These have addressed the phases I mentioned and merged them together.  Organizations have increasingly tightened the feedback loop with the adoption of agile methodologies.  Alongside that, vastly improved build and deployment tooling has transformed “the build” from “that thing we do for weeks at the end” to “that thing that happens with every commit.”  And, of course, we’ve gotten much, much better at supporting software in production.

Back in the days of shrink-wrap software and shipping CDs, users reported problems via phone call.  For a solution, they developed workarounds and waited for a patch CD in the mail.  These days, always-connected devices allow for patches with arbitrary quickness.  And we have software that gets out in front of production issues, often finding them even before users do.

Specifically, we now have sophisticated production monitoring software.  In some cases, this means simply watching for outages and supplying alerts.  But we also have sophisticated application performance monitoring (APM) capabilities.  As I said, we’ve come a long way.
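
To put a rough shape on the simplest version of that idea, here’s a minimal sketch in Python of an uptime-and-latency check.  The endpoint, threshold, and alerting mechanism are placeholders of my own, and real APM products do vastly more (tracing, error grouping, aggregation across servers), but the core “watch and alert” loop looks something like this.

#!/usr/bin/env python3
# Minimal sketch: poll an endpoint and alert on outages or slow responses.
# The URL, threshold, and alert mechanism are placeholders for illustration.
import time
import urllib.request
from urllib.error import HTTPError, URLError

URL = "https://example.com/health"   # hypothetical health-check endpoint
LATENCY_THRESHOLD_SECONDS = 2.0      # arbitrary threshold for illustration

def alert(message):
    # Stand-in for paging, email, or a chat webhook.
    print(f"[ALERT] {message}")

def check_once():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=5):
            elapsed = time.monotonic() - start
            if elapsed > LATENCY_THRESHOLD_SECONDS:
                alert(f"Slow response: {elapsed:.2f}s")
    except HTTPError as error:
        alert(f"Server returned an error: HTTP {error.code}")
    except URLError as error:
        alert(f"Outage or unreachable: {error.reason}")

if __name__ == "__main__":
    check_once()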

Read More


Integrating APM into Your Testing Strategy

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their tooling to help you with your APM needs.

Does your team have a testing strategy?  In 2017, I have a hard time imagining that you wouldn’t at least have some kind of strategy, however rudimentary.  Unlike a couple of decades ago, you hear less and less about people just changing code on the production server and hoping for the best.

At the very least, you probably have a QA group, or at least someone who serves in that role prior to shipping your software.  You write the code, do something to test it, and then ship it once the testers bless it (or at least notate “known issues”).

From there, things probably run the gamut among those of you reading.  Some of you probably do what I’ve described and little more.  Some of you probably have multiple pre-production environments to which a continuous integration setup automatically deploys builds.  Of course, it only deploys those builds assuming all automated unit, integration, and smoke tests pass and assuming that your static analysis doesn’t flag any show-stopper issues.  Once deployed, a team of highly skilled testers performs exploratory testing.  Or, maybe, you do something somewhere in between.
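
If it helps to see that gating idea spelled out, here’s a minimal sketch in Python of a build gate that only promotes a build when every check passes.  The specific commands are hypothetical stand-ins of my own; a real CI server (Jenkins, TeamCity, GitHub Actions, and so on) expresses the same logic in its own pipeline configuration.

#!/usr/bin/env python3
# Minimal sketch of a deployment gate: run every automated check, and only
# hand off to deployment if all of them pass. Commands are placeholders.
import subprocess
import sys

CHECKS = [
    ["pytest", "tests/unit"],          # hypothetical unit test suite
    ["pytest", "tests/integration"],   # hypothetical integration tests
    ["pytest", "tests/smoke"],         # hypothetical smoke tests
    ["run-static-analysis", "--fail-on", "critical"],  # placeholder analyzer command
]

def gate_and_deploy():
    for command in CHECKS:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Check failed: {' '.join(command)} -- not deploying.")
            sys.exit(1)
    # All checks passed; hand off to whatever performs the actual deployment.
    subprocess.run(["deploy-to-staging"])  # placeholder deployment step

if __name__ == "__main__":
    gate_and_deploy()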

But, whatever you do, you can always do more.  In fact, I encourage you always to look for new ways to test.  And today I’d like to talk about an idea for just such a thing.  Specifically, I think you can leverage application performance management (APM) software to help your testing efforts.  I say this in spite of the fact that most shops have traditionally taken advantage of these tools only in production.

Read More


Transitioning from Manual to Automated Code Review

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right.

I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  “There is no substitute for peer review.”

Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

Today I want to talk about how one could replace manual code review with automated code review only, should the need arise.

Why Would The Need for This Arise?

You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise in particular probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, “we’ve decided we don’t value the activity.”

First and most egregiously, a team’s manager might come along with an eye toward cost savings.  “I need you to spend less time reading code and more time writing it!”  In that case, you’ll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn’t micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

But let’s consider less unfortunate situations.  Perhaps you currently work on a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or, perhaps you work for a larger team, but everyone gradually becomes so busy and fragmented in responsibility that no one has time for much manual peer review.

In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.

Read More


Things Everyone Forgets Before Committing Code

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look around at some of the other posts as well.

Committing code involves, in a dramatic sense, two universes colliding.  Firstly, you have the universe of your own work and metaphorical workbench.  You’ve worked for some amount of time on your code, hopefully in a state of flow.  And secondly, you have the universe of the team’s communal work product.  And so when you commit, you force these universes together by foisting your recent work on the team.

In bygone years, this created far more heartburn for the average team than it does today.  Barbaric as it may seem, I can actually remember a time when some professional software developers didn’t use source control.  A “commit” thus involved literally overwriting a file on a shared drive, obliterating all trace of the previous version.  (Sometimes, you might create a backup copy of the folder).  Here, your universe actually kind of ate the team’s communal universe.

More Frequent Commits, Fewer Problems

But, even in the earliest days of my career, lack of source control represented sloppy process.  I remember installing the practice in situations that lacked it.  But even with source control in place, people tended to go off and code in their own world for weeks or even months during feature development.  Only when release time neared did they start to have what the industry affectionately calls “merge parties,” wherein the team would spend days or weeks sorting out all of the instances where their changes trampled one another’s.

In the intervening years, the industry has learned the wisdom of continuous integration (CI).  CI builds on the premise, “if it hurts, do it more,” by encouraging frequent, lower-stakes commits.  These days, most teams commit on the order of hours, rather than weeks or months.  This significantly lowers the onerousness of universes colliding.

But it doesn’t eliminate the problem altogether, even in teams that live the CI dream.  No matter how frequently you do it and how sophisticated the workflows around modern source control become, you still have the basic problem of putting your stuff into the team’s universe.  And this comes with the metaphorical risk of leaving your tools lying around where someone can trip over them.

So today, let’s take a look at some of the most common things everyone forgets before committing code.  And, for the purposes of the post, I’ll remain source control agnostic, with the parlance “commit” meaning generally to sync your files with the team’s.
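
As a taste of where this goes, here’s a minimal sketch in Python of automating a couple of such checks as a git pre-commit hook.  The specific markers it looks for are placeholders of my own rather than the list that follows, but they illustrate the idea of catching forgettable things before they land in the team’s universe.

#!/usr/bin/env python3
# Minimal sketch of a git pre-commit hook (save as .git/hooks/pre-commit and
# make it executable). The markers below are placeholders for whatever your
# team actually cares about catching before a commit.
import subprocess
import sys

SUSPICIOUS_MARKERS = ["TODO: remove", "Console.WriteLine", "debugger;"]

def staged_files():
    # Ask git for the files staged in this commit (added, copied, or modified).
    output = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in output.splitlines() if line]

def main():
    problems = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                content = handle.read()
        except OSError:
            continue
        for marker in SUSPICIOUS_MARKERS:
            if marker in content:
                problems.append(f"{path}: contains '{marker}'")
    if problems:
        print("Commit blocked by pre-commit checks:")
        for problem in problems:
            print("  " + problem)
        sys.exit(1)

if __name__ == "__main__":
    main()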

Read More


Static Analysis Issue Management Gets a Boost

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, have a look at the features in NDepend’s latest version.

Years ago, I led a team of software developers.  We owned an eclectic portfolio of software real estate.  It included some WinForms, WebForms, MVC, and even a bit of WPF sprinkled into the mix.  And, as with any eclectic neighborhood, the properties came in a variety of ages and states of repair.

Some of this code depended on a SQL Server database that had a, let’s just say, casual relationship with normalization.  Predictably, this caused maintenance struggles.  But, beyond that, it caused a credibility gap when we spoke to non-technical stakeholders.  “What do you mean you can’t give a definitive answer to how many sales we made last year?”  “Well,” I’d try to explain, “I can’t say for sure because the database doesn’t explicitly define the concept of a sale.”

Flummoxed by the mutual frustration, I tried something a bit different.  Since I couldn’t easily explain the casual, implied relationships in the database, I decided to do a show and tell.  First, I went out and found a static analyzer for database schema.  Then, I brought in some representative stakeholders and said, “watch this.”  With a flourish (okay, not really), I turned the analyzer loose on the schema.

While they didn’t grok my analogies, the tens of thousands of warnings and errors made an impression.  In fact, it sort of terrified them.  But this did bridge the credibility gap and show them that we all had some work to do.  Mission accomplished.

Static Analyzer Issues

I engaged in something of a relationship hack with my little ploy.  You see, I knew how this static analyzer would behave because I know how all of them tend to behave.  They earn their keep by carpet-bombing your codebase with violations and warnings.  Out of the box, they overwhelm, and then they leave it to you to dial it back.  Truly, you can take this behavior to the bank.

So I knew that this creaky database would trigger thousands upon thousands of violations.  And then I just sat back waiting for the “magic” to happen.

I mention all of this to paint a picture of how static analyzers typically regard the concept of “issue.”  All categories of severity and priority generally roll up into this catch-all term, and it then refers to the itemized list of everything.  Your codebase has issues and it has lots of them.  This is how the tool earns its mind share and keep — by proving how much it can surface, and then doing so.

Thus you might define the concept simply as “all that stuff the static analyzer finds.”
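
To make that concrete, here’s a minimal sketch in Python of the “dial it back” step: take the flood of raw issues and filter it down by severity so the team can start with the handful that matter.  The issue structure here is invented for illustration; actual tools (NDepend included) model issues far more richly.

#!/usr/bin/env python3
# Minimal sketch: given a pile of analyzer issues, summarize by severity and
# pull out the ones worth acting on first. The Issue shape is made up here.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Issue:
    rule: str
    severity: str   # e.g. "critical", "major", "minor", "info"
    location: str

def summarize(issues):
    # Count issues per severity so the sheer volume becomes digestible.
    return Counter(issue.severity for issue in issues)

def worth_fixing_now(issues, accepted=("critical", "major")):
    # Filter the catch-all list down to the severities the team agrees to act on.
    return [issue for issue in issues if issue.severity in accepted]

if __name__ == "__main__":
    raw = [
        Issue("AvoidGodClasses", "critical", "OrderProcessor.cs:12"),
        Issue("NamingConvention", "info", "Util.cs:88"),
        Issue("UnusedVariable", "minor", "Report.cs:40"),
    ]
    print(summarize(raw))          # severity counts: one critical, one info, one minor
    print(worth_fixing_now(raw))   # just the issues worth acting on first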

Read More