DaedTech

Stories about Software

Make Alerting Apps Work for You

Editorial Note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look at the monitoring solutions and integrations they have to offer.

Some years back, I worked as a CIO.  During my tenure, I had a head of IT support reporting to me.  He did his job quite well and had a commendable sense of duty and responsibility, and I will always think of him as a model employee.

I recall an oddly frustrating conversation that I had with him once, however.  He struggled to explain what I needed to know, and I struggled to get him to understand the information I needed.

Long story short, he wanted me to sign off on switching data centers to a more expensive vendor.  Trouble was, this switch would have put us over budget, so I would have found myself explaining this to the CFO at the next executive meeting.  I needed something to justify the request, and that was what I sought.

I kept asking him to make a business case for the switch, and he kept talking about best practices, SLAs, uptime, and other bits of shop talk.  Eventually, I framed it almost as a mad lib: "If we don’t make this change, the odds of a significant outage that costs us $_____ will increase by _____%.  In that case, we stand to recoup this investment in _____ months."  In the end, he understood.  He built the business case, I took it to the executive meeting, and we made the improvements.

As much as we might like it, people in technical leadership positions often cannot get into the weeds when talking shop.  If this seems off-putting to techies, I’d say think of it this way: techies hack tools, code, and infrastructure, while managers and leaders hack the business.

What Metrics Should the CIO See?

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, give NDepend a try — download it and see if your code falls in the dreaded Zone of Pain.

I’ve worked in the programming industry long enough to remember a less refined time.  During this time, the CIO (or CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, if they lacked better measures, they might use that one.

Today, you rarely, if ever, see that happen.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.

If you measure your organization’s test coverage and hold them accountable, you can rest assured that they will improve test coverage.  Improved quality, however, remains largely an orthogonal concern.
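
To make that concrete, consider a hypothetical sketch of the kind of test that moves the coverage needle without moving quality.  The OrderProcessor and Order types here are invented, and I’m using MSTest attributes, but any test framework behaves the same way.

    [TestMethod]
    public void Process_Runs_Without_Blowing_Up()
    {
        // Exercises plenty of production code, so every line it touches counts as covered...
        var processor = new OrderProcessor();
        processor.Process(new Order { Id = 1, Total = 100m });

        // ...but asserts nothing, so it can never fail and never catches a regression.
    }

Coverage goes up; the odds of shipping a defect do not go down.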

The CIO’s Leaky Abstraction

The issue here stems from what I might call a leaky organizational abstraction.  If the CIO came from a software development background, this gets even more thorny.

Consider that a CIO or high level manager generally concerns himself with organizational strategy.  He approves and monitors budgets, signs off on major initiatives, decides on the fate of applications in the application portfolio, etc.  The CIO, in other words, makes business decisions that have a technical flavor.  He deals in profits, losses, revenues, expenses, and organizational politics.

Through that lens, he might look at quality problems across the board as hits to the company’s reputation or drags on the bottom line.  “We’re losing subscribers due to these bugs that happen at each roll out.  We estimate that we lose $10,000 more each month in revenue.”  He would then pull the trigger on business solutions: hiring consultants to fix this problem, realigning his org chart, putting off milestones to focus on quality, etc.

But if he dives into the weeds, he’s shedding a business person’s hat for a techie’s.  “Move over architects,” he says, “I know how you can fix this at the line level.  I call it ‘automated test coverage’ and I order you to start doing it.”  In a traditionally organized corporate structure, the CIO begins doing the job of folks in his organization at his peril.

How to Write Code that Operations Will Like

Editorial Note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look around at the options you have for monitoring your production site and all of its supporting infrastructure.

In recent years, the DevOps movement has gained a significant amount of steam.  Historically, organizations approached software creation and maintenance in the same way that they might approach physical, mechanical processes such as manufacturing.  This meant carving the work into discrete components and then asking people to specialize in those components.

If you think of a factory floor, this becomes easy to picture.  One person screws two components together over and over, while another person operates a drill press over and over.  When you specialize this way, the entire process gains in efficiency.  At least, so it goes with mechanical processes.

With knowledge work, we haven’t realized the same kind of benefit.  In retrospect, this makes sense.  It makes sense because, unlike mechanical work, knowledge work involves little true repeatability.  It continually presents us with problems that differ at least somewhat.

As we’ve come to recognize that fact, we’ve begun to blend concerns together.  Scrum teams blur the lines among QA, developers, and business analysts.  And DevOps does the same with initial development and production operation of software.  If you can, I wholeheartedly encourage you to embrace DevOps and to blend these concerns.  Your entire work product will improve.

Unfortunately, not every shop can do this — at least not immediately.  Progress comes slowly in these places because of regulatory and compliance concerns, or simply because change takes time in the enterprise.  But that shouldn’t stop you from making strides.

Even if development and operations remain separate, they can at least communicate and help each other.  Today, I’d like to talk about how developers can write code that makes life easier for ops folks.

CodeIt.Right Rules Explained, Part 2

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right to help with automated code review.

A little while back, I started a post series explaining some of the CodeIt.Right rules.  I led into the post with a narrative, which I won’t retell.  But I will reiterate the two rules that I follow when it comes to static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

Because I follow these two rules, I find myself researching every fix suggested to me by my tooling.  And, since I’ve gone to the trouble of doing so, I’ll save you that same trouble by explaining some of those rules.  Specifically, I’ll examine three more CodeIt.Right rules today and explain the rationale behind them.

Mark assemblies CLSCompliant

If you develop in .NET, you’ve no doubt run across this particular warning at some point in your career.  Before we get into the details, let’s stop and define the acronyms.  “CLS” stands for “Common Language Specification,” so the warning informs you that you need to mark your assemblies “Common Language Specification Compliant” (or non-compliant, if applicable).

Okay, but what does that mean?  Well, you can easily forget that many programming languages target the .NET runtime besides your language of choice.  CLS compliance indicates that any language targeting the runtime can use your assembly.  You can write language specific code, incompatible with other framework languages.  CLS compliance means you haven’t.

Want an example?  Let’s say that you write C# code and that you decide to get cute.  You have a class with a “DoStuff” method, and you want to add a slight variation on it.  Because the new method adds improved functionality, you decide to call it “DOSTUFF” in all caps to indicate its awesomeness.  No problem, says the C# compiler.
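
In code, that cute decision might look something like the following sketch (the containing class name is invented; only the method names matter):

    public class StuffDoer
    {
        public void DoStuff()
        {
            // The original functionality.
        }

        public void DOSTUFF()
        {
            // The "improved" variant, differing from DoStuff only by casing.
        }
    }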

And yet, if you try to do the same thing in Visual Basic, a case-insensitive language, you will encounter a compiler error.  You have written C# code that VB code cannot use.  Thus you have written non-CLS-compliant code.  The CodeIt.Right rule exists to inform you that you have not specified your assembly’s compliance or non-compliance.

To fix, go specify.  Ideally, go into the project’s AssemblyInfo.cs file and add the following to call it a day.
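
The attribute in question, applied at assembly scope, looks like this:

    using System;

    [assembly: CLSCompliant(true)]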

But you can also specify non-compliance for the assembly to avoid a warning.  Of course, you can do better by marking the assembly compliant on the whole and then hunting down and flagging non-compliant methods with the attribute.
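
As a rough sketch of that better approach, you mark the whole assembly compliant and then flag only the offending member.  (The Calculator class and its uint-taking overload are invented for illustration; uint does not exist in all CLS languages, which makes that overload non-compliant.)

    using System;

    [assembly: CLSCompliant(true)]

    public class Calculator
    {
        public void Add(int value) { }

        // This overload exposes a non-CLS-compliant type, so it gets flagged individually.
        [CLSCompliant(false)]
        public void Add(uint value) { }
    }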

Specify IFormatProvider

Next up, consider a warning to “specify IFormatProvider.”  When you encounter this for the first time, it might leave you scratching your head.  After all, “IFormatProvider” seems a bit… technician-like.  A more newbie-friendly name for this warning might have been, “you have a localization problem.”

For example, consider a situation in which some external source supplies a date.  Except, they supply the date as a string, and you have the task of converting it to a proper DateTime so that you can perform operations on it.  No problem, right?
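
In code, the obvious first pass looks something like this (variable names are mine):

    string input = "03/02/1995";

    // Uses the current thread's culture to decide which number is the month and which is the day.
    DateTime date = DateTime.Parse(input);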

That should work, provided provincial concerns do not intervene.  For those of you in the US, “03/02/1995” corresponds to March 2nd, 1995.  Of course, should you live in Iraq, that date string would correspond to February 3rd, 1995.  Oops.

Consider a nightmare scenario wherein you write some code with this parsing mechanism.  Since you operate in the US, with most of your customers also in the US, this works for years.  Eventually, though, your sales group starts making inroads elsewhere.  Years after the fact, you wind up with a strange bug in code you haven’t touched for years.  Yikes.

By specifying a format provider, you can avoid this scenario.
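
As a sketch of the fix, you pass the parse an explicit IFormatProvider, stating whose conventions the string follows rather than leaving it to whatever culture the code happens to run under:

    using System.Globalization;

    string input = "03/02/1995";

    // States explicitly that the string follows US conventions, regardless of the machine's locale.
    DateTime date = DateTime.Parse(input, CultureInfo.GetCultureInfo("en-US"));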

Nested types should not be visible

Unlike the previous rule, this one’s name suffices for description.  If you declare a type within another type (say a class within a class), you should not make the nested type visible outside of the outer type.  So, the following code triggers the warning.
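
A minimal offending example (type names invented) looks like this:

    public class Outer
    {
        // Visible to the world outside of Outer, which is what trips the rule.
        public class Nested
        {
        }
    }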

To understand the issue here, consider the object oriented principle of encapsulation.  In short, hiding implementation details from outsiders gives you more freedom to vary those details later, at your discretion.  This thinking drives the rote instinct for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.

To some degree, the same reasoning applies here.  If you declare a class or struct inside of another one, then presumably only the containing type needs the nested one.  In that case, why make it public?  On the other hand, if another type does, in fact, need the nested one, why scope it within a parent type and not just the same namespace?

You may have some reason for doing this — something specific to your code and your implementation.  But understand that this is weird, and will tend to create awkward, hard-to-discover code.  For this reason, your static analysis tool flags your code.
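
In sketch form, the correction usually goes one of two ways: hide the nested type if only the container needs it, or promote it to a top-level type if others do.

    // Option 1: only Outer uses it, so hide it.
    public class Outer
    {
        private class Nested
        {
        }
    }

    // Option 2: other types need it, so it lives on its own in the namespace.
    public class FormerlyNested
    {
    }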

Until Next Time

As I said last time, you can extract a ton of value from understanding code analysis rules.  This goes beyond just understanding your tooling and accepted best practice.  Specifically, it gets you in the habit of researching and understanding your code and applications at a deep, philosophical level.

In this post alone, we’ve discussed language interoperability, geographic maintenance concerns, and object oriented design.  You can, all too easily, dismiss analysis rules as perfectionism.  They aren’t; they have very real, very important applications.

Stay tuned for more posts in this series, aimed at helping you understand your tooling.

The Fastest Way to Get to Know NDepend

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and, well, get to know it.

I confess to a certain level of avoidance when it comes to tackling something new.  If pressed for introspection, I think I do this because I can’t envision a direct path to success.  Instead, I see where I am now, the eventual goal, and a big uncertain cloud of stuff in the middle.  So I procrastinate by finding other things that need doing.

Sooner or later, however, I need to put this aside and get down to business.  For me, this usually means breaking the problem into smaller problems, identifying manageable next actions, and tackling those.  Once things become concrete, I can move methodically.  (As an aside, this is one of many reasons that I love test driven development — it forces this behavior.)

When dealing with a new product or utility that I have acquired, this generally means carving out a path toward some objective and then executing.  For instance, “learn Ruby” as a goal would leave me floundering.  But “use Ruby to build a service that extracts data via API X” would result in a series of smaller goals and actions.  And I would learn via those goals.

For NDepend, I have a recommendation along these lines.  Let’s use the tool to help you visualize the reality of your codebase better than anyone around you.  In doing this, you will get to know NDepend quickly and without feeling overwhelmed.

Running Analysis

First things first.  Before you can do much of anything with NDepend, you need to set it up to analyze your codebase.  You can get here with a pretty simple sequence of steps.

  • Download NDepend and unpack it.
  • Run the Visual Studio Extension installer and install the extension for your preferred version.
  • Launch Visual Studio and open the solution you want to visualize.
  • Attach a new NDepend project to your solution, selecting the assemblies you’d like to analyze and then clicking “analyze” (leave the “build report” checkbox checked).

That’s all there is to getting started and running an analysis.  I won’t go into more detail here, as NDepend has good online documentation and the nuts and bolts are beyond the scope of what I want to cover.  But you’ll find it fairly easy to get going here on your own, and to get your code analyzed.
