DaedTech

Stories about Software


APIs and the Principle of Least Surprise

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, have a look at some of the other authors who write for their blog.

I remember something pretty random about my first job.  In my cubicle, I had a large set of metal shelves that held my various and sundry programming texts.  And, featured prominently on that shelf, I had an enormous blue binder.

Back then, I spent my days writing drivers and custom Linux kernel modules.  I had to because we made use of a real-time interface to write very precisely timed machine control code.  As you might imagine, a custom Linux kernel in 2005 didn’t exactly come with a high production quality video walking users through the finer points.  In fact, it came with only its version of the iconic Linux “man pages” for guidance.  These I printed out and put into the aforementioned blue binder.

I cannot begin to tell you how much I studied this blue binder.  I pored through it for wisdom and clues, feeling a sense of great satisfaction when I deciphered some cryptic function example.  This sort of satisfaction defined a culture, in fact.  You wore mastery of a difficult API as a badge of honor.  And, on the flip side, failure to master an API represented a failure on your part.

Death of “Manual Culture”

What a difference a decade makes.  No longer do we have battleship-gray Windows applications with dozens of menus and sub-menus, with hundreds of settings and thousands of “advanced settings”.  No longer do we consider reading a gigantic blue documentation binder to be a good use of time.  And, more generally, no longer do we put the onus of navigating a learning curve on the user.  Instead, we look to lure users by making things as easy as possible.

Ten years ago, a coarse expression described people’s take on this responsibility.  I’ll offer the safe-for-work version: “RTM” or “Read the Manual.”  Ten years later, we have seen the death of RTM culture.  This applies to APIs and to user experiences in general.
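
To make that shift concrete, consider a contrast in API design.  The sketch below is my own hypothetical illustration, not something from the original post: the first function forces callers to consult a manual to decode its flags, while the second aims for the principle of least surprise with a descriptive name, keyword-only options, and a sensible default.

```python
# Hypothetical illustration of the same operation exposed two ways.

# The "read the manual" style: a cryptic name and a magic flag that force
# callers to dig through documentation to learn what 0x02 means.
def xfrm(data, mode, flags=0x02):
    ...

# The "least surprise" style: a descriptive name, keyword-only options, and
# a sensible default that make call sites self-explanatory.
def normalize_whitespace(text: str, *, collapse_tabs: bool = True) -> str:
    """Collapse runs of whitespace in text into single spaces."""
    cleaned = text.expandtabs() if collapse_tabs else text
    return " ".join(cleaned.split())

# A caller can guess what this does without opening a binder.
print(normalize_whitespace("hello\t  world"))  # prints "hello world"
```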

Read More


A Look at the History of RDBMS

Editorial Note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look at Monitis’s offering for all things related to website and network monitoring.

If you had to pick a unifying technology to bring all developers together, then you could do worse than selecting the relational database.  Of course, no topic can truly unify all developers.  But most of us who have written code for any length of time have at least dealt with a database in some capacity or another.

And, why not?  We could boil software down to two core components: data and behavior.  So, just as we all learn programming languages to express behavior, we also learn some means of recording and persisting our precious data.

When we put enough of this data together in some organized format, we have a database.  When we organize that database in a manner known as “relational,” we have a relational database.  And then, when we add functionality for managing and optimizing access to that relational data, we have a relational database management system (RDBMS).
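
To ground those definitions, here is a minimal sketch using SQLite through Python’s standard library.  The tables and data are invented for illustration; the point is simply that related facts live in separate tables linked by key columns, and the RDBMS handles the mechanics of storing, joining, and aggregating them.

```python
# A minimal sketch of "relational" data: facts live in tables, and the
# relationship between tables is expressed through a shared key column.
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway, in-memory database

conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),
    total REAL
)""")

conn.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")
conn.execute("INSERT INTO orders VALUES (10, 1, 25.00), (11, 1, 9.99), (12, 2, 42.00)")

# The RDBMS takes care of joining related rows and aggregating them.
for name, order_count in conn.execute(
    """SELECT c.name, COUNT(o.id)
       FROM customers c JOIN orders o ON o.customer_id = c.id
       GROUP BY c.name"""
):
    print(name, order_count)
```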

No doubt you have some familiarity with these products.  They include such industry mainstays as Oracle, Microsoft’s SQL Server, PostgreSQL, and MySQL, among others.

In fact, they blend so seamlessly into the scenery that you can easily take them for granted.  But where did they come from and why?  And, how have they evolved over the years?  Today, let’s take a look back at the history of the RDBMS.

Read More


How to Analyze a Static Analyzer

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look around at some of the other posts, and sign up for the RSS feed, if you’re so inclined.

First things first.  I really wanted to call this post, “who will analyze the analyzer,” because I fancy myself clever.  This title would have mirrored the relatively famous Latin question from Juvenal’s Satires, “who will guard the guards themselves?”  But I suspect that the confusion I’d cause with that title would outweigh any appreciation of my cleverness.

So, without any literary references whatsoever, I’ll talk about static analyzers.  More specifically, I’ll talk about how you should analyze them to determine fitness for your purpose.

Before I dive into that, however, let’s do a quick refresher on the definition of a static analyzer.  This Stack Overflow question nails it pretty well, right at the beginning of the accepted answer.

Analyzing code without executing it. Generally used to find bugs or ensure conformance to coding guidelines.

Succinctly put, Aaron, and just so.  Most of what we do with code tends to be dynamic analysis.  Whether through automated tests or manual running of the program, we fire it up and see what happens.  Static analyzers, on the other hand, look at the code and use it to make deductions.  These include both deductions about runtime behavior and about the codebase itself.
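
That definition lends itself to a tiny demonstration.  The toy analyzer below is my own illustrative sketch, not anything from the original post: it parses Python source into a syntax tree and flags bare except clauses without ever executing the code.  Real analyzers apply hundreds of far more sophisticated rules, but the principle is the same.

```python
# A minimal sketch of static analysis: inspect source code without running it.
import ast

SOURCE = """
def risky():
    try:
        do_something()
    except:
        pass
"""

tree = ast.parse(SOURCE)  # build a syntax tree; do_something() never runs
for node in ast.walk(tree):
    # A bare "except:" has no exception type attached to the handler.
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"Line {node.lineno}: bare 'except:' swallows all errors")
```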

What’s Your Goal?

Why rehash the definition?  Well, because I want to underscore the point that you can do many different things with static analyzers.  Even if you just think of them as “that thing that complains at me about the Microsoft guidelines,” they cover a whole lot more ground.

As such, your first step in sizing up the field involves setting your own goals.  What do you want out of the tool?  Some of them focus exclusively on code quality.  Others target specific concerns, such as behavioral correctness or security.  Still others simply offer so-called “linting.”  Some do a mix of many things.

Lay out your goals and expectations.  Once you’ve done that, you will find that you’ve narrowed the field considerably.  From there, you can proceed with a more apples-to-apples comparison.

Read More


How to Write Test Cases

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, check out Stackify’s APM offering.

As I’ve mentioned before on this blog, I have a good bit of experience writing unit tests.  In fact, I’ve managed to parlay this experience into a nice chunk of my living.  This includes consulting, training developers, building courses, and writing books.  From this evidence, one might conclude that unit testing is in demand.

Because of the demand and driving interest, I find myself at many companies, explaining the particulars of testing to many different people.  We’d like some of that testing magic here, please.  Help us boost our quality.

A great deal of earnest interest in the topic lays the groundwork for improvement.  But it also lays the groundwork for confusion.  When large groups of people set out to learn new things, buzzwords can get tossed around and meaning lost.

Against this backdrop, I can recall several different people asking, “how should we/our people write good test cases?”  If you’re familiar with the precise terms at play, you might scratch your head at this question, given my unit testing expertise.  A company brought me in to teach developers to write automated unit tests, and someone is asking me about a term loosely associated with the QA group.  What gives?

But in fact, this really just begs the question, “what is a test case?”  And why might it vary depending on who writes it and how?
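
To anchor the vocabulary before getting to that answer, here is a minimal, hypothetical example of the developer-side artifact: an automated unit test that executes one small unit of behavior and checks the result.  A QA-style test case, by contrast, is typically a written procedure of steps, inputs, and expected results.  The function and tests below are invented purely for illustration.

```python
# A developer-written automated unit test: small, executable, and focused
# on one unit of behavior.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)


if __name__ == "__main__":
    unittest.main()
```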

Read More


An Introduction to the Types of Cloud Computing

Editorial Note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look around at some of their other authors and content.

The folks at Gartner have something awesome called the “hype cycle”.  The cycle contains a “peak of inflated expectations” and a “trough of disillusionment”.  So, that alone gives it a pretty significant amusement factor.

But beyond the amusement lies an important insight into our collective psychology.  Those of us working in tech work in a booming and constantly evolving industry.  Because of this, we find ourselves bombarded with buzzwords.  These generate excitement at first and disillusionment later.  Eventually, they reach equilibrium.

Gartner uses this set of observations to advise companies about risk.  But we can use it to identify a term’s likelihood to induce buzzword fatigue and produce derisive satire.

Let’s get specific.  Do you remember a few years back, when “X as a service” really took off?  The world seized on the promise of the cloud.  Don’t maintain it yourself — have a service do it!  As the term rocketed up the peak of inflated expectations, everyone wanted a part of the cloud.

But then it fell into the trough of disillusionment, and satire ensued.  Twitter accounts offered “sarcasm as a service” to poke fun at the hype.  If you saw an offering for “everything as a service,” you had no idea whether it was serious.

Since this time, however, these offerings have ascended the so-called “slope of enlightenment” and established themselves as mainstream.  Actually, let me correct that.  They have established themselves as foundational to the modern internet.

Let’s now unpack this X as a service concept a bit.  In order to do that, I’ll offer a story in contrasts.

The “As a Service” Concept

Imagine that I own a small business.  In this capacity, I want to keep track of prospects, leads, and customers for sales purposes.  You can think of this as “customer relationship management” (CRM) software.

Back in the early days of my career (late 90s, early 2000s), you might have done this with Excel.  At least, you would have used Excel until it became too unwieldy.  Then, you’d have gone to Best Buy and purchased software that you installed from a CD.  Finally, you’d have installed the “client” on the PC of anyone who needed to use it, while installing the “server” on some jack-of-all-trades machine running Windows 2000 or something.  From there, using it was as easy as making sure not too many people tried to change things at once.

Fast forward a couple of decades and that seems… odd.  These days, you’d probably just navigate to something like salesforce.com and create a trial account.  Certainly small business owners would take this approach.  Larger organizations with more privacy concerns might still set up servers and install their own software.  But even the ones doing this would probably host a web app and have “clients” access it via browser.

This tale drives at the essence of “as a service.”  Stuffed into that small phrase, you find the large, important concept of “let someone else worry about it.”  You shouldn’t need to think about clients, servers, networks, and the like to have a CRM system.  Let someone else worry about it.  You just want to sign in via the browser.
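
As a rough sketch of what “let someone else worry about it” looks like from the consumer’s side, here is a hypothetical example; the host, endpoint, and token are invented placeholders rather than any real vendor’s API.  The vendor runs the servers, and the customer’s entire footprint is a browser session or a few authenticated HTTPS calls like this one.

```python
# A sketch of the "as a service" consumer experience: no client install,
# no jack-of-all-trades server in the closet, just an authenticated request.
# The host, endpoint, and token below are fictional placeholders.
import requests

BASE_URL = "https://crm.example.com/api/v1"   # hypothetical hosted CRM
HEADERS = {"Authorization": "Bearer <your-api-token>"}

response = requests.post(
    f"{BASE_URL}/leads",
    json={"name": "Prospective Customer", "status": "new"},
    headers=HEADERS,
    timeout=10,
)
response.raise_for_status()
print("Created lead:", response.json()["id"])
```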

This concept has become so important and so ubiquitous that it drives today’s internet.  But not all cloud, “as a service” concepts are created equal.  Let’s take a look at the major types of cloud computing, by conceptual level.

Read More