DaedTech

Stories about Software

Years of Lessons Learned from Home Automation

I’ve had three variations of my home automation setup. The first incarnation was a series of Linux command line utilities and cron jobs. There was some vague intention of a GUI, but that never really materialized. The second was a very enterprise-y J2EE implementation that involved Spring, MongoDB, layered architecture, and general wrangling with Java. The current and most recent reboot involves nodes in a nod to Service Oriented Architecture and the “Internet of Things.” As I mentioned in my last post, I’m turning a Raspberry Pi into a home automation controlling REST endpoint and then letting access and interaction happen in a more distributed, ad-hoc fashion.

The flow of these applications seems to reflect the trajectory of my career from entry level developer to architect — from novice and hobbyist to experienced professional and, well, still hobbyist. And I think that, to an extent, they also reflect the times and trends in technology. It’s interesting to reflect on it.

When I started out as a programmer in the working world, I was doing a lot in the Linux world with C and C++. In that world, there was no cred to writing any kind of GUI — it was all about being close to the metal, and making things work behind the scenes. GUIs were for the faint of heart. I wrote drivers and kernel space code and automated various interactions between hardware and software. This mentality was carried over into the world of hobby when I discovered home automation. X10 was the province of hobbyist electrical engineers who wrote code out of necessity, and I fell in nicely with this approach. It was all about banging away, hacking, and making things work. Architecture, planning, testing, deployment strategies, etc… who cares? Making it work was all that mattered. I was a beginner.

As my career wound on, I started doing more and different kinds of programming. I found my way into web development with Java, did things in the .NET space, worked with databases, and started to become interested in architecture, software processes and honing my craft. With my newfound knowledge of a breadth of technologies and better software development approaches, I decided on a home automation reboot. I chose Linux and Java to keep the budget as shoe-string as possible. For a server, I could use the machine I took with me to college — a 400 MHz P2 processor and 384 meg of RAM. The hardware, OS, and software were thus all free, and all I had to do was pop for the X10 modules at $10-$20 a piece. Not too shabby.

I was cost conscious, and I had a technical vision for the architecture. I knew that if I created a web application on the server that what I did would be accessible from anywhere: Windows computers, Linux computers, even cell phones (which were a lot more limited as nodes in a network 5-6 years ago when I started laying this out). Java was a good choice because it gave me a framework to integrate all of the different functionality that I could imagine. And I imagined plenty of it.

There was no shortage of gold plating. Part of this was because I was interested in learning new technologies as long as I was doing hobby work and part of this was because I hadn’t yet learned the value of limiting myself to the minimum set of features needed to get going. I had advanced technically enough to see the value in architecture and having a plan for how I’d handle future added features, but I hadn’t advanced enough to keep the system flexible without putting more in up front than I needed. A web page with a link for turning a lamp on may not need data access, domain, service, and presentation layers. And, while I had grand plans to integrate things like home inventory management, recipe tracking, a family calendar and more, those never actually materialized due to how busy I tend to be. But I was practicing my craft and teaching myself these concepts by exploring them, so I don’t look back ruefully. Lesson learned.

Now, I’m rebooting. My old P2 machine is dying slowly but surely, and I recently purchased a lake house where I want to replicate my setup. I don’t have another ancient machine, and it’s time to get more repeatable anyway. A minimal REST endpoint on a Raspberry Pi is cheap and repeatable, and it lets me build the system in my house(s) more incrementally and flexibly. If I want to use WPF to build a desktop app for controlling the thing, then great. If I want to use PHP or Java on a server, then also great. ASP MVC, whatever. Anything that can speak REST will work, and everything speaks REST.

Maybe in another three years, I’ll do the fourth reboot and think of how silly I was “back then” in 2013. But for now, I’ll take the lessons that I’ve learned in my reboots and reflect. I’ve learned that software is about solving problems for people, not just for the sake of solving the problem. A cron job that I can tweak turns my lights on and off, but it offers nothing that keeps the system from being weird and confusing to my non-technical girlfriend. I’ve learned that building more than what you need right now is a guarantee that you’ll have more complexity than you need and less benefit. I’ve learned that a system composed of isolated, modular components is better than a monolithic juggernaut that can handle everything. And, most importantly, I’ve learned that you’ve never really got it all figured out; whatever grand plan you have right now is going to need constant care and refinement.

RESTful Home Automation

Here are the general steps to get a REST service going on your Raspberry Pi, using Python. One thing that I’ve learned from blogging over the last several years is that extremely detailed, granular how-tos tend to be the most yawned-at posts. So this is just a quick overview of how you can accomplish the goal without going into a lot of detail. If you want that detail, you can drill into the links I’m providing or else feel free to ask questions in comments or via email/twitter.

  1. Go out and buy these things: USB transceiver, plugin transceiver, lamp module (optional)
  2. On your Pi, install apache with sudo apt-get install apache2.  This is the web server.
  3. Also on your Pi, install web.py with sudo apt-get install python-webpy.  This is the module that makes setting up a REST service a snap.
  4. Install some driver dependencies (I will probably later roll these into what I’m doing) with “sudo apt-get install libusb-1.0 python-usb”.  Here are more detailed instructions from the page of the home automation python driver that I’m using.
  5. Follow the instructions on that page for disabling interfering kernel drivers.
  6. Following the referenced instruction page, give your user-space account permission to access the USB device, but note a typo: where he says “sudo nano /etc/udevrules.d/cm19a.rules”, you really want “sudo nano /etc/udev/rules.d/cm19a.rules”.
  7. Now go get my stuff from github and run a web server using python rest.py <port>

That’s all there is to it.  Right now, at the time of writing, you would go to http://<your pi’s ip>:<port>/office/on to turn on everything assigned to X10 house code A.  A and office are both hard-coded, but that’s going to change in the next few days as I grow this service to support adding rooms and lights via PUT, and storing them as JSON documents on the server.  You’ll be able to add a light with PUT, supplying the room, light, and X10 code, and then you’ll subsequently be able to toggle it with http://pi:port/room/light/{on/off}.
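
And since anything that speaks REST can drive it, here’s a quick sketch of what a client might look like from the .NET side. The IP address and port are placeholders for whatever your Pi uses; nothing here is specific to my code beyond the URL scheme described above.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LightSwitchClient
    {
        static async Task Main()
        {
            // Base address is a placeholder -- substitute your Pi's IP and the port you chose.
            using (var client = new HttpClient { BaseAddress = new Uri("http://192.168.1.100:8080/") })
            {
                // Equivalent to browsing to http://<your pi's ip>:<port>/office/on
                var response = await client.GetAsync("office/on");
                Console.WriteLine(response.StatusCode);
            }
        }
    }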

You can also just install Andrew’s driver and use it as-is.  It even has a web mode that supports query parameters.  The reason I didn’t do things that way is because (1) I wanted a REST service and (2) I wanted to be able to customize everything as I went while learning a new language.

The Hidden Cost of Micromanagement

I think everyone has encountered the micromanaging type at some point or another in their career. Most obviously or memorably, this might have been a boss who took the attitude, “no decision too small!” But it might also have been a coworker’s boss or just someone else at the company you had the misfortune to share an office with. If you’ve managed to avoid this your whole career, count yourself fortunate, but understand that there is a type of person out there that feels as though he is the only one capable of making decisions and then acts accordingly.

The up-front cost of this behavior is fairly obvious. It clearly doesn’t scale well, so micromanagers wind up being huge bottlenecks to productivity, with all decisions essentially on hold until that person gets to them. Micromanaging also has the effect of creating learned helplessness in those around them, rendering everyone else less productive. People learn that there’s no point in trying to have autonomy, so they check out and stop paying attention. Some will even engage in passive aggressive behaviors that harm the organization, such as malicious compliance. “You want me to ship this code as-is, huh? Okie-dokie.” (“Heh, heh, well, it crashes, but hey, you’re the boss!”)

Being subjected to a micromanager is hard to take and often stressful, so turnover is typically high in such scenarios. And that’s where a hidden cost comes in. People working under the purview of a micromanager don’t suddenly and automatically switch gears when their situation improves — they often bring their previous behaviors along for the ride. If you’ve ever seen the movie The Shawshank Redemption, recall the scene with Red in the grocery store when he gets paroled to work after 40 years in prison:

Red: Restroom break?
Boss: You don’t need to ask me every time you need to go take a piss. Just go.
Red (thinking to himself): Forty years I’ve been asking permission to piss. I can’t squeeze a drop without say-so.

Red’s boss and the grocery store are experiencing what I’m describing as the hidden cost of micromanagement. He has an employee so used to having his day micromanaged that the employee is maddeningly, annoyingly dependent. In your own travels, it’s easy to spot the signs of a previous victim of micromanagement: endless updates about details and inconsequential minutiae, constantly asking permission to do anything, non-stop preemptive apologies and general insecurity about work quality. That all might seem harmless, if a little sad, but the problem is that this lack of autonomy and unwillingness to take charge of situations carries a real productivity hit for both parties. The new manager/lead should be focusing on bigger picture issues, such as removing impediments from the path of her team — she shouldn’t have to worry about whether one of her team members’ Outlook inbox is 80% full. That’s a meaningless detail that nobody should report to anyone, and yet tons and tons of micromanagement victims do.

It goes without saying that you should avoid micromanagement. It’s a terrible interaction and leadership strategy on its face that is largely the purview of people with psychological problems. But look for past victims of micromanagement and give them a hand or a leg up. Encourage them. Work with them. And most of all, let them know that they’re capable of making good decisions and worthy of being trusted to do so. Your organization will be much better off for it.

Lessons in Good Naming through Absurdity

There’s something I’ve found myself stressing a lot lately to people on my team. It’s a thing that probably seems to most like nitpicking, but I think is one of the most, if not the most, under-stressed aspects of programming. I’m talking about picking good names for things.

I’ve seen that people often give methods, variables and classes abbreviated names. This has roots in times where saving characters actually mattered. Methods in C had names like strcat (concatenate a string), and that seems to have carried forward to modern languages with modern resource situations. The reader of the method is left to try to piece together what the abbreviation means, like the recipient of a text message from someone who thinks that teenager text-speak is cute.

There are other naming issues that occur as well. Ambiguity is a common one, where methods have names like “OnClick” or even “DoStuff.” You’ll also have methods that are occasionally misleading — a method called “ReadRecords” that reads in some records and then actually updates them as well. Giving this a simple name like “ReadAndUpdateRecords” would take care of this, but people don’t do it. There are other examples as well.
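
To make that concrete, here’s a contrived sketch of the kind of mismatch I mean — the types and fields are hypothetical, but the pattern should look familiar:

    using System;
    using System.Collections.Generic;

    public class Record
    {
        public string Payload { get; set; }
        public DateTime LastRead { get; set; }
    }

    public class RecordStore
    {
        private readonly List<Record> _records = new List<Record>();

        // Misleading: the name promises a read, but the method quietly updates state too.
        // "ReadAndUpdateRecords" would cost a few characters and spare the next reader a surprise.
        public List<Record> ReadRecords()
        {
            foreach (var record in _records)
            {
                record.LastRead = DateTime.Now; // the "update" hiding behind the "read"
            }
            return _records;
        }
    }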

All of this probably seems like nitpicking, as I said a moment ago, but I contend that it isn’t. Code is read way, way more often than it is written/modified, and it’s usually read by people who don’t really understand what was going through the coder’s mind at the time of writing. (This can even include the original coder, revisiting his own code weeks or months later.) Anything that furthers understanding becomes important for saving minutes or even hours spent trying to understand. Methods with names that accurately advertise what they do save the maintenance programmer from needing to examine the implementation to see how it works. When there is a standard like this through the entirety of the code base, the amount of time saved by not having to study implementations is huge.

Toward the end of achieving this goal, one idea I had was a “naming” audit. This activity would consist of the team assembling for an hour or so, perhaps over pizza one lunch or evening, and going through the code base looking at all of the names of methods, variables, and classes. Any names that weren’t accurate or sufficiently descriptive would be changed by the group to something that was clear to all. I think the ROI on this approach would be surprisingly high.

But if you can’t do that, maybe a more distributed approach would work — one that combines the best elements of shaming and good-natured ribbing, like having the person who breaks the build buy lunch. So maybe any time you encounter a poorly named method in the code, you rename it to something ridiculous to underscore the naming problem that preceded it. You bring this to the method author’s attention and demand a better name. Imagine seeing your methods renamed to things like:

  • ShockAndAwe()
  • Blammo(int x)
  • Beat(int deadHorse)
  • Onoes(string z)
  • UseTheForceLuke()

I’m not entirely sure if I’m serious about this or not, but it would make for an interesting experiment. Absurd names kind of underscore the “dude, what is this supposed to mean” point that I’m making, and it’s a strategy other than endless nagging, which isn’t really my style. I’m not sure if I’ll try this out, but please feel free to do so yourself and, if you do, let me know how it goes.

The Joy of Adding Code to Github

These days, a good portion of my time is spent on overhead tasks: management tasks, strategy meetings, this sort of thing. In my role, I also am responsible for the broad architectural decisions. But what I don’t get to do nearly as often between 9 and 5 these days is write code. All of the code that I write is prototype code. I’m sure that I’m not unique in experiencing drift away from the keyboard at times in my career — some of you do too. As you move into senior or lead type roles and spend time doing things like code reviews, mentoring, etc, it happens. And if you really love programming — making a thing — you’ll feel a loss come with this.

Of course, feeling this loss doesn’t have to be the result of having more overhead or leadership responsibilities. It can happen if you find yourself working on a project with lots of nasty legacy code, where changes are agonizingly slow and error prone. It can happen in a shop with such intense and byzantine waterfall processes that most of your time is spent writing weird documents and ‘planning’ for coding. It can happen in an environment dominated by loudmouths and Expert Beginners where you’re forced by peer pressure or explicit order to do stupid things. It can happen when you’ve simply lost belief in the project on which you’re working or the organization for which you’re working. And it almost certainly will happen to you at some point in your career.

Some call it “burnout,” some call it “boredom” and some call it “losing your way.” It has a lot of names. All of them mean that you’re losing connection with the thing you’ve liked: programming, or making a thing out of nothing. And you need that. It’s your livelihood and, perhaps more importantly, your happiness. But you can get it back, and it isn’t that hard.

Recently, I decided to order a few more home automation pieces (blog to follow at some point) and reboot the design of my home automation server to make use of a Raspberry Pi and a REST service in Python. I don’t know the first thing about Python, but after the parts arrived, and I spent a few hours reading, poking around, failing, and configuring, I was using Fiddler to turn all the lights off downstairs (code on github now — work in progress).

There is nothing quite like the feeling of creating that new repository. It’s daunting; I’m about to start learning a new programming language and my efforts in it will most certainly be daily-WTF-worthy until I learn enough to be passingly idiomatic in that language. It’s frustrating; it took me 15 minutes to figure out how to reference another code file. It’s tiring; between a day job taking 50+ hours per week and the work I do with my blog and Pluralsight, I’m usually coding at midnight on school nights. But forget all that because it’s exhilarating; there’s nothing like that feeling of embarking on a journey to build a new thing. The code is clean because you haven’t had a chance to hose it up yet. There is no one else to tell you how, what, or when because it’s entirely your vision. The pressure is off because there are no users, customers, or deadlines yet. It’s just you, building something, limited only by your own imagination.

And what a powerful feeling. If you’ve got a case of the professional blues, go out and grab this feeling. Go dream up some new project to start and chase after it. Recapture the enjoyment and the excitement of the desire to invent and create that probably got you into this stuff in the first place.

Asking Questions That Change The World (Or At Least Your Group)

I recently asked a semi-rhetorical question on Twitter about health insurance in the USA. Specifically, it seems deeply weird to me that health insurance is tied in with employment. I mean, your employer doesn’t subsidize your homeowner’s, auto, or renter’s insurance, so why health insurance? Someone answered that this was an end-run around salary caps and restrictions that just kind of stuck around. This rang a bell, and I looked it up. Here’s an explanation that details how caps on wages during WWII were circumvented by offering this perk and making it tax deductible, and so a long, nonsensical tradition was born, established, and worked into our culture to a degree where everyone thinks, “that’s just the way things work.”

Many people respond to hearing questions of “why do we do this, anyway,” with something like, “hey, yeah, that’s a good question!” Once it’s pointed out to them, they recognize that perhaps an entrenched practice is worth questioning. Others balk at the notion and prefer doing things that are traditional, well, just because we’ve always done it that way. There seems to be something about human nature that finds ritual deeply comforting even when the original reasoning behind it has long expired. White dresses on wedding days, “God bless you” after sneezes, using signatures to indicate official permission, and many more are things that we simply do because it’s what we know, and if someone asked you “why,” you’d probably say, “huh, I don’t know.”

In this manner, software engineering resembles life. Within a group, things that originally had some purpose, reasonable or misguided, eventually become part of some unquestioned routine. I’ve seen shops where everyone was forced to use the same diff tool, where try-catch blocks were required in every single method, where every class had to implement IDisposable, and more, for reasons no one could remember. Obviously, this isn’t good. In life, tradition probably has an anthropologically stabilizing role about which I won’t speculate here, but in a software group, there’s really no upside.

Accordingly, I don’t want to team up with people that blindly follow cargo cult processes. It’s bad for the team. But who do I want on a team? It isn’t just people that are willing to forgo routines and rituals when they’re called into question and evaluated. I want people that think to do the questioning in the first place.

Don’t get me wrong. I’m not looking for iconoclasts that question everything whether or not there’s reason to question it or that rail against everything that others want to do. I’m looking for people that take no assumptions on faith and are constantly using data and objective metrics to reevaluate the validity of everything that they’re doing, even when those things are regarded as “no-brainers.” I want people that get creative when solving problems, expanding their thinking beyond obvious approaches and into the realm of saying “what if we could do it without doing this thing that we ‘have’ to do?”

It’s this kind of thinking that gave rise to NoSQL; what if a relational database weren’t required for every application? It’s this kind of thinking that turned the internet from a way to view documents into an application medium; what if there were applications that didn’t require CDs and installers? It’s this kind of thinking that changes the world, in software and in life. I want people on my team that wonder why their employer pays for their insurance, anyway.

Practical Math for Programmers: O Notation

It’s been a while since I’ve done one of these, and I didn’t want to let the series die, so here goes.

The “Why This Should Matter to You” Story

You’ve probably heard people you work with say things like, “this is going to run in O of N squared, but mine is going to run in O of N, so we should use mine.” You understand that they’re talking about performance and, more specifically, time, and that whatever N is, squaring it means more, and more is bad when it comes to time. From a realpolitik perspective, you’re aware that this O of N thing is something that lends programmers some cred if they know how to talk about it and from a career perspective, you understand that the Googles and Microsofts of the world seem interested in it.

But when your coworker talks about using his “O of N” solution, you don’t actually know what that means. And because of this, you tend to lose arguments. But beyond that, you’re also losing out on the ability to understand and reason about your code in a way that will let you see more pieces of the puzzle when weighing design tradeoffs. You could have a more intuitive understanding of how your code will perform in real world circumstances than you currently do.

Math Background
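
Here’s a small sketch of the kind of method I have in mind; the names are mine, and the one-second Thread.Sleep is just a stand-in for “work that takes a second.”

    using System.Threading;

    public class RuntimeExamples
    {
        // Takes roughly n seconds: one one-second pause per trip through the loop.
        public void RunLinear(int n)
        {
            for (int index = 0; index < n; index++)
            {
                Thread.Sleep(1000); // stand-in for a method that takes one second
            }
        }
    }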

If you look at this method, do you know how long it takes to run? My guess is that you’d correctly and astutely say, “it depends on what you pass in for n.” In this case, assuming that the method called inside the loop does what’s advertised, the method will take n seconds: pass in 10 and it takes 10 seconds, but pass in 100 and it takes 100 seconds. The runtime varies linearly with n. This is said to run in O(n).

Let’s look at another example.
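
Something along these lines — another method in the same sketch, again with the sleep standing in for a second of work:

    // The one-second pause now runs n times for each of n outer iterations.
    public void RunQuadratic(int n)
    {
        for (int index = 0; index < n; index++)
        {
            for (int innerIndex = 0; innerIndex < n; innerIndex++)
            {
                Thread.Sleep(1000);
            }
        }
    }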

This one may be a little less intuitive, but you’ll definitely notice it’s going to be slower. How much? Well, it varies. If you pass in 2 for n, the outer loop will execute twice, and each time it does, the inner loop will execute twice, for a total of 2*2 = 4 executions. If you pass in 10, the outer loop will execute 10 times, with the inner loop executing ten times for each of those, so the total becomes 10*10 = 100 executions of the method. For each n, the value is n^2, so this is said to be O(n^2) runtime.

Let’s take a look at yet another example.
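
Say, something like this (same sketch, names mine):

    // Ignores its parameter entirely and always takes about two seconds.
    public void RunConstant(int n)
    {
        Thread.Sleep(2000);
    }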

In this (poorly written because of the unused parameter) method, it doesn’t matter what you pass in for n. The algorithm will always execute in two seconds. So, this means that it’s O(2), right? Well, no, actually. As it turns out, a constant time algorithm is denoted in O notation by O(1). The reason for this is mainly convention, but it underscores an important point. A “constant time” algorithm may not be constant, per se, and it may not be one or any other specific number, but a “constant time” operation is one that is executed in an amount of time that is bounded by something independent of the problem at hand.

To understand what I mean here, consider the first two examples. Each of those runtimes depended on the input “n”. The third example’s runtime depends on something (the method called, for instance), but not n. Simplifying the runtime isn’t unique to constant time. Consider the following:
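
Here’s a sketch of what I mean, mirroring the second example:

    // Same shape as the second example, but the inner loop only runs up to index.
    public void RunTriangular(int n)
    {
        for (int index = 0; index < n; index++)
        {
            for (int innerIndex = 0; innerIndex < index; innerIndex++)
            {
                Thread.Sleep(1000);
            }
        }
    }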

This code is identical to the second example, except that instead of innerIndex varying up to n, it varies only up to index. This is going to run in half the time that example two ran in, but we don’t say that it’s O(n^2/2). We simply say this is also O(n^2). Generally speaking, we’re only interested in the factor of n and not any coefficients of n. So we don’t talk about O(2n), O(n/12) or anything else, just n. Same goes for when n is squared or cubed, and the same thing goes for constant time — there’s no O(1/2) or O(26), just O(1).

One last thing to note about O notation (or Big O Notation, more formally) is that it constitutes a complexity upper bound. So, for example, if I stated that all of the examples were O(n^3), I’d technically be accurate, since all of them will run in O(n^3) or better. However, if you cite a higher order of complexity when discussing with others and then pedantically point out that you’re technically right, they’ll most likely not grasp what you’re talking about (and, if they do, they’ll probably want to stop talking to you quickly).

How It Helps You

In the same way that knowing about design patterns in software helps you quickly understand a complex technique, O notation helps you understand and discuss how complex and resource-intensive a solution is as a function of its inputs. You can use this form of notation to talk about both algorithm runtime as well as memory consumed, and it gives you concrete methods of comparison, even in cases where you don’t know the size of the input.

For instance, if you’re doing some test runs on a small set of data before pushing live to a web server and you have no way of simulating the high volume of users that it will have in the wild, this comes in very handy. Simple time trials of what you’re doing aren’t going to cut it because they’re going to be on way too small a scale. You’re going to have to reason about how your algorithm will behave, and O notation helps you do just that. On a small sample size, O(n^3) may be fine, but in production, it may grind your site to a halt. Better to know that going in.

O notation can also help you avoid writing terrible algorithms or solutions that perform wildly worse than others. For instance, consider the case of Fibonacci numbers. These are a sequence of numbers where the nth number is the sum of the two numbers before it: F(n) = F(n-1) + F(n-2). The sequence itself is thus: 1, 1, 2, 3, 5, 8, 13, 21, etc.

Here is an elegant-seeming and crystal clear implementation of Fibonacci:
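
Something like this classic recursive version:

    // Elegant and readable, but the two recursive calls blow up exponentially.
    public static long Fibonacci(int n)
    {
        if (n <= 2)
            return 1;
        return Fibonacci(n - 1) + Fibonacci(n - 2);
    }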

It certainly demonstrates recursion and it’s very understandable. But it also turns out to run in O(1.6^n), which is exponential runtime. We haven’t covered that one yet, but exponential runtime is catastrophic in your algorithms. Ask this thing for the 1000th Fibonacci number and come back at the end of time when it’s done completing your request. As it turns out, there is an iterative solution that runs in linear, O(n) time. You probably want that one.
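
For instance, a sketch of that iterative approach:

    // Builds the sequence from the bottom up: one pass, O(n) time, constant space.
    public static long FibonacciIterative(int n)
    {
        long previous = 0, current = 1;
        for (int i = 1; i < n; i++)
        {
            long next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }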

This is the value of understanding O notation. Sure, without it you get that not all implementations are equally well performing and you may understand what makes some better than others. But understanding O notation gives you an easy way to remember and keep track of different approaches and how efficient they are and to communicate this information with others.

Further Reading

  1. Wikipedia
  2. A nice article for beginners
  3. Thorough explanation from stack overflow ‘wiki’
  4. Academic but approachable explanation from MIT

My Initial Postsharp Setup — Logging at Assembly Granularity

PostSharp and Log4Net

I tweeted a bit about this, and now I’m going to post about my initial experience with it. This comes from the perspective of someone new at this particular setup but familiar with the concepts in general. So, what are PostSharp and log4net?

First up, log4net. This is extremely easy to understand, as it’s a logging framework for .NET. It was ported to .NET from Java and the product log4j a long time ago. It’s a tried and true way to instrument your applications for logging. I’m not going to go into a lot of detail on it (here’s a Pluralsight course by Jim Christopher), but you can install it for your project with Nuget and get started pretty quickly and easily.

PostSharp is a little more complex to explain (there’s also a Pluralsight course on this, by Donald Belcham). It’s a tool for a technique called “aspect-oriented programming” (AOP) which addresses what are known as cross cutting concerns. These are things that are intrinsically non-localized in an application. What I mean is, you might have a module that processes EDI feeds and another one that stores data to a local file, and these modules may be completely isolated from one another in your system. These concerns are localized in a nicely modular architecture. Something like, oh, I dunno, logging, is not. You do that everywhere. Logging is said to be an aspect of your system. Security is another stock example of an aspect.

PostSharp employs a technique called “IL Weaving” to address AOP in a clean and remarkably decoupled way. If you’re a .NET programmer, whether you code in VB, C#, F#, etc., all of your code gets compiled down to what’s known as intermediate language (IL). Then, when the code is actually being executed, this IL is translated on the fly into machine/executable code. So there are two stages of compiling, in essence. In theory, you can write IL code directly. PostSharp takes advantage of this fact, and when you’re building your C# code into IL code, it interposes and injects a bit of its own stuff into the resultant IL. The upshot of all this is that you can have logging in every method in your code base without writing a single call to Logger.Log(something) in any method, anywhere. Let me be clear — you can get all of the benefits of comprehensive logging with none of the boilerplate, clutter, and intensely high coupling that typically comes with implementing an aspect.

Great, But How?

Due to a lack of time in general, I’ve sort of gotten away from detailed how-to posts, for the most part, with screenshots and steps. It’s really time consuming to make posts like that. What I’ll do instead is describe the process and, if anyone has questions, perhaps clarify with an addendum or links or something. Trying to get more agile everywhere and avoid gold-plating :)

And really, getting these things into your project is quite simple. In both cases, I just added a nuget package to a project. For log4net, this is trivial to do. For PostSharp, this actually triggers an install of PostSharp as a Visual Studio plugin. PostSharp offers a few different license types. When you install it in VS, it will prompt you to enter a license key or do a 45 day trial. You can sign up for an express version on their site, and you’ll get a license key that you can plug in. From there, it gets installed, and it’s actually really polished. It even gives you a window in Studio that keeps track of progress in some tutorials they offer for getting started.

With that in place, you’re ready to write your first aspect. These are generally implemented as attributes that you can use to decorate methods, types, and assemblies so that you can be as granular with the aspects as you like. If you implement an attribute that inherits from OnMethodBoundaryAspect, you get a hook in to having code executed on events in the application like “Method Enter,” “Method Leave,” and “Exception.” So you can write C# code that will get executed upon entry to every method.

Here’s a look at an example with some method details elided:
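
Something in this vein — a sketch of mine rather than the exact code, with log4net standing in for whatever logging implementation you prefer:

    using System;
    using log4net;
    using PostSharp.Aspects;

    [Serializable]
    public class LogAttribute : OnMethodBoundaryAspect
    {
        // How (and whether) this gets configured is covered a little further down.
        private static ILog _logger;

        // PostSharp invokes this whenever a decorated method throws -- the method body itself is untouched.
        public override void OnException(MethodExecutionArgs args)
        {
            _logger?.Error("Exception in " + args.Method.Name, args.Exception);
        }
    }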

Leaving aside the logging implementation details, what I’ve done here is define an attribute. Any type or method decorated with this attribute will automatically log any exception that occurred without the code of that method being altered in the slightest. The “MethodExecutionArgs” parameter gives you information that lets you inspect various relevant details about the method in question: its name, its parameters, its return value, etc.

Getting Modular

Okay, so great. We can apply this at various levels. I decided that I wanted to apply it per assembly. I’m currently working at times in a legacy code base where a series of Winforms and Webforms applications make use of a common assembly called “Library.” This code had previously been duplicated, but I made it common and unified it as a step toward architecture improvement. This is where I put my aspect attribute for reference, and I decided to apply this at the assembly level. Initially, I want some assemblies logging exceptions, but not others. To achieve this, I put the following in the AssemblyInfo.cs in the assemblies for which I wanted logging.
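
That amounts to a single line per AssemblyInfo.cs, along these lines (with a using for whatever namespace Library actually keeps the attribute in):

    // In AssemblyInfo.cs, alongside the other assembly-level attributes:
    [assembly: Log]   // the aspect multicasts to every method in this assembly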

This is awesome because even though PostSharp and the Aspect are heavily coupled to the assemblies on the whole (every assembly uses Library, and Library depends on Postsharp, so every assembly depends on PostSharp) it isn’t coupled in the actual code. In fact, I could just remove that line of code and the library dependency, and not touch a single other thing (except, of course, the references to library utilities).

But now another interesting problem arises, which is naming the log files generated. I want them to go in AppData, but I want them named after the respective deliverable in this code base.

And then, in the library project, I have this method inside of the LogAttribute class:
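
The method looks roughly like the following sketch; the company folder name is a placeholder, and it assumes log4net’s FileAppender. It gets called lazily (from OnEntry or wherever makes sense) so the logger is configured before its first use.

    // Inside LogAttribute; requires log4net, log4net.Appender, log4net.Config,
    // log4net.Layout, System.IO, and System.Reflection.
    private static readonly object _padlock = new object();

    private static void InitializeLoggerIfNecessary()
    {
        lock (_padlock)
        {
            if (_logger != null)
                return;

            // Name the log file after the entry executable, under AppData\<CompanyName>.
            var deliverable = Assembly.GetEntryAssembly().GetName().Name;
            var logPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "CompanyName",                       // placeholder for the company folder
                deliverable + ".log");

            var layout = new PatternLayout("%date %-5level %logger - %message%newline");
            layout.ActivateOptions();

            var appender = new FileAppender { File = logPath, AppendToFile = true, Layout = layout };
            appender.ActivateOptions();
            BasicConfigurator.Configure(appender);

            _logger = LogManager.GetLogger(typeof(LogAttribute));
        }
    }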

I’ve made use of the Monostate Pattern to ensure that a single logger instance is configured and initialized and then used by the attribute instances. This is an implementation that I’ll probably refine over time, but it’s alright for a skunkworks. So, what happens is that when the application fires up, I figure out the name of the entry executable and use it to name the log file that’s created/appended in AppData under the company name folder.

This was great until I noticed weird files getting created in that folder. Turns out that NCrunch and other plugins are triggering the code to be invoked in this way, meaning that unit test runners, realtime and on-demand are generating logging. Duh. Oops. And… yikes!

My first thought was that I’d see if I was being run from a unit test and no-op out of logging if that were the case. I found this stack overflow post where Jon Skeet suggested an approach and mentioned that he “[held his] nose” while doing it because it was a pragmatic solution to his problem. Well, since I wasn’t in a pinch, I decided against that.

Maybe it would make sense, instead of figuring out whether I was in a unit test assembly and what other sorts of things I didn’t want to have the logging turned on for, to take a whitelist approach. That way, I have to turn logging on explicitly if I want it to happen. I liked that, but it seemed a little clunky. I thought about what I’d do to enable it on another one of the projects in the solution, and that would be to go into the assembly file and add the attribute for the assembly, and then go into the logger to add the assembly to the whitelist. But why do two steps when I could do one?

I added this method that actually figures out whether the attribute has been declared for the assembly, and I only enable the logger if it has. I’ve tested this out and it works pretty well, though I’ve only been living with it for a couple of days, so it’s likely to continue evolving. But the spurious log files are gone, and MS Test runner no longer randomly bombs out because the “friendly name” sometimes has a colon in it. This is almost certainly not the most elegant approach to my situation, but it’s iteratively more elegant, and that’s really all I’m ever going for.
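
Concretely, the check can be a few lines of reflection against the entry assembly, along these lines — with the caveat that PostSharp can strip multicast attributes from the compiled assembly unless you tell it to persist them, so treat this as the shape of the idea rather than drop-in code:

    // Only enable logging if the entry assembly actually declared [assembly: Log].
    // Under test runners, GetEntryAssembly() may be null or point at the runner itself,
    // so this also quietly turns logging off in those cases.
    private static bool IsLoggingEnabledForEntryAssembly()
    {
        var entryAssembly = Assembly.GetEntryAssembly();
        return entryAssembly != null && Attribute.IsDefined(entryAssembly, typeof(LogAttribute));
    }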

Ideas/suggestions/shared experience is welcome. And here’s the code for the aspect in its entirety right now:
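
Pulling the pieces above together, the whole aspect looks roughly like this — a consolidated sketch rather than a verbatim listing, with the company folder as a placeholder:

    using System;
    using System.IO;
    using System.Reflection;
    using log4net;
    using log4net.Appender;
    using log4net.Config;
    using log4net.Layout;
    using PostSharp.Aspects;

    [Serializable]
    public class LogAttribute : OnMethodBoundaryAspect
    {
        private static ILog _logger;
        private static readonly object _padlock = new object();

        public override void OnEntry(MethodExecutionArgs args)
        {
            InitializeLoggerIfNecessary();
        }

        public override void OnException(MethodExecutionArgs args)
        {
            _logger?.Error("Exception in " + args.Method.Name, args.Exception);
        }

        private static void InitializeLoggerIfNecessary()
        {
            lock (_padlock)
            {
                if (_logger != null || !IsLoggingEnabledForEntryAssembly())
                    return;

                var deliverable = Assembly.GetEntryAssembly().GetName().Name;
                var logPath = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    "CompanyName",      // placeholder for the company folder
                    deliverable + ".log");

                var layout = new PatternLayout("%date %-5level %logger - %message%newline");
                layout.ActivateOptions();

                var appender = new FileAppender { File = logPath, AppendToFile = true, Layout = layout };
                appender.ActivateOptions();
                BasicConfigurator.Configure(appender);

                _logger = LogManager.GetLogger(typeof(LogAttribute));
            }
        }

        private static bool IsLoggingEnabledForEntryAssembly()
        {
            var entryAssembly = Assembly.GetEntryAssembly();
            return entryAssembly != null && Attribute.IsDefined(entryAssembly, typeof(LogAttribute));
        }
    }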

The Value of Failure

Over the course of time leading people and teams, I’ve learned various lessons. I’ve learned that leading by example is more powerful than leading by other attempts at motivation. I’ve learned that trust is important and that deferring to the expertise of others goes a lot further than pretending that you’re some kind of all-knowing guru. I’ve learned that listening to people and valuing their contributions is vital to keeping morale up, which, in turn, is vital to success. But probably the most important thing that I’ve learned is that you have to let people fail.

My reasoning here isn’t the standard “you learn a lot by failing” notion that you probably hear a lot. In fact, I’m not really sure that I buy this. I think you tend to learn better by doing things correctly and having them “click” than by learning what not to do. After all, there is an infinite number of ways to screw something up, whereas precious few paths lead to success. The real benefit of failure is that you often discover that your misguided attempt to solve one problem solves another problem or that your digression into a blind alley exposes you to new things you wouldn’t otherwise have seen.

If you run a team and penalize failure, the team will optimize for caution. They’ll learn to double and triple check their work, not because the situation calls for it but because you, as a leader, make them paranoid. If you’re performing a high risk deployment of some kind, then double and triple checking is certainly necessary, but in most situations, this level of paranoia is counter-productive in the same way it is to indulge an OCD tendency to check three times to see if you locked your front door. You don’t want your team paralyzed this way.

A paranoid team is a team with low morale and often a stifled sense of enjoying what it does. Programming ceases to be an opportunity to explore ideas and solve brain teasers and becomes a high-pressure gauntlet instead. Productivity decreases predictably because of second-guessing and pointless double checking of work, but it’s also adversely affected by the lack of cross-pollination of ideas resulting from the aforementioned blind alleys and misses. Developers in a high pressure shop don’t tend to be the ones happily solving problems in the shower, stumbling across interesting new techniques and having unexpected eureka moments. And those types of things are invaluable in a profession underpinned by creativity.

So let your team fail. Let them flail at things and miss and figure them out. Let them check in some bad code and live with the consequences during a sprint. Heck, let it go to production for a while, as long as it’s just technical debt and not a detriment to the customer. Set up walled gardens in which they can fail and be relatively shielded from adverse consequences but are forced to live with their decisions and be the ones to correct them. It’s easy to harp on about the evils of code duplication, but learning how enormously tedious it is to track down a bug pasted in 45 different places in your code base teaches firsthand why code reuse reduces pain. Out of the blind alley of writing WET code, developers discover the value of DRY.

The walled garden aspect is important. If you just let them do anything at all, that’s called chaos, and you’re setting them up to fail. You have to provide some ground rules that stave off disaster and then within those boundaries you have to discipline yourself to keep your hands off the petri dish in order to see what grows. It may involve some short term ickiness and it might be difficult to do, but the rewards in the end are worth it — a happy, productive, and self-sufficient team.

Kill Tech Patents with Fire And Do It Now

I’ve actually had a few spare hours lately to get ahead on blogging, so I was just planning to push a post for tomorrow, read a little and go to sleep. But then I saw an article that made me get a fresh cup of water, turn my office lamp on, and start writing this post that I’m going to push out instead. There probably won’t be any editing or illustration by the time you read this, and it might be a little rant-ish, so be forewarned.

Tonight, I read through this article on Ars Technica with the headline “Patent War Goes Nuclear.” I think the worst part about reading this for me was that my reaction wasn’t outrage, worry, disgust or really much of anything except, “yep, that makes sense.” But I’ll get back to my reaction in a bit. Let me digress here for a moment to talk about irony.

Irony is a subject about which there is so much debate that the definition has been fractured and categorized into more buckets of meaning than I can even count off the top of my head. There is literary irony, dramatic irony, verbal irony and probably more. There are various categories of era-related irony, such as Classical (Greek) irony, Romantic irony, and, most recently, whatever hipsters are and whatever they do. With all of these different kinds of ironies, the only thing that the world can seem to agree on is that things in the Alanis Morissette song about “ray-e-ay-ain on your wedding day” are not actually ironic.

The problem for poor Alanis, now the object of absurd degrees of international nitpicking derision, is that there is no ultimate reversal of expectation in all of the various ‘ironic’ things that happen in her song. Things are generally considered to be ironic when there is a gap between stated expectations or purpose and outcome. When it rains on your wedding day, that just sucks — it’s not ironic. It rains a good number of days of the year, so no reasonable person would expect that it couldn’t rain on a given day. What would most likely be considered ironic is if you opted to have your wedding inside to avoid the possibility of getting wet, and a large supply line pipe burst in the floor above you during the wedding, drenching everyone in attendance.

Another pretty clear cut example of irony is the US Patent System as it exists today when compared with common perception as to the original and ongoing purpose of such an institution. There’s a rather fascinating and somewhat compelling argument that claims that the concept of intellectual property (and specifically patents) was instrumental in creating the Industrial Revolution. In other words, there was historically little motivation for serf and merchant classes to innovate and optimize their work since the upper classes with the means of production would simply have stolen the ideas and leveraged better economies of scale and resources to reap the benefits for themselves. But along came patents and the “democratization of invention” to put a stop to all that and to enable the Horatio Algers (or perhaps Thomas Edisons) of the world to have a good idea, march on down to the patent office, and make sure that they would be treated fairly when it came to reaping the material benefits of their own ideas.

On the other side of the coin, I’ve read arguments that offer refutations of this working hypothesis, and I’m not endorsing one side or the other, because it really doesn’t matter for my purposes here. Whether the “democratization of invention” was truly the catalyst for our modern technological age or not, the perception remains that the patent system exists to ensure that the little guy is protected and that barriers to entry are removed to create truly free markets that reward innovation. If you have the next great idea, you go find a lawyer to help you draft a patent and that’s how you make sure you’re protected from unfair treatment at the hands of evil corporate profiteers.

So where’s the irony? I’ll get to that in a minute, but first another brief digression. I want to talk now about the concept of a “defensive patent,” at least as I’ve experienced the concept. Many moons ago, I maintained a database application to manage intellectual property for a company that made manufacturing equipment. At this company, there was a fairly standard approach to patenting, which was “mention everything you’re working on to the Intellectual Property team who will see if perhaps there’s anything we can claim patents on — and we mean everything.” The next logical question was “what if it’s already obvious or unrelated to what we’re trying to do,” to which the response was “what part of everything wasn’t clear?” The reason for this was that the goal wasn’t to patent things so that the company could make sure that nobody took its ideas but rather to build up a war-chest of stockpiled patents. A patent on something not intended for use was perfectly fine because you could trade with a competitor that was trying to use a patent to extort you. Perhaps you could buy and sell these things like securities packages in a portfolio. And, to be perfectly honest, my company was pretty reputable and honest. They were just trying to avoid getting burned — don’t hate the player, hate the game. “Defensive” patents had nothing to do with protecting innovation and everything to do with leverage in an endless series of lawyer-enriching, negative-sum games played out in court.

As I said, that was some years ago, and in the time that’s elapsed since, this paradigm seems to have progressed to the logical conclusion that I pictured back then (or perhaps I just wasn’t aware of it as much back then). Patents had started as legal protection, evolved to become commodities and have now reached the point of being corporate currency, devoid of any intrinsic meaning or value. In the article that I cited, a major tech company (Nortel) went bankrupt and its competitors swooped in like buzzards to loot its corpse. For those of you who played the Diablo series of games, this reminds me of when a dead player would “pop” and everyone else in the game would scramble to pillage his equipment. Or perhaps a better metaphor would be that a nuclear power had fallen into civil war and revolution and neighboring countries quietly stepped in to spirit away its massive arms stockpile, each trying to grab up as much as possible for fear that their neighbors were doing the same and getting ready to use it against them.

Microsoft, Apple, and some other players stepped in to form a shell company and bid against Google for this cache of patents, and Google wound up losing all the marbles to this cartel. Now, fast forward a few years and the cartel has begun shelling Google. How does all of this work exactly? It works because of the evolution of the patent that I mentioned. The patents are protecting nothing because that isn’t what they do, and they have no value as commodities because they’re packaged up into patent “mutual funds” (arsenals) that only matter in large quantities. You don’t get patents in our world to protect something you did, and you don’t get them because they have some kind of natural value the way an ear of corn does — you get them for the sole purpose of amassing them as a means to an end. And, as with any currency, the entities that have the easiest time acquiring more are the ones that already have the most.

So, there is the fundamental irony of the patent system. It’s a system that we conceive of existing to protect the quirky genius in his or her workshop at home from some big, soulless corporation, but it’s a system that in practice makes it easier for the big, soulless corporation to smash the quirky geniuses like bugs or, at best, buy them out and use them as cannon fodder against competitors. The irony lies in the fact that a system we take to be protecting our most valuable asset — our ability to innovate — is actually killing it. The patent system erects massive barriers to entry, rewards unethical behavior, creates a huge drain on the economy and makes bureaucratic process and influence peddling table stakes for success at delivering technological products and services. This is why I had little reaction to a shell company suing Google in a looming patent Armageddon — it just seems like the inevitable outcome of this broken system.

I doubt you’ll find many people that would dispute the notion that our intellectual property system needs serious overhaul. If you google “patent troll” and flip over to news, you’ll find plenty of articles and op-eds in the last month or even the last week. The fact that abuse of the system is so rampant that there’s an endless news cycle about it tells you that there are serious problems. But I think many would prefer to solve these problems by modifying the system we have now until it works. I’m not one of them. I think we’d be better served to completely toss out the system we have now and start over, at least for tech patents (I can see a reasonable case for patents in the field of medicine, for instance). I don’t think it can be salvaged, and I think that I’d answer the question “are you crazy — wouldn’t that result in chaos and anarchy?” with the simple opinion, “it can’t possibly be worse than what we have now.”

In the end, I may be proved wrong, particularly since I doubt torching the tech IP system is what’s going to happen. I hope that I am and I hope that efforts to shut down the trolls and eliminate situations where only IP lawyers win are successful, but until I see it, I’ll remain very skeptical.

/end rant

Back to regularly scheduled techie posts next week. :)
