Stories about Software


Setting Up Spring MVC 3.0

Why Spring MVC?

It’s been a while since I’ve done a lot with Java. I’ve been writing an Android app and see and interact with just enough Java not to forget what it looks like, but for the last couple of years, I’ve mainly worked in .NET with C#. Today, I started on actual development of my home automation server in earnest (it will be added to GitHub shortly). One of the main design goals of this home automation effort is to support affordable solutions and, toward that end, I am designing it to run on bare-bones Linux machines, thus allowing old computers to be re-appropriated to run it.

This is the driving force in my choice of implementation tools. It needs to be runnable on Linux and Windows, and to have a small footprint. But, it also needs to support a true object oriented design paradigm and rich server side functionality. So, I will be dusting off my J2EE and using Spring MVC and Java for the server itself.

Setting up Spring MVC 3.0

I’ve been spoiled by developing principally in .NET over the last couple of years. In that world, any kind of project is usually a Visual Studio install and a plugin or NuGet package away. In the open source world of Spring and Java, it’s not quite as straightforward. My first step was, of course, a hello world app. I have plenty of Spring MVC/J2EE experience, but I was last developing with Spring when it was version 1.x, and we’re a few years removed and on 3.1, so I’m basically starting all over.

I already had Eclipse and Tomcat installed, and I set about finding an Eclipse plugin for creating a sample spring project or a tutorial on the same. I didn’t really find either. The most helpful thing I found, by far, was this blog post. If you take steps to satisfy the preconditions listed and follow the blog itself, you’ll be most of the way there.

I had to take two additional steps to get my new Spring “Hello World” project up and running. I had to get commons-logging.jar from the Spring framework that I had downloaded and put it into my little app’s WEB-INF/lib folder. I then had to do the same with jstl.jar from my Tomcat installation. Only after doing that was Hello World up and running.

Hopefully, this saves someone reading some time.



Command Pattern

Quick Information/Overview

Pattern Type: Behavioral
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy – Moderate

Up Front Definitions

  1. Invoker: This object services clients by exposing a method that takes a command as a parameter and invoking the command’s execute method
  2. Receiver: This is the object upon which commands are performed – its state is mutated by them

The Problem

Let’s say you get a request from management to write an internal tool. A lot of people throughout the organization deal with XML documents and nobody really likes dealing with them, so you’re tasked with writing an XML document builder. The user will be able to type in node names and pick where they go and whatnot. Let’s also assume (since this post is not about the mechanics of XML) that all XML documents consist of a root node called “Root” and only child nodes of root.

The first request that you get is the aforementioned adding of nodes. So, knowing that you’ll be getting more requests, your first design decision is to create a DocumentBuilder class and have the adding implemented there.
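
The original samples were C# against an XDocument; as a Java sketch, with a plain list of child-node names standing in for the document, that first cut might look something like this:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a plain list of child-node names stands in for the XML document,
// since every document here is just "Root" plus its children.
class DocumentBuilder {

    private final List<String> document = new ArrayList<>();

    public void addNode(String nodeName) {
        document.add(nodeName);
    }

    // Exposed for illustration, so callers can inspect the result.
    public List<String> getNodes() {
        return document;
    }
}
```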

So far, so good. Now, a request comes in that you need to be able to do undo and redo on your add operation. Well, that takes a little doing, but after 10 minutes or so, you’ve cranked out the following:
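
Sketched the same way in Java, with stacks of node-name strings driving undo and redo:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: a plain list of child-node names stands in for the XML document.
class DocumentBuilder {

    private final List<String> document = new ArrayList<>();
    private final Deque<String> undoStack = new ArrayDeque<>();
    private final Deque<String> redoStack = new ArrayDeque<>();

    public void addNode(String nodeName) {
        document.add(nodeName);
        undoStack.push(nodeName);
        redoStack.clear(); // a new operation starts a new "branch"
    }

    public void undo() {
        if (undoStack.isEmpty()) return;
        String nodeName = undoStack.pop();
        document.remove(nodeName);
        redoStack.push(nodeName);
    }

    public void redo() {
        if (redoStack.isEmpty()) return;
        String nodeName = redoStack.pop();
        document.add(nodeName);
        undoStack.push(nodeName);
    }

    public List<String> getNodes() {
        return document;
    }
}
```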

Not too shabby – things get popped from each stack and added to the other as you undo/redo, and the redo stack gets cleared when you start a new “branch”. So, you’re pretty proud of this implementation and you’re all geared up for the next set of requests. And, here it comes. Now, the builder must be able to print the current document to the console. Hmm… that gets weird, since printing to the console is not really representable by a string in the stacks. The first thing you think of doing is making string.empty represent a print operation, but that doesn’t seem very robust, so you tinker and modify until you have the following:

Yikes, that’s starting to smell a little. But, hey, you extracted a method for the print, and you’re keeping things clean. Besides, you’re fairly proud of your little tuple scheme for recording what kind of operation it was in addition to the node name. And, there’s really no time for 20/20 hindsight because management loves it. You need to implement something that lets you update a node’s name ASAP.

Oh, and by the way, they also want to be able to print the output to a file instead of the console. Oh, and by the by the way, you know what would be just terrific? If you could put something in to switch the position of two nodes in the file. They know it’s a lot to ask right now, but you’re a rock star and they know you can handle it.

So, you buy some Mountain Dew and pull an all nighter. You watch as the undo and redo case statements grow vertically and as your tuple grows horizontally. The tuple now has an op code and an element name like before, but it has a third argument that means the new name for update, and when the op-code is swap, the second and third arguments are the two nodes to swap. It’s ugly (so ugly I’m not even going to code it for the example), but it works.

And, it’s a success! Now, the feature requests really start piling up, and not only are stakeholders using your app, but other programmers have started using your API. There’s really no time to reflect on the design now – you have a ton of new functionality to implement. And, as you do it, the number of methods in your builder will grow as each new feature is added, the case statements in undo and redo will grow right along with them, and the logic for parsing your Swiss Army knife tuple is going to get more and more convoluted.

By the time this thing is feature complete, it’s going to take a 45 page developer document to figure out what on Earth is going on. Time to start putting out resumes and jump off this sinking ship.

So, What to Do?

Before discussing what to do, let’s first consider what went wrong. There are two main issues here that have contributed to the code rot. The first and most obvious is the decision to “wing it” with the Tuple solution that is, in effect, a poor man’s type. Instead of a poor man’s type, why not an actual type? The second issue is a bit more subtle, but equally important — violation of the open/closed principle.

To elaborate, consider the original builder that simply added nodes to the XDocument and the subsequent change to implement undo and redo of this operation. By itself, this was fine and cohesive. But, when the requirements started to come in about more operations, this was the time to go in a different design direction. This may not be immediately obvious, but a good question to ask during development is “what happens if I get more requests like this?” When the class had “AddNode”, “Undo” and “Redo”, and the request for “PrintDocument” came in, it was worth noting that you were cobbling onto an existing class. It also would have been reasonable to ask, “what if I’m asked to add more operations?”

Asking this question would have resulted in the up-front realization that each new operation would require another method to be added to the class, and another case statement to be added to two existing methods. This is not a good design — especially if you know more such requests are coming. Having an implementation where the procedure for accommodating new functionality is “tack another method onto class X” and/or “open method X and add more code” is a recipe for code rot.

So, let’s consider what we could have done when the request for document print functionality came in. Instead of this tuple thing, let’s create another implementation. What we’re going to do is forget about creating Tuples and forget about the stacks of string, and think in terms of a command object. Now, at the moment, we only have one command object, but we know that we’ve got a requirement that’s going to call for a second one, so let’s make it polymorphic. I’m going to introduce the following interface:
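
In the Java sketch (a plain list of node names standing in for the XDocument), the IDocumentCommand interface might look like this:

```java
import java.util.List;

// The command: it knows how to execute itself against the document
// and how to negate that execution.
interface IDocumentCommand {
    void setDocument(List<String> document); // the receiver, setter-injected
    void execute();
    void undoExecute();
}
```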

This is what will become the command in the command pattern. Notice that the interface defines two conceptual methods – execution and negation of the execution (which should look a lot like “do” and “undo”), and it’s also going to be given the document upon which to do its dirty work.

Now, let’s take a look at the add implementer:
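
A Java sketch of the add implementer (with the IDocumentCommand interface included so the snippet stands on its own):

```java
import java.util.List;

interface IDocumentCommand {
    void setDocument(List<String> document);
    void execute();
    void undoExecute();
}

class AddCommand implements IDocumentCommand {

    private final String nodeName; // seeded at construction time
    private List<String> document; // the receiver, injected via setter

    public AddCommand(String nodeName) {
        this.nodeName = nodeName;
    }

    @Override
    public void setDocument(List<String> document) {
        this.document = document;
    }

    @Override
    public void execute() {
        document.add(nodeName);
    }

    @Override
    public void undoExecute() {
        document.remove(nodeName);
    }
}
```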

Pretty straightforward (in fact a little too straightforward – in a real implementation, there should be some error checking about the state of the document). When created, this object is seeded with the name of the node that it’s supposed to create. The document is a setter dependency, and the two operations mutate the XDocument, which is our “receiver” in the command pattern, according to the pattern’s specification.

Let’s have a look at what our new Builder implementation now looks like before adding print document:
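
A Java sketch of that refactored builder (the interface and add command repeated for completeness):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

interface IDocumentCommand {
    void setDocument(List<String> document);
    void execute();
    void undoExecute();
}

class AddCommand implements IDocumentCommand {
    private final String nodeName;
    private List<String> document;

    public AddCommand(String nodeName) { this.nodeName = nodeName; }
    @Override public void setDocument(List<String> document) { this.document = document; }
    @Override public void execute() { document.add(nodeName); }
    @Override public void undoExecute() { document.remove(nodeName); }
}

class DocumentBuilder {

    private final List<String> document = new ArrayList<>();
    private final Deque<IDocumentCommand> undoStack = new ArrayDeque<>();
    private final Deque<IDocumentCommand> redoStack = new ArrayDeque<>();

    public void addNode(String nodeName) {
        IDocumentCommand command = new AddCommand(nodeName);
        command.setDocument(document);
        command.execute();
        undoStack.push(command);
        redoStack.clear();
    }

    public void undo() {
        if (undoStack.isEmpty()) return;
        IDocumentCommand command = undoStack.pop();
        command.undoExecute(); // the command negates itself
        redoStack.push(command);
    }

    public void redo() {
        if (redoStack.isEmpty()) return;
        IDocumentCommand command = redoStack.pop();
        command.execute(); // the command re-executes itself
        undoStack.push(command);
    }

    public List<String> getNodes() {
        return document;
    }
}
```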

Notice that the changes to this class are subtle but interesting. We now have stacks of commands rather than strings (or, later, tuples). Notice that undo and redo now delegate the business of executing the command to the command object, rather than figuring out what kind of operation it is and doing it themselves. This is critical to conforming to the open/closed principle, as we’ll see shortly.

Now that we’ve performed our refactoring, let’s add the print document functionality. This is now going to be accomplished by a new implementation of IDocumentCommand:
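
Sketched in Java (interface repeated so the snippet stands on its own):

```java
import java.util.List;

interface IDocumentCommand {
    void setDocument(List<String> document);
    void execute();
    void undoExecute();
}

class PrintCommand implements IDocumentCommand {

    private List<String> document;

    @Override
    public void setDocument(List<String> document) {
        this.document = document;
    }

    @Override
    public void execute() {
        System.out.println("Root");
        for (String node : document) {
            System.out.println("  " + node);
        }
    }

    @Override
    public void undoExecute() {
        // Printing mutates nothing, so negation is a no-op.
    }
}
```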

Also pretty simple. Let’s now take a look at how we implement this in our “invoker”, the DocumentBuilder:
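
Putting it together in the Java sketch, with the builder still exposing named methods (commands condensed for brevity):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

interface IDocumentCommand {
    void setDocument(List<String> document);
    void execute();
    void undoExecute();
}

class AddCommand implements IDocumentCommand {
    private final String nodeName;
    private List<String> document;

    public AddCommand(String nodeName) { this.nodeName = nodeName; }
    @Override public void setDocument(List<String> document) { this.document = document; }
    @Override public void execute() { document.add(nodeName); }
    @Override public void undoExecute() { document.remove(nodeName); }
}

class PrintCommand implements IDocumentCommand {
    private List<String> document;

    @Override public void setDocument(List<String> document) { this.document = document; }
    @Override public void execute() {
        System.out.println("Root");
        for (String node : document) System.out.println("  " + node);
    }
    @Override public void undoExecute() { } // nothing to negate
}

class DocumentBuilder {

    private final List<String> document = new ArrayList<>();
    private final Deque<IDocumentCommand> undoStack = new ArrayDeque<>();
    private final Deque<IDocumentCommand> redoStack = new ArrayDeque<>();

    public void addNode(String nodeName) {
        executeCommand(new AddCommand(nodeName));
    }

    public void printDocument() {
        executeCommand(new PrintCommand());
    }

    // Undo and redo are untouched by the new feature.
    public void undo() {
        if (undoStack.isEmpty()) return;
        IDocumentCommand command = undoStack.pop();
        command.undoExecute();
        redoStack.push(command);
    }

    public void redo() {
        if (redoStack.isEmpty()) return;
        IDocumentCommand command = redoStack.pop();
        command.execute();
        undoStack.push(command);
    }

    private void executeCommand(IDocumentCommand command) {
        command.setDocument(document);
        command.execute();
        undoStack.push(command);
        redoStack.clear();
    }

    public List<String> getNodes() { return document; }
}
```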

Lookin’ good! Observe that undo and redo do not change at all. Our invoker now creates a command for each operation and delegates the work to the command object, which operates on the receiver on behalf of the client code. As we continue to add more commands, we do not ever have to modify undo and redo.

But, we still don’t have it quite right. The fact that we need to add a new class and a new method each time a new command is added is still a violation of the open/closed principle, even though we’re better off than before. The whole point of what we’re doing here is separating the logic of command execution (and undo/redo and, perhaps later, indicating whether a command can currently be executed or not) from the particulars of the commands themselves. We’re mostly there, but not quite – the invoker, DocumentBuilder is still responsible for enumerating the different commands as methods and creating the actual command objects. The invoker is still too tightly coupled to the mechanics of the commands.

This is not hard to fix – pass the buck! Let’s look at an implementation where the invoker, instead of creating commands in named methods, just demands the commands:
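
In the Java sketch, the pass-the-buck version might look like this (interface and add command repeated so the snippet stands on its own):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

interface IDocumentCommand {
    void setDocument(List<String> document);
    void execute();
    void undoExecute();
}

class AddCommand implements IDocumentCommand {
    private final String nodeName;
    private List<String> document;

    public AddCommand(String nodeName) { this.nodeName = nodeName; }
    @Override public void setDocument(List<String> document) { this.document = document; }
    @Override public void execute() { document.add(nodeName); }
    @Override public void undoExecute() { document.remove(nodeName); }
}

// The invoker no longer knows which commands exist -- clients hand them in.
class DocumentBuilder {

    private final List<String> document = new ArrayList<>();
    private final Deque<IDocumentCommand> undoStack = new ArrayDeque<>();
    private final Deque<IDocumentCommand> redoStack = new ArrayDeque<>();

    public void execute(IDocumentCommand command) {
        command.setDocument(document);
        command.execute();
        undoStack.push(command);
        redoStack.clear();
    }

    public void undo() {
        if (undoStack.isEmpty()) return;
        IDocumentCommand command = undoStack.pop();
        command.undoExecute();
        redoStack.push(command);
    }

    public void redo() {
        if (redoStack.isEmpty()) return;
        IDocumentCommand command = redoStack.pop();
        command.execute();
        undoStack.push(command);
    }

    public List<String> getNodes() { return document; }
}
```

Client code now reads builder.execute(new AddCommand("Verse")) rather than builder.addNode("Verse").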

And, there we go. Observe that now, when new commands are to be added, all a maintenance programmer has to do is author a new class. That’s a much better paradigm. Any bugs related to the mechanics of do/undo/redo are completely separate from the commands themselves.

Some might argue that the new invoker/DocumentBuilder lacks expressiveness in its API (having Execute(IDocumentCommand) instead of AddNode(string) and PrintDocument()), but I disagree:

Execute(AddCommand(nodeName)) seems just as expressive to me as AddNode(nodeName), if slightly more verbose. But even if it’s not, the tradeoff is worth it, in my book. You now have the ability to plug new commands in anytime by implementing the interface, and DocumentBuilder conforms to the open/closed principle — it’s only going to change if a bug is found in the do/undo/redo logic, not when you add new functionality (incidentally, having only one reason to change also makes it conform to the single responsibility principle).

A More Official Explanation

dofactory defines the command pattern this way:

Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.

The central, defining point of this pattern is the idea that a request or action should be an object. This is an important and not necessarily intuitive realization. The natural tendency would be to implement the kind of ad-hoc logic from the initial implementation, since we tend to think of objects as things like “Car” and “House” rather than concepts like “Add a node to a document”.

But, this different thinking leads to the other part of the description – the ability to parameterize clients with different requests. What this means is that since the commands are stored as objects with state, they can encapsulate their own undo and redo, rather than forcing the invoker to do it. The parameterizing is the idea that the invoker operates on passed in command objects rather than doing specific things in response to named methods.

What is gained here is then the ability to put commands into a stack, queue, list, etc, and operate on them without specifically knowing what it is they do. That is a powerful ability since separating and decoupling responsibilities is often hard to accomplish when designing software.

Other Quick Examples

Here are some other places that the command pattern is used:

  1. The ICommand interface in C#/WPF for button click and other GUI events.
  2. Undo/redo operations in GUI applications (i.e. Ctrl-Z, Ctrl-Y).
  3. Implementing transactional logic for persistence (thus providing atomicity for rolling back)

A Good Fit – When to Use

Look to use the command pattern when there is a common set of “meta-operations” surrounding commands. That is, if you find yourself with requirements along the lines of “revert the last action, whatever that action may have been.” This is an indicator that there are going to be operations on the commands themselves beyond simple execution. In scenarios like this, it makes sense to have polymorphic command objects that have some notion of state.

Square Peg, Round Hole – When Not to Use

As always, YAGNI applies. For example, if our document builder were only ever going to be responsible for adding nodes, this pattern would have been overkill. Don’t go implementing the command pattern on any and all actions that your program may take — the pattern incurs complexity overhead in the form of multiple objects and a group of polymorphs.

So What? Why is this Better?

As we’ve seen above, this makes code a lot cleaner in situations where it’s relevant and it makes it conform to best practices (SOLID principles). I believe that you’ll also find that, if you get comfortable with this pattern, you’ll be more inclined to offer richer functionality to clients on actions that they may take.

That is, implementing undo/redo or atomicity may be something you’d resist or avoid as it would entail a great deal of complexity, but once you see that this need not be the case, you might be more willing or even proactive about it.

In sum, using this pattern where appropriate is better because it provides for cleaner code, fewer maintenance headaches, and more clarity.


Ubuntu and Belkin Dongles Revisited

Previously, I posted about my odyssey to get Belkin wireless dongles working with Ubuntu. Actually, the previous post was tame compared to what I’ve hacked together with these things over the years, including getting them to work on Damn Small Linux, where I had to ferret out the text for the entire wpa_supplicant configuration using kernel messages from dmesg. But, I digress.

I’m in the middle of creating an ad-hoc “music throughout the house” setup for my home automation, and this involves a client computer in most rooms in the house. Over the years, I’ve accepted donations of computers that range in manufacture date from 1995 to 2008, and these are perfect for my task. Reappropriated and freed from their Windows whatever, they run ably if not spectacularly with Xubuntu (and, in some cases, DSL or Slackware when that’s too much for a machine that maxes out at 64 megs of RAM).

So, I have this setup in most rooms, and I just remodeled my basement, which was the last room to get the setup. I had one of these things working with the dongle and everything, but the sound card was this HP Pavilion special that was integrated with a fax card or something, and the sound just wasn’t happening. So, after sort of borking it while trying to configure, I scrapped the effort and reappropriated an old Dell.

Each time I do this, I grab the latest and greatest Ubuntu, and this time was no different. Each time, I check to see if maybe, just maybe, I won’t have to pull the Belkin drivers off of the CD and use ndiswrapper, and lo and behold, this was the breaking point – I finally didn’t.

I wish I could say it worked out of the box, but alas, not quite. I plugged in the dongle and NetworkManager popped up, and sure enough it was detecting wireless networks, but when I put in all of my credentials, it just kept prompting me for a password. I remembered that NetworkManager had difficulty with these cards and the WPA-PSK security protocol, so I tried another network manager: wicd. Bam! Up and running.

So, for those keeping score at home, if you have Ubuntu 11.10 (Oneiric Ocelot) and a Belkin dongle, all you need to do is:

sudo apt-get install --reinstall wicd
sudo service network-manager stop
sudo apt-get remove --purge network-manager network-manager-gnome
sudo service wicd restart

And, that’s it. You should be twittering and facebooking and whatever else in no time.


Since making this post, I set up another machine in this fashion, and realized that I made an important omission. The wicd wireless setup did not just work out of the box with WPA2. I had to modify my /etc/network/interfaces file to look like this:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
    address {my local IP}
    wpa-driver wext
    wpa-ssid {my network SSID}
    wpa-ap-scan 2
    wpa-proto WPA RSN
    wpa-pairwise TKIP CCMP
    wpa-group TKIP CCMP
    wpa-key-mgmt WPA-PSK
    wpa-psk {my encrypted key}

For my network, I use static IPs and this setup was necessary to get that going as well as the encryption protocol. Without this step, the setup I mentioned above does not work out of the box — wicd continuously fails with a “bad password” message. Adding this in fixed it.



Poor Man’s Automoq in .NET 4

So, the other day I mentioned to a coworker that I was working on a tool that would wrap Moq and provide expressive test double names. I then mentioned the tool AutoMoq, and he showed me something that he was doing. It’s very simple:
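
The original snippet was C# with Moq; here is a rough Java analogue of the same idea (the interfaces and class names are invented for illustration). Overloads play the role of .NET 4’s optional parameters, and a do-nothing dynamic proxy plays the role of a simple Moq mock:

```java
import java.lang.reflect.Proxy;

// Hypothetical collaborators, just for illustration.
interface ILogger { void log(String message); }
interface IDatabase { String fetch(String key); }

// The hypothetical class under test, taking its dependencies via constructor.
class OrderProcessor {
    final ILogger logger;
    final IDatabase database;

    OrderProcessor(ILogger logger, IDatabase database) {
        this.logger = logger;
        this.database = database;
    }
}

class TestHelper {

    // Stands in for a Moq mock: a do-nothing dynamic proxy for any interface.
    @SuppressWarnings("unchecked")
    static <T> T simpleMock(Class<T> type) {
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(),
                new Class<?>[] { type },
                (proxy, method, args) -> null);
    }

    // Java lacks optional parameters, so overloads play that role; any
    // dependency the test doesn't supply becomes a simple mock.
    static OrderProcessor buildTarget() {
        return buildTarget(null, null);
    }

    static OrderProcessor buildTarget(ILogger logger, IDatabase database) {
        return new OrderProcessor(
                logger != null ? logger : simpleMock(ILogger.class),
                database != null ? database : simpleMock(IDatabase.class));
    }
}
```

When the constructor grows a new parameter, only buildTarget changes — not every test.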

The concept here is very simple. If you’re using a dependency injection scheme (manual or IoC container), your classes may evolve to need additional dependencies, which sucks if you’re declaring them inline in every method. This means that you need to engage in a flurry of find/replace, which is a deterrent from changing your constructor signature, even when that’s the best way to go.

This solution, while not perfect, definitely eliminates that in a lot of cases. The BuildTarget() method takes an optional parameter (hence the requirement for .NET 4) for each constructor parameter, and if said parameter is not supplied, it creates a simple mock.

There are some obvious shortcomings — if you remove a reference from your constructor, you still have to go through all of your tests and remove the extra setup code for that dependency, for instance, but this is still much better than the old fashioned way.

I’ve now adopted this practice where suitable and am finding that I like it.

(ExtendedAssert is a utility that I wrote to address what I consider shortcomings in MSTest).


Getting Started with Home Automation

I’m going to be doing a series of posts on home automation, starting out targeting beginner concepts and getting more in depth from there. My hope is that when these are complete, someone with some technical and home improvement acumen can read back through the series as an instruction manual of sorts.

What Is Home Automation?

Home automation is somewhat hard to define. Out of curiosity, I poked around and found as many different definitions as places offering definitions. The definition I most liked came from eHow:

Home automation [allows] individuals to automatically control appliances and security systems within their home through the use [of] technology.

Other sites talked specifically about the use of computers and various products, but this one is nice and general. To my way of thinking, home automation is the use of any technology that helps automate tasks in the home. This may include turning on lights, starting appliances, opening blinds, etc. So, anything from the “home of the future” to The Clapper can be considered home automation.

A Brief History

The concept of home automation has been around for a long time. In the early 1900s, the “house of the future” was the stuff of speculation at world fairs and in the studios of inventors. No doubt many interesting concepts came out of that, but nothing particularly interesting for our purposes here (though one might pedantically argue that appliances such as dishwashers or devices like thermostats are a form of home automation). As the 1900s wore on, remotely controllable devices, such as televisions, emerged, providing an early glimpse of the concept in action.

In 1975, a Scottish company called Pico Electronics developed the X10 protocol. This was a way to use the existing electrical wiring within a house for communication between a sender and a receiver. This protocol was used to transmit simple messages across the wire. A controller could send an “On” message, and a device elsewhere in the house would receive this message and execute some appropriate action. For example, a “lamp module” plugged into a wall outlet and with a lamp plugged into it could turn on the lamp at the request of a signal sent from another room.

Over the course of time, the uses of X10 technology expanded from simple on and off to signals allowing control over home security, heating, ventilation and air conditioning (HVAC), and other home technologies. X10 is reliable and established, but it does have some limits, and those limits have become more obvious lately as the number of devices using house power has skyrocketed. Devices, especially modern ones, tend to produce “noise” on the electrical lines, and the more devices we plug in, the more noise is generated.

A number of other protocols and technologies have emerged as a result of this, including Insteon, Z-Wave, Lutron and more. And, there is still X10 itself, which is a little confusing, as X10 is both the name of a protocol and the name of an organization that sells devices that implement the protocol. The newcomers tend either to use a different protocol over the electrical system (some being “backward” compatible with X10 and others not) or else to use wireless (RF) communication. Often these are more effective than the original X10, but also pricier.

For a time in the ’80s and ’90s, big box stores like Radio Shack and Home Depot carried X10 products, but that seems not to be the case anymore. Some of them now carry the higher end competitors such as Lutron and Insteon. But, if you are interested in purchasing any of these devices, you can find them in many places for ordering online, including eBay.

The fact that you don’t find these items for sale in big box stores does not mean that the home automation trend has cooled off, per se. As society expects more and more things to be automated, the home is no exception. The reason that these items are not carried so much anymore, in my opinion, is that the average consumer is not a combination of electrical engineer and carpenter. People want devices that they can plug in and have “just work” with a minimum of configuration. So, people hire contractors to wire these sorts of things up for them, rather than simply buying them at the local hardware store.

Our First Crack at Home Automation

So, for anyone still reading, sold, and ready to jump in, I will introduce a first home automation project that you can execute as an absolute beginner. You’re going to buy two items, and it’s going to cost roughly $25 to $30, depending on where you order. One item is a keychain remote, and the other is an X10 “lamp module” with a wireless transceiver. They are pictured together here:

(You can buy this setup on Amazon for $30 at the time of writing, though a quick Google search showed prices as low as $16, which may omit a shipping charge.)

When you get the devices in the mail, take out the lamp module and observe that it has a red dial on it. The red dial corresponds to the “house” code, one of 16 letters. All X10 devices have a house code and a unit code, and these together form the “address” of the device. The house code, as mentioned, is one of 16 letters, and the unit code is one of 16 numbers. This means that X10 addresses are A1, D12, J4, etc. Your lamp module can be set to any house code, but it only has unit codes “1” and “9”. These are the unit codes that it would respond to if you were sending commands over the electrical wiring, but it will respond to any unit code sent wirelessly, which is how your remote will work.
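
For the programmers in the audience, the addressing scheme fits in a few lines. This is a hypothetical helper for illustration only, not part of any X10 library:

```java
// Hypothetical helper illustrating X10 addressing:
// house codes run A through P, unit codes run 1 through 16.
class X10Address {

    static boolean isValid(String address) {
        if (address == null || address.length() < 2 || address.length() > 3) {
            return false;
        }
        char houseCode = address.charAt(0);
        if (houseCode < 'A' || houseCode > 'P') {
            return false;
        }
        try {
            int unitCode = Integer.parseInt(address.substring(1));
            return unitCode >= 1 && unitCode <= 16;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```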

If you now take out your remote and its instruction manual, you will see that you can set it to send signals to any house code and unit code. The unit code is essentially irrelevant here for your purposes. You just want to make sure the house code matches the lamp module’s. At this point in your home automation ventures, these are the only devices you have, so just leave them both at house code A.

Now, plug a lamp into the lamp module and the module into the wall. You should now be able to turn your light on and off using the keychain. The range on this should be comparable to that of your home wifi, so you could turn the lights on in your house from your car in the driveway or garage, which is handy.

So, to recap, you can basically just unwrap the devices and set them up without messing with the unit or house codes at all, since your lack of other home automation stuff means you don’t need to worry about compatibility. You now have home automation going for $25 or $30. If you’re interested in doing more, no worries – I’ll have plenty more segments on this.

More Info

For now, I’ll leave off with a series of links that I’ve found over the course of time that will hopefully be helpful, but not too overwhelming.

And, no worries if I haven’t covered all bases – I’ll have plenty more posts.