DaedTech

Stories about Software

Setting Up Spring MVC 3.0

Why Spring MVC?

It’s been a while since I’ve done a lot with Java. I’ve been writing an Android app, which has me seeing and interacting with just enough Java not to forget what it looks like, but for the last couple of years, I’ve mainly worked in .NET with C#. Today, I started on actual development of my home automation server in earnest (it will be added to github shortly). One of the main design goals of this home automation effort is to support affordable solutions and, toward that end, I am designing it to run on bare bones Linux machines, thus allowing old computers to be re-appropriated to run it.

This is the driving force in my choice of implementation tools. It needs to be runnable on Linux and Windows, and to have a small footprint. But, it also needs to support a true object oriented design paradigm and rich server side functionality. So, I will be dusting off my J2EE and using Spring MVC and Java for the server itself.

Setting up Spring MVC 3.0

I’ve been spoiled by developing principally in .NET over the last couple of years. In that world, any kind of project is usually a Visual Studio install and a plugin or NuGet package away. In the open source world of Spring and Java, it’s not quite as straightforward. My first step was, of course, a hello world app. I have plenty of Spring MVC/J2EE experience, but I was last developing with Spring when it was version 1.x, and we’re a few years removed and on 3.1, so I’m basically starting all over.

I already had Eclipse and Tomcat installed, and I set about finding an Eclipse plugin for creating a sample spring project or a tutorial on the same. I didn’t really find either. The most helpful thing I found, by far, was this blog post. If you take steps to satisfy the preconditions listed and follow the blog itself, you’ll be most of the way there.

I had to take two additional steps to get my new Spring “Hello World” project up and running. I had to get commons-logging.jar from the Spring framework distribution that I had downloaded and put it into my little app’s WEB-INF/lib folder. I then had to do the same with jstl.jar from my Tomcat installation. Only after doing that was Hello World up and running.

Hopefully, this saves someone reading some time.

Command

Quick Information/Overview

Pattern Type: Behavioral
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy – Moderate

Up Front Definitions

  1. Invoker: This object services clients by exposing a method that takes a command as a parameter and invokes the command’s execute method
  2. Receiver: This is the object upon which commands are performed – its state is mutated by them

The Problem

Let’s say you get a request from management to write an internal tool. A lot of people throughout the organization deal with XML documents and nobody really likes dealing with them, so you’re tasked with writing an XML document builder. The user will be able to type in node names and pick where they go and whatnot. Let’s also assume (since this post is not about the mechanics of XML) that all XML documents consist of a root node called “Root” and only child nodes of root.

The first request that you get is the aforementioned adding of nodes. So, knowing that you’ll be getting more requests, your first design decision is to create a DocumentBuilder class and implement the adding there.

/// <summary>This class is responsible for doing document build operations</summary>
public class DocumentBuilder
{
    /// <summary>This is the document that we're going to be modifying</summary>
    private readonly XDocument _document;

    /// <summary>Initializes a new instance of the DocumentBuilder class.</summary>
    /// <param name="document"></param>
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// <summary>Add a node to the document</summary>
    /// <param name="elementName"></param>
    public void AddNode(string elementName)
    {
        _document.Root.Add(new XElement(elementName));
    }
}

//Client code:
var myInvoker = new DocumentBuilder();
myInvoker.AddNode("Hoowa!");

So far, so good. Now, a request comes in that you need to be able to do undo and redo on your add operation. Well, that takes a little doing, but after 10 minutes or so, you’ve cranked out the following:

public class DocumentBuilder
{
    /// <summary>This is the document that we're going to be modifying</summary>
    private readonly XDocument _document;

    /// <summary>Store things here for undo</summary>
    private readonly Stack<string> _undoItems = new Stack<string>();

    /// <summary>Store things here for redo</summary>
    private readonly Stack<string> _redoItems = new Stack<string>();

    /// <summary>Initializes a new instance of the DocumentBuilder class.</summary>
    /// <param name="document"></param>
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// <summary>Add a node to the document</summary>
    /// <param name="elementName"></param>
    public void AddNode(string elementName)
    {
        _document.Root.Add(new XElement(elementName));
        _undoItems.Push(elementName);
        _redoItems.Clear();
    }

    /// <summary>Undo the number of operations given by steps</summary>
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myName = _undoItems.Pop();
            _document.Root.Elements(myName).Remove();
            _redoItems.Push(myName);
        }
    }

    /// <summary>Redo the number of operations given by steps</summary>
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myName = _redoItems.Pop();
            _document.Root.Add(new XElement(myName));
            _undoItems.Push(myName);
        }
    }
}

Not too shabby – things get popped from each stack and added to the other as you undo/redo, and the redo stack gets cleared when you start a new “branch”. So, you’re pretty proud of this implementation and you’re all geared up for the next set of requests. And, here it comes. Now, the builder must be able to print the current document to the console. Hmm… that gets weird, since printing to the console is not really representable by a string in the stacks. The first thing you think of doing is making string.empty represent a print operation, but that doesn’t seem very robust, so you tinker and modify until you have the following:

public class DocumentBuilder
{
    /// <summary>This is the document that we're going to be modifying</summary>
    private readonly XDocument _document;

    /// <summary>This defines what type of operation that we're doing</summary>
    private enum OperationType
    {
        Add,
        Print
    }

    /// <summary>Store things here for undo</summary>
    private readonly Stack<Tuple<OperationType, string>> _undoItems = new Stack<Tuple<OperationType, string>>();

    /// <summary>Store things here for redo</summary>
    private readonly Stack<Tuple<OperationType, string>> _redoItems = new Stack<Tuple<OperationType, string>>();

    /// <summary>Initializes a new instance of the DocumentBuilder class.</summary>
    /// <param name="document"></param>
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// <summary>Add a node to the document</summary>
    /// <param name="elementName"></param>
    public void AddNode(string elementName)
    {
        _document.Root.Add(new XElement(elementName));
        _undoItems.Push(new Tuple<OperationType, string>(OperationType.Add, elementName));
        _redoItems.Clear();
    }

    /// <summary>Print out the document</summary>
    public void PrintDocument()
    {
        Print();

        _redoItems.Clear();
        _undoItems.Push(new Tuple<OperationType, string>(OperationType.Print, string.Empty));
    }

    /// <summary>Undo the number of operations given by steps</summary>
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myOperation = _undoItems.Pop();
            switch (myOperation.Item1)
            {
                case OperationType.Add:
                    _document.Root.Elements(myOperation.Item2).Remove();
                    _redoItems.Push(myOperation);
                    break;
                case OperationType.Print:
                    Console.Out.WriteLine("Sorry, but I really can't undo a print to screen.");
                    _redoItems.Push(myOperation);
                    break;
            }
        }
    }

    /// <summary>Redo the number of operations given by steps</summary>
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myOperation = _redoItems.Pop();
            switch (myOperation.Item1)
            {
                case OperationType.Add:
                    _document.Root.Add(new XElement(myOperation.Item2));
                    _undoItems.Push(myOperation);
                    break;
                case OperationType.Print:
                    Print();
                    _undoItems.Push(myOperation);
                    break;
            }
        }
    }

    private void Print()
    {
        var myBuilder = new StringBuilder();
        Console.Out.WriteLine("\nDocument contents:\n");
        using (var myWriter = XmlWriter.Create(myBuilder, new XmlWriterSettings() { Indent = true, IndentChars = "\t" }))
        {
            _document.WriteTo(myWriter);
        }
        Console.WriteLine(myBuilder.ToString());
    }
}

Yikes, that’s starting to smell a little. But, hey, you extracted a method for the print, and you’re keeping things clean. Besides, you’re fairly proud of your little tuple scheme for recording what kind of operation it was in addition to the node name. And, there’s really no time for 20/20 hindsight because management loves it. You need to implement something that lets you update a node’s name ASAP.

Oh, and by the way, they also want to be able to print the output to a file instead of the console. Oh, and by the by the way, you know what would be just terrific? If you could put something in to switch the position of two nodes in the file. They know it’s a lot to ask right now, but you’re a rock star and they know you can handle it.

So, you buy some Mountain Dew and pull an all nighter. You watch as the undo and redo case statements grow vertically and as your tuple grows horizontally. The tuple now has an op code and an element name like before, but it also has a third argument that holds the new name for an update and, when the op code is swap, the second and third arguments are the two nodes to swap. It’s ugly (so ugly I’m not even going to code it for the example), but it works.

And, it’s a success! Now, the feature requests really start piling up, and not only are stakeholders using your app, but other programmers have started using your API. There’s really no time to reflect on the design now – you have a ton of new functionality to implement. And, as you do it, the number of methods in your builder will grow as each new feature is added, the size of the case statements in undo and redo will grow as each new feature is added, and the logic for parsing your swiss-army knife tuple is going to get more and more convoluted.

By the time this thing is feature complete, it’s going to take a 45 page developer document to figure out what on Earth is going on. Time to start putting out resumes and jump off this sinking ship.

So, What to Do?

Before discussing what to do, let’s first consider what went wrong. There are two main issues here that have contributed to the code rot. The first and most obvious is the decision to “wing it” with the Tuple solution that is, in effect, a poor man’s type. Instead of a poor man’s type, why not an actual type? The second issue is a bit more subtle, but equally important — violation of the open/closed principle.

To elaborate, consider the original builder that simply added nodes to the XDocument and the subsequent change to implement undo and redo of this operation. By itself, this was fine and cohesive. But, when the requirements started to come in about more operations, this was the time to go in a different design direction. This may not be immediately obvious, but a good question to ask during development is “what happens if I get more requests like this?” When the class had “AddNode”, “Undo” and “Redo”, and the request for “PrintDocument” came in, it was worth noting that you were cobbling onto an existing class. It also would have been reasonable to ask, “what if I’m asked to add more operations?”

Asking this question would have resulted in the up-front realization that each new operation would require another method to be added to the class, and another case statement to be added to two existing methods. This is not a good design — especially if you know more such requests are coming. Having an implementation where the procedure for accommodating new functionality is “tack another method onto class X” and/or “open method X and add more code” is a recipe for code rot.

So, let’s consider what we could have done when the request for document print functionality came in. Instead of this tuple thing, let’s create another implementation. What we’re going to do is forget about creating Tuples and forget about the stacks of strings, and think in terms of command objects. Now, at the moment, we only have one command, but we know that we’ve got a requirement that’s going to call for a second one, so let’s make it polymorphic. I’m going to introduce the following interface:

public interface IDocumentCommand
{
    /// <summary>Document (receiver) upon which to operate</summary>
    XDocument Document { get; set; }

    /// <summary>Execute the command</summary>
    void Execute();
        
    /// <summary>Revert the execution of the command</summary>
    void UndoExecute();

}

This is what will become the command in the command pattern. Notice that the interface defines two conceptual methods – execution and negation of the execution (which should look a lot like “do” and “undo”), and it’s also going to be given the document upon which to do its dirty work.

Now, let’s take a look at the add implementer:

public class AddNodeCommand : IDocumentCommand
{
    private readonly string _nodeName;

    private XDocument _document = new XDocument();
    public XDocument Document { get { return _document; } set { _document = value ?? _document; } }

    /// <summary>Initializes a new instance of the AddNodeCommand class.</summary>
    /// <remarks>Note the extra parameter here -- this is important.  This class is essentially, conceptually,
    /// an action, so you're more used to seeing this kind of thing in method form.  We pass the "method" parameters
    /// to the constructor because we're encapsulating an action as an object with state</remarks>
    public AddNodeCommand(string nodeName)
    {
        _nodeName = nodeName ?? string.Empty;
    }
    public void Execute()
    {
        Document.Root.Add(new XElement(_nodeName));
    }

    public void UndoExecute()
    {
        Document.Root.Elements(_nodeName).Remove();
    }
}

Pretty straightforward (in fact a little too straightforward – in a real implementation, there should be some error checking about the state of the document). When created, this object is seeded with the name of the node that it’s supposed to create. The document is a setter dependency, and the two operations mutate the XDocument, which is our “receiver” in the command pattern, according to the pattern’s specification.
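
For illustration, the guard code might look something like the following (this check is my addition, not part of the original class), with Execute and UndoExecute refusing to operate on a document that has no root:

public void Execute()
{
    // Hypothetical guard: don't try to add to a document with no root
    if (Document.Root == null)
    {
        throw new InvalidOperationException("Cannot add a node to a document with no root.");
    }
    Document.Root.Add(new XElement(_nodeName));
}

public void UndoExecute()
{
    // Hypothetical guard, mirroring Execute
    if (Document.Root == null)
    {
        throw new InvalidOperationException("Cannot remove a node from a document with no root.");
    }
    Document.Root.Elements(_nodeName).Remove();
}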

Let’s have a look at what our new Builder implementation now looks like before adding print document:

public class DocumentBuilder
{
    /// <summary>This is the document that we're going to be modifying</summary>
    private readonly XDocument _document;

    /// <summary>Store things here for undo</summary>
    private readonly Stack<IDocumentCommand> _undoItems = new Stack<IDocumentCommand>();

    /// <summary>Store things here for redo</summary>
    private readonly Stack<IDocumentCommand> _redoItems = new Stack<IDocumentCommand>();

    /// <summary>Initializes a new instance of the DocumentBuilder class.</summary>
    /// <param name="document"></param>
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// <summary>Add a node to the document</summary>
    /// <param name="elementName"></param>
    public void AddNode(string elementName)
    {
        var myCommand = new AddNodeCommand(elementName)  { Document = _document };
        myCommand.Execute();
        _undoItems.Push(myCommand);
        _redoItems.Clear();
    }

    /// <summary>Undo the number of operations given by steps</summary>
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _undoItems.Pop();
            myCommand.UndoExecute();
            _redoItems.Push(myCommand);
        }
    }

    /// <summary>Redo the number of operations given by steps</summary>
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _redoItems.Pop();
            myCommand.Execute();
            _undoItems.Push(myCommand);
        }
    }
}

Notice that the changes to this class are subtle but interesting. We now have stacks of commands rather than strings (or, later, tuples). Notice that undo and redo now delegate the business of executing the command to the command object, rather than figuring out what kind of operation it is and doing it themselves. This is critical to conforming to the open/closed principle, as we’ll see shortly.

Now that we’ve performed our refactoring, let’s add the print document functionality. This is now going to be accomplished by a new implementation of IDocumentCommand:

public class PrintCommand : IDocumentCommand
{
    private XDocument _document = new XDocument();
    public XDocument Document { get { return _document; } set { _document = value ?? _document; } }

    /// <summary>Execute the print command</summary>
    public void Execute()
    {
        var myBuilder = new StringBuilder();
        Console.Out.WriteLine("\nDocument contents:\n");
        using (var myWriter = XmlWriter.Create(myBuilder, new XmlWriterSettings() { Indent = true, IndentChars = "\t" }))
        {
            Document.WriteTo(myWriter);
        }
        Console.WriteLine(myBuilder.ToString());
    }

    /// <summary>Undo the print command (which, you can't)</summary>
    public void UndoExecute()
    {
        Console.WriteLine("\nDude, you can't un-ring that bell.\n");
    }
}

Also pretty simple. Let’s now take a look at how we implement this in our “invoker”, the DocumentBuilder:

public class DocumentBuilder
{
    /// <summary>This is the document that we're going to be modifying</summary>
    private readonly XDocument _document;

    /// <summary>Store things here for undo</summary>
    private readonly Stack<IDocumentCommand> _undoItems = new Stack<IDocumentCommand>();

    /// <summary>Store things here for redo</summary>
    private readonly Stack<IDocumentCommand> _redoItems = new Stack<IDocumentCommand>();

    /// <summary>Initializes a new instance of the DocumentBuilder class.</summary>
    /// <param name="document"></param>
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// <summary>Add a node to the document</summary>
    /// <param name="elementName"></param>
    public void AddNode(string elementName)
    {
        var myCommand = new AddNodeCommand(elementName) { Document = _document };
        myCommand.Execute();
        _undoItems.Push(myCommand);
        _redoItems.Clear();
    }

    /// <summary>Print the document</summary>
    public void PrintDocument()
    {
        var myCommand = new PrintCommand() { Document = _document};
        myCommand.Execute();
        _undoItems.Push(myCommand);
        _redoItems.Clear();
    }

    /// <summary>Undo the number of operations given by steps</summary>
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _undoItems.Pop();
            myCommand.UndoExecute();
            _redoItems.Push(myCommand);
        }
    }

    /// <summary>Redo the number of operations given by steps</summary>
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _redoItems.Pop();
            myCommand.Execute();
            _undoItems.Push(myCommand);
        }
    }
}

Lookin’ good! Observe that undo and redo do not change at all. Our invoker now creates a command for each operation and delegates the work to that command, which operates on the receiver on behalf of the client code. As we continue to add more commands, we do not ever have to modify undo and redo.

But, we still don’t have it quite right. The fact that we need to add a new class and a new method each time a new command is added is still a violation of the open/closed principle, even though we’re better off than before. The whole point of what we’re doing here is separating the logic of command execution (and undo/redo and, perhaps later, indicating whether a command can currently be executed or not) from the particulars of the commands themselves. We’re mostly there, but not quite – the invoker, DocumentBuilder, is still responsible for enumerating the different commands as methods and for creating the actual command objects. The invoker is still too tightly coupled to the mechanics of the commands.

This is not hard to fix – pass the buck! Let’s look at an implementation where the invoker, instead of creating commands in named methods, just demands the commands:

public class DocumentBuilder
{
    /// <summary>This is the document that the user will be dealing with</summary>
    private readonly XDocument _document;

    /// <summary>This houses commands for undo</summary>
    private readonly Stack<IDocumentCommand> _undoCommands = new Stack<IDocumentCommand>();

    /// <summary>This houses commands for redo</summary>
    private readonly Stack<IDocumentCommand> _redoCommands = new Stack<IDocumentCommand>();

    /// <summary>User can give us an xdocument or we can create our own</summary>
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// <summary>Executes the given command</summary>
    /// <param name="command"></param>
    public void Execute(IDocumentCommand command)
    {
        if (command == null) throw new ArgumentNullException("command", "nope");
        command.Document = _document;
        command.Execute();
        _redoCommands.Clear();
        _undoCommands.Push(command);
    }

    /// <summary>Perform the number of undos given by iterations</summary>
    public void Undo(int iterations)
    {
        for (int index = 0; index < iterations; index++)
        {
            if (_undoCommands.Count > 0)
            {
                var myCommand = _undoCommands.Pop();
                myCommand.UndoExecute();
                _redoCommands.Push(myCommand);
            }
        }
    }

    /// <summary>Perform the number of redos given by iterations</summary>
    public void Redo(int iterations)
    {
        for (int index = 0; index < iterations; index++)
        {
            if (_redoCommands.Count > 0)
            {
                var myCommand = _redoCommands.Pop();
                myCommand.Execute();
                _undoCommands.Push(myCommand);
            }
        }
    }
}

And, there we go. Observe that now, when new commands are to be added, all a maintenance programmer has to do is author a new class. That’s a much better paradigm. Any bugs related to the mechanics of do/undo/redo are completely separate from the commands themselves.

Some might argue that the new invoker/DocumentBuilder lacks expressiveness in its API (having Execute(IDocumentCommand) instead of AddNode(string) and PrintDocument()), but I disagree:

var myInvoker = new DocumentBuilder();
myInvoker.Execute(new AddNodeCommand("Hoowa"));
myInvoker.Execute(new PrintCommand());
myInvoker.Undo(2);
myInvoker.Execute(new PrintCommand());

Execute(new AddNodeCommand(nodeName)) seems just as expressive to me as AddNode(nodeName), if slightly more verbose. But even if it’s not, the tradeoff is worth it, in my book. You now have the ability to plug new commands in anytime by implementing the interface, and DocumentBuilder conforms to the open/closed principle — it’s only going to change if a bug is found in the do/undo/redo logic, not when you add new functionality (incidentally, having only one reason to change also makes it conform to the single responsibility principle).
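
To drive home the open/closed payoff, consider the node rename feature requested earlier. With this design, it becomes just another IDocumentCommand implementation, with no changes at all to DocumentBuilder. The sketch below is mine, not from the original post, and it naively assumes node names are unique enough for undo to make sense:

public class UpdateNodeNameCommand : IDocumentCommand
{
    private readonly string _oldName;
    private readonly string _newName;

    private XDocument _document = new XDocument();
    public XDocument Document { get { return _document; } set { _document = value ?? _document; } }

    /// <summary>Capture the old and new names up front, just like AddNodeCommand captures its node name</summary>
    public UpdateNodeNameCommand(string oldName, string newName)
    {
        _oldName = oldName ?? string.Empty;
        _newName = newName ?? string.Empty;
    }

    /// <summary>Rename all matching elements (ToList() so we don't mutate while enumerating)</summary>
    public void Execute()
    {
        foreach (var myElement in Document.Root.Elements(_oldName).ToList())
        {
            myElement.Name = _newName;
        }
    }

    /// <summary>Revert the rename</summary>
    public void UndoExecute()
    {
        foreach (var myElement in Document.Root.Elements(_newName).ToList())
        {
            myElement.Name = _oldName;
        }
    }
}

Client code just calls myInvoker.Execute(new UpdateNodeNameCommand("Hoowa", "Hoowa2")) and gets undo/redo support for free.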

A More Official Explanation

dofactory defines the command pattern this way:

Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.

The central, defining point of this pattern is the idea that a request or action should be an object. This is an important and not necessarily intuitive realization. The natural tendency would be to implement the kind of ad-hoc logic from the initial implementation, since we tend to think of objects as things like “Car” and “House” rather than concepts like “Add a node to a document”.

But, this different thinking leads to the other part of the description – the ability to parameterize clients with different requests. What this means is that since the commands are stored as objects with state, they can encapsulate their own undo and redo, rather than forcing the invoker to do it. The parameterizing is the idea that the invoker operates on passed in command objects rather than doing specific things in response to named methods.

What is gained here, then, is the ability to put commands into a stack, queue, list, etc., and operate on them without specifically knowing what it is they do. That is a powerful ability, since separating and decoupling responsibilities is often hard to accomplish when designing software.
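
As a quick, hypothetical illustration of that idea using the types from this post, you could queue up commands now and execute them later, with the draining code neither knowing nor caring what each command actually does:

// Illustrative sketch only -- batch up commands now and run them later.
var myPendingWork = new Queue<IDocumentCommand>();
myPendingWork.Enqueue(new AddNodeCommand("Hoowa"));
myPendingWork.Enqueue(new PrintCommand());

// The invoker executes whatever it is handed, without inspecting it.
var myInvoker = new DocumentBuilder();
while (myPendingWork.Count > 0)
{
    myInvoker.Execute(myPendingWork.Dequeue());
}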

Other Quick Examples

Here are some other places that the command pattern is used:

  1. The ICommand interface in C#/WPF for button click and other GUI events.
  2. Undo/redo operations in GUI applications (i.e. Ctrl-Z, Ctrl-Y).
  3. Implementing transactional logic for persistence (thus providing atomicity and the ability to roll back)

A Good Fit – When to Use

Look to use the command pattern when there is a common set of “meta-operations” surrounding commands; that is, when you find yourself with requirements along the lines of “revert the last action, whatever that action may have been.” This is an indicator that there are going to be operations on the commands themselves beyond simple execution. In scenarios like this, it makes sense to have polymorphic command objects that have some notion of state.

Square Peg, Round Hole – When Not to Use

As always, YAGNI applies. For example, if our document builder were only ever going to be responsible for adding nodes, this pattern would have been overkill. Don’t go implementing the command pattern on any and all actions that your program may take — the pattern incurs complexity overhead in the form of additional classes and a polymorphic hierarchy.

So What? Why is this Better?

As we’ve seen above, this makes code a lot cleaner in situations where it’s relevant and it makes it conform to best practices (SOLID principles). I believe that you’ll also find that, if you get comfortable with this pattern, you’ll be more inclined to offer richer functionality to clients on actions that they may take.

That is, implementing undo/redo or atomicity may be something you’d resist or avoid as it would entail a great deal of complexity, but once you see that this need not be the case, you might be more willing or even proactive about it.

In short, using this pattern where appropriate is better because it provides for cleaner code, fewer maintenance headaches, and more clarity.

Ubuntu and Belkin Dongles Revisited

Previously, I posted about my odyssey to get Belkin wireless dongles working with Ubuntu. Actually, the previous post was tame compared to what I’ve hacked together with these things over the years, including getting them to work on Damn Small Linux, where I had to ferret out the text for the entire wpa_supplicant configuration using kernel messages from dmesg. But, I digress.

I’m in the middle of creating an ad-hoc “music throughout the house” setup for my home automation, and this involves a client computer in most rooms in the house. Over the years, I’ve accepted donations of computers that range in manufacture date from 1995 to 2008, and these are perfect for my task. Reappropriated and freed from their Windows whatever, they run ably if not spectacularly with Xubuntu (and, in some cases, DSL or Slackware when that’s too much for a machine that maxes out at 64 megs of RAM).

So, I have this setup in most rooms, and I just remodeled my basement, which was the last room to get the setup. I had one of these things working with the dongle and everything, but the sound card was this HP Pavilion special that was integrated with a fax card or something, and the sound just wasn’t happening. So, after sort of borking it while trying to configure, I scrapped the effort and reappropriated an old Dell.

Each time I do this, I grab the latest and greatest Ubuntu, and this time was no different. Each time, I check to see if maybe, just maybe, I won’t have to pull the Belkin drivers off of the CD and use ndiswrapper, and lo and behold, this was the breaking point – I finally didn’t.

I wish I could say it worked out of the box, but alas, not quite. I plugged in the dongle and the network manager popped up, and sure enough it was detecting wireless networks, but when I put in all of my credentials, it just kept prompting me for a password. I remembered that NetworkManager had difficulty with these cards and the WPA-PSK security protocol, so I tried another network manager: wicd. Bam! Up and running.

So, for those keeping score at home, if you have Ubuntu 11.10 (Oneiric Ocelot) and a Belkin dongle, all you need to do is:

sudo apt-get install --reinstall wicd
sudo service network-manager stop
sudo apt-get remove --purge network-manager network-manager-gnome
sudo service wicd restart

And, that’s it. You should be twittering and facebooking and whatever else in no time.

Edit:

Since making this post, I set up another machine in this fashion, and realized that I made an important omission. The wicd wireless setup did not just work out of the box with WPA2. I had to modify my /etc/network/interfaces file to look like this:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
address {my local IP}
gateway 192.168.2.1
dns-nameservers 192.168.2.1
netmask 255.255.255.0
wpa-driver wext
wpa-ssid {my network SSID}
wpa-ap-scan 2
wpa-proto WPA RSN
wpa-pairwise TKIP CCMP
wpa-group TKIP CCMP
wpa-key-mgmt WPA-PSK
wpa-psk {my encrypted key}

For my network, I use static IPs and this setup was necessary to get that going as well as the encryption protocol. Without this step, the setup I mentioned above does not work out of the box — wicd continuously fails with a “bad password” message. Adding this in fixed it.

Cheers

Poor Man’s Automoq in .NET 4

So, the other day I mentioned to a coworker that I was working on a tool that would wrap Moq and provide expressive test double names. I then mentioned the tool AutoMoq, and he showed me something that he was doing. It’s very simple:

[TestClass]
public class FeatureResetServiceTest
{
    #region Builders

    private static FeatureResetService BuildTarget(IFeatureLocator locator = null)
    {
        var myLocator = locator ?? new Mock<IFeatureLocator>().Object;
        return new FeatureResetService(myLocator);
    }

    #endregion

    /// <summary>If you're going out of your way to pass null instead of empty, something is wrong</summary>
    [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
    public void ResetToDefault_Throws_NullArgumentException()
    {
        var myService = BuildTarget();

        ExtendedAssert.Throws<ArgumentNullException>(() => myService.ResetFeaturesToDefault(null));
    }
}

The concept here is very simple. If you’re using a dependency injection scheme (manual or IoC container), your classes may evolve to need additional dependencies, which sucks if you’re instantiating the class under test inline in every test method. That means a flurry of find/replace every time a constructor changes, which is a deterrent from changing your constructor signature, even when that’s the best way to go.

This solution, while not perfect, definitely eliminates that in a lot of cases. The BuildTarget() method takes an optional parameter (hence the requirement for .NET 4) for each constructor parameter, and if said parameter is not supplied, it creates a simple mock.
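
For instance, if FeatureResetService later picked up a second dependency (say, a hypothetical IFeatureValidator, which is not part of the real example), only the builder would need to change, and all of the existing test methods would keep compiling as-is:

private static FeatureResetService BuildTarget(IFeatureLocator locator = null, IFeatureValidator validator = null)
{
    // Any dependency the test doesn't care about gets a simple mock by default
    var myLocator = locator ?? new Mock<IFeatureLocator>().Object;
    var myValidator = validator ?? new Mock<IFeatureValidator>().Object;
    return new FeatureResetService(myLocator, myValidator);
}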

There are some obvious shortcomings — if you remove a dependency from your constructor, you still have to go through all of your tests and remove any extra setup code for that dependency, for instance — but this is still much better than the old-fashioned way.

I’ve now adopted this practice where suitable and am finding that I like it.

(ExtendedAssert is a utility that I wrote to address what I consider shortcomings in MSTest).

Getting Started with Home Automation

I’m going to be doing a series of posts on home automation, starting out targeting beginner concepts and getting more in depth from there. My hope is that when these are complete, someone with some technical and home improvement acumen can read back through the series as an instruction manual of sorts.

What Is Home Automation?

Home automation is somewhat hard to define. Out of curiosity, I poked around and found as many different definitions as places offering definitions. The definition I most liked came from eHow:

Home automation [allows] individuals to automatically control appliances and security systems within their home through the use [of] technology.

Other sites talked specifically about the use of computers and various products, but this one is nice and general. To my way of thinking, home automation is the use of any technology that helps automate tasks in the home. This may include turning on lights, starting appliances, opening blinds, etc. So, anything from the “home of the future” to The Clapper can be considered home automation.

A Brief History

The concept of home automation has been around for a long time. In the early 1900s, the “house of the future” was the stuff of speculation at world fairs and in the studios of inventors. No doubt many interesting concepts came out of that, but nothing particularly interesting for our purposes here (though one might pedantically argue that appliances such as dishwashers or devices like thermostats are a form of home automation). As the 1900s wore on, the concept of remotely controllable devices, such as televisions, emerged, providing an early glimpse of what was to come.

In 1975, a Scottish company called Pico Electronics developed the X10 protocol. This was a way to use the existing electrical wiring within a house for communication between a sender and a receiver. This protocol was used to transmit simple messages across the wire. A controller could send an “On” message, and a device elsewhere in the house would receive this message and execute some appropriate action. For example, a “lamp module” plugged into the wall, with a lamp plugged into it, could turn on the lamp at the request of a signal sent from another room.

Over the course of time, the uses of X10 technology expanded from simple on and off to signals allowing control over home security, heating, ventilation and air conditioning (HVAC), and other home technologies. X10 is reliable and established, but it does have some limits, and those limits have become more obvious lately as the number of devices using house power has skyrocketed. Devices, especially modern ones, tend to produce “noise” on the electrical lines, and the more devices we plug in, the more noise is generated.

A number of other protocols and technologies have emerged as a result of this, including Insteon, Z-Wave, Lutron and more. And, there is still X10 itself, which is a little confusing, as it is both the name of a protocol and the name of an organization that sells devices that implement the protocol. The newcomers tend either to use a different protocol over the electrical system (some being “backward” compatible with X10 and others not) or else to use wireless communication. Often these are more effective than the original X10, but also pricier.

For a time in the 80s and 90s, big box stores like Radio Shack and Home Depot carried X10 products, but that seems not to be the case anymore. Some of them now carry the higher end competitors such as Lutron and Insteon. But, if you’re interested in purchasing any of these devices, you can find them for sale in many places online, including eBay.

The fact that you don’t find these items for sale in big box stores does not mean that the home automation trend has cooled off, per se. As society expects more and more things to be automated, the home is no exception. The reason that these items are not carried so much anymore, in my opinion, is that the average consumer is not a combination of electrical engineer and carpenter. People want devices that they can plug in and have “just work” with a minimum of configuration. So, people hire contractors to wire these sorts of things up for them, rather than simply buying them at the local hardware store.

Our First Crack at Home Automation

So for anyone still reading, sold, and ready to jump in, I will introduce a first home automation project that you can execute as an absolute beginner. You’re going to buy two items, and it’s going to cost roughly $25 to $30, depending on where you order. One item is a keychain remote, and the other is an X10 “lamp module” paired with a wireless transceiver.

(You can buy this setup on Amazon for $30 at the time of writing, though a quick Google search showed prices as low as $16, which may omit a shipping charge.)

When you get the devices in the mail, take out the lamp module and observe that it has a red dial on it. The red dial corresponds to the “house” code, one of 16 letters. All X10 devices have a house code and a unit code, and these together form the “address” of the device. The house code, as mentioned, is one of 16 letters, and the unit code is one of 16 numbers. This means that X10 addresses are A1, D12, J4, etc. Your lamp module has all available house codes, but only has unit codes “1” and “9”. These are the unit codes it would respond to if you were sending commands over the electrical wiring, but it will respond to any unit code sent wirelessly, which is how your remote will work.

If you now take out your remote and its instruction manual, you will see that you can set it to send signals to any house code and unit code. The unit code is essentially irrelevant here for your purposes. You just want to make sure the house code matches the lamp module’s. At this point in your home automation ventures, these are the only devices you have, so just leave them both at house code A.

Now, plug a lamp into the lamp module and the module into the wall. You should now be able to turn your light on and off using the keychain. The range on this should be comparable to that of your home wifi, so you could turn the lights on in your house from your car in the driveway or garage, which is handy.

So, to recap, you can basically just unwrap the devices and set them up without messing with the unit or house codes at all, since your lack of other home automation stuff means you don’t need to worry about compatibility. You now have home automation going for $25 or $30. If you’re interested in doing more, no worries – I’ll have plenty more segments on this.

More Info

For now, I’ll leave off with a series of links that I’ve found over the course of time that will hopefully be helpful, but not too overwhelming.

And, no worries if I haven’t covered all bases – I’ll have plenty more posts.

Cheers!

Dell Keyboard Failure

The Symptoms

This is yet another entry of the “hope this saves someone time and aggravation” variety. I’m adding a new category called “Lessons Learned” that I’ll be tagging these with going forward.

So, I was playing Civilization V and I stepped away for a few minutes, only to find the screensaver on when I came back. I moved the mouse and nothing. Enter key and nothing. Ctrl-Alt-Delete, nothing. Several minutes of angrily mashing my keyboard and, not surprisingly, nothing. Given the graphics-heavy nature of Civ V, I assumed that my machine had crashed (this happens sometimes with Civ V), so I rebooted.

When I did that, I was greeted by a “keyboard failure” error at boot. So, I figured my wireless keyboard had run out of batteries, and I went all the way down to the basement, grumbling, got new triple-As and slapped them in. Rebooted, and keyboard failure. Tried a different USB port and keyboard failure. Tried my keyboard in another machine and success! Uh, oh. To the interwebs!

After googling around, I didn’t find any definitive solution. There were some scorched-earth suggestions, my favorite of which was “replace the motherboard.” There were some shotgun suggestions involving running without a CMOS battery, bending USB pins, switching the wireless receiver around from port to port, and various other procedural mumbo-jumbo that had probably coincidentally worked for someone somehow.

The Solution

I got lucky, though. The first scattershot step I took seemed the simplest, and it worked, so I know exactly what worked. I powered down and disconnected from power, and then popped a two-pin jumper off of a three-pin block on the motherboard for a few seconds and replaced it. Voila!

The appropriate disclaimers apply about unplugging the machine, discharging capacitors, and all that other stuff that I never remember to do but wouldn’t want to be liable for you not doing. But, that did the trick. Back up and running with no new keyboard, and certainly no new motherboard. Excuse to buy a new machine averted… (d’oh!)

Compiling XML! (Not really)

So, here’s another one filed in the “hope this saves someone some grief” category. I was cruising along with my home automation ‘droid app, setting up my layouts and wiring up my button click handlers, when, all of a sudden, I started getting weird build errors. I couldn’t run or even build. When I tried to build, I got messages about different things being redefined in my main XML layout file. This was strange, since I hadn’t edited it directly, but had been using the graphical layout tool.

As I inspected the errors more closely, I began to understand that there was apparently some rogue XML somewhere. A bug in Eclipse? With the Android SDK plugin? Had I been hacked by someone with a very strange set of motives? I opened up my layout folder, and there, next to my layout file, sat a copy of it with “.out” wedged between the file name and the extension.

Wat?

After some googling around and experimentation, I discovered that this file is generated if you hit the Play icon while the XML file is open as your selected window in the editor. Perhaps that’s something that Eclipse users are used to, but coming from a pretty solid couple of years of Visual Studio, this had me mystified. So, lesson learned. Don’t run while you have XML open in Eclipse (or anything else you don’t want slapped with a .out between filename and extension and apparently included in compilation).

Cheers :)

Android: Let There Be Internet!

I’ve been a little lax in documenting my experience as a neophyte Android developer, and for that I apologize. Tonight, I have a quick entry that will hopefully save you some time.

I’m working on an open source home automation server. I’ve had a prototype functional for a couple of years now that runs as a Java-based web server under Apache and controls the lights through a web interface front end and a low-level backend that interfaces with the house’s electrical system. I control this through any computer/phone/iPod/Wii/etc. that’s hooked to my home wifi, using the browser.

Recently, I’ve wet my beak a little with Android development, out of curiosity, and so my mission tonight was to take the layout I’d been working on and get it to, you know, actually do something. So, the simplest thing for me to do was have the app reproduce the POST request sent by the browsers to get the desired result. Here is the code I slapped together for this:

final Button myButton = (Button) findViewById(R.id.breakfastNookButton);
myButton.setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {

        HttpClient myClient = new DefaultHttpClient();
        HttpPost myPost = new HttpPost("you-get-the-idea");

        try {
            myClient.execute(myPost);
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});

Not the prettiest thing I’ve ever written, but this is a throw-away prototype to prove the concept (though I’m going to refactor tomorrow – I can’t help myself, prototype or not).

So, I fired it up and nothing happened. I checked out the stack trace and was getting an UnknownHostException, which didn’t make sense to me, since I was using the home automation server’s IP address. I used the browser on my phone, and I could turn the light off. I googled around a bit and found a bunch of information about things that can go wrong with the emulator, but I’m debugging right on my phone since the emulator is painfully slow.

Finally, I stumbled across a suggestion and, through some experimentation, got it right. I needed to give the app permission to use the internet! So, I updated my manifest to the following:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
      package="daedtech.LightController"
      android:versionCode="1"
      android:versionName="1.0">
     <uses-permission android:name="android.permission.INTERNET" />
    <application android:icon="@drawable/icon" android:label="@string/app_name" android:debuggable="true">
...

The important line is the “uses-permission” node and, voila, let there be dark! My light turned off. Hope that helps someone out there struggling to understand an UnknownHostException for a known host.

(Note, the node level in the XML file is important — it must be a child of manifest, NOT application).

Poor Man’s Code Contracts

What’s Wrong with Code Contracts?!?

Let me start out by saying that I really see nothing wrong with code contracts, and what I’m offering is not intended as any kind of replacement for them in the slightest. Rather, I’m offering a solution for a situation where contracts are not available to you. This might occur for any number of reasons:

  1. You don’t know how to use them and don’t have time to learn.
  2. You’re working on a legacy code base and aren’t able to retrofit wholesale or gradually.
  3. You don’t have approval to use them in a project to which you’re contributing.

Let’s just assume that one of these, or some other consideration I hadn’t thought of is true.

The Problem

If you’re coding defensively and diligent about enforcing preconditions, you probably have a lot of code like this:

public void DoSomething(Foo foo, Bar bar)
{
  if(foo == null)
  {
    throw new ArgumentNullException("foo");
  }
  if(bar == null)
  {
    throw new ArgumentNullException("bar");
  }

  //Finally, get down to business...
}

With code contracts, you can compact that guard code and make things more readable:

public void DoSomething(Foo foo, Bar bar)
{
  Contract.Requires(foo != null);
  Contract.Requires(bar != null);

  //Finally, get down to business...
}

I won’t go into much more detail here — I’ve blogged about code contracts in the past.

But, if you don’t have access to code contracts, you can achieve the same thing, with even more concise syntax.

public void DoSomething(Foo foo, Bar bar)
{
  _Validator.VerifyParamsNonNull(foo, bar);

  //Finally, get down to business...
}

The Mechanism

This is actually pretty simple in concept, but it’s something that I’ve found myself using routinely. Here is an example of what the “Validator” class looks like in one of my code bases:

    public class InvariantValidator : IInvariantValidator
    {
        /// <summary>Verify a (reference) method parameter as non-null</summary>
        /// <param name="argument">The parameter in question</param>
        /// <param name="message">Optional message to go along with the thrown exception</param>
        public virtual void VerifyNonNull<T>(T argument, string message = "Invalid Argument") where T : class
        {
            if (argument == null)
            {
                throw new ArgumentNullException("argument", message);
            }
        }

        /// <summary>Verify a parameters list of objects</summary>
        /// <param name="arguments"></param>
        public virtual void VerifyParamsNonNull(params object[] arguments)
        {
            VerifyNonNull(arguments);

            foreach (object myParameter in arguments)
            {
                VerifyNonNull(myParameter);
            }
        }

        /// <summary>Verify that a string is not null or empty</summary>
        /// <param name="target">String to check</param>
        /// <param name="message">Optional parameter for exception message</param>
        public virtual void VerifyNotNullOrEmpty(string target, string message = "String cannot be null or empty.")
        {
            if (string.IsNullOrEmpty(target))
            {
                throw new InvalidOperationException(message);
            }
        }
    }

Pretty simple, huh? So simple that you might consider not bothering, except…

Except that for me, personally, anything that saves lines of code, repeated typing, and cyclomatic complexity is good. I’m very meticulous about that. Think of every place in your code base that you have an if(foo == null) throw paradigm, and add one to a cyclomatic complexity calculator. This is order O(n) on the number of methods in your code base. Contrast that with 1 when using the validator. Not O(1), but actually 1.

I also find that this makes my methods substantially more readable at a glance, partitioning the method effectively into guard code and what you actually want to do. The vast majority of the time, you don’t care about the guard code, and don’t really have to think about it in this case. It doesn’t even briefly occupy your thoughts as you figure out where the actual guts of the method are. You’re used to seeing a precondition/invariant one-liner at the start of a method, and you immediately skip it unless it’s the source of your issue, in which case you inspect it.

I find that streamlined context valuable. There’s a clear place for the guard code and a clear place for the business logic, and I’m used to seeing them separated.

Cross-Cutting Concerns

Everything I said above is true of Code Contracts as well as my knock off. Some time back, I did some research on Code Contracts, and during the course of that project, we devised a way to have Code Contracts behave differently in debug mode (throwing exceptions) than in release mode (supplying sensible defaults). This was part of an experimental effort to wrap simple C# classes and create versions that offered the “no throw guarantee.”

But, Code Contracts work with explicit static method calls. With this interface validator, I can use an IoC container to define run-time configurable, cross-cutting behavior on precondition/invariant violations. That creates a powerful paradigm where, in some cases, I can throw exceptions, in other cases, I can log and throw, or in still other cases, I can do something crazy like pop up message boxes. The particulars don’t matter so much as the ability to plug in a behavior at configuration time and have it cross-cut throughout the application. (Note, this is only possible if you make your Validator an injectable dependency.)
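
As a rough sketch of the kind of thing I mean (illustrative only, not code from a real project), you could derive an alternate validator and let the container decide which flavor gets injected:

public class LoggingInvariantValidator : InvariantValidator
{
    /// <summary>Log the violation before letting the base class throw</summary>
    public override void VerifyNonNull<T>(T argument, string message = "Invalid Argument")
    {
        if (argument == null)
        {
            Console.Error.WriteLine("Precondition violated: " + message);
        }
        base.VerifyNonNull(argument, message);
    }
}

Any class that takes an IInvariantValidator through its constructor then picks up whichever behavior the container is configured to supply, without knowing or caring which one it got.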

Final Thoughts

So, I thought that was worth sharing. It’s simple — perhaps so simple as to be obvious — but I’ve gotten a lot of mileage out of it in scenarios where I couldn’t use contracts, and sometimes I find myself even preferring it. There’s no learning curve, so other people don’t look at it and say things like “where do I download Code Contracts” or “what does this attribute mean?” And, it’s easy to fully customize. Of course, this does nothing about enforcing instance level invariants, and to get the rich experience of code contracts, you’d at the very least need to define some kind of method that accepted a delegate type for evaluation, but this is, again, not intended to replace contracts.
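
For what it’s worth, that delegate-accepting method could be as simple as something like the following sketch (my addition, just to show the shape of it):

/// <summary>Verify an arbitrary condition supplied by the caller as a delegate</summary>
public virtual void Verify(Func<bool> condition, string message = "Precondition violated")
{
    if (condition == null || !condition())
    {
        throw new InvalidOperationException(message);
    }
}

// Usage: _Validator.Verify(() => index >= 0 && index < myList.Count);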

Just another tool for your arsenal, if you want it.

Scheduled Task Problems And Solutions in Windows 7

This is going here for my own sake as much as anything. I’ve been banging my head against a wall for a bit now, trying to figure out why I can’t get a scheduled task to run properly in Windows 7. Two things were going wrong. The first one was that I couldn’t get the start-in setting to work properly. This is neatly explained and addressed in this post.

Long story short, unlike everywhere else in Windows, the “start-in” text box randomly doesn’t demand or even support quotes around directory names with spaces in the paths. So, just remove the quotes.

The second issue I was having was that I was running a task to execute a program I had written that reads an XML config file to point it to the files to use. One of these files was addressed by a mapped network drive. This worked fine when I ran the actual executable, but after a lot of bad noise and experimenting, I discovered that it wouldn’t work with the scheduled task. For the scheduled task, I had to use the UNC path to the file in my XML configuration file. I can only speculate that this has something to do with the task scheduler not being tied closely enough to the owning user’s session to see mapped network drives, or something.

In the end, it doesn’t matter. It’s kind of weird, but for anyone who finds themselves in that edge-case situation, try dealing with UNC paths exclusively.
