DaedTech

Stories about Software


Setting Up AJAX and JQuery

So, in response to feedback from my previous post about my home automation server site, I’ve decided to incorporate AJAX and JQuery into my solution. This is partially because it’s The Right Thing ™ and partially because I’m a sucker for the proposition of learning a new technology. So, here are the steps I took to go from my previous solution to a working one using AJAX, including downloads and other meta-tasks.

The first thing that I did was poke around until I found this tutorial, which is close enough to what I want to do to serve as a blueprint. I found it very clear and helpful, but I realized that I had some legwork to do. I set up my Java code as per the tutorial, but on the client side in JSP, I realized things wouldn’t work, since I couldn’t very well source a JQuery library that didn’t exist in the workspace. I poked around on my computer a bit and saw that I did have various jquery.js libraries, but they were part of Visual Studio, Android, and other concerns, so I figured I’d download one specifically for this task rather than try to reappropriate them.

So, I went to jquery.com. I poked around a bit until I found the most recent “minified” version, since that’s what the example was using, and discovered it here. I was a little surprised to find that the ‘download’ would consist of copying it and pasting it into a local text file that I named myself, but I guess, what did I expect? This is a scripting language executed by browsers, not a C++ compiler or the .NET framework or something.

In Eclipse, I made a directory under my WebContent folder called “js”, and I put my new jquery-1.7.1.min.js file in it. Now, I had something to link to in my JSP page. Here is the actual link that I put in:

<script type="text/javascript" src="js/jquery-1.7.1.min.js"></script>

Just to make sure my incremental progress was good, I built and ran on localhost, and my project now errored on build and at runtime. For some reason, Eclipse seems not to like the minified version, so I switched to using the non-minified one. I still got a runtime error in the Eclipse browser (though not in Chrome), and the JavaScript file had warnings in it instead of errors. This was rectified by following the high-scoring (though strangely not accepted) answer on this Stack Overflow post.

However, it was at this point that I started to question how much of this I actually needed. I don’t particularly understand AJAX and JQuery, but I’m under the impression that JQuery is essentially a library that simplifies AJAX and perhaps some other things. The tutorial that I was looking at was describing how to send POST variables and get a response, and how this was easier with JQuery. But I actually don’t need the variables, nor do I need a response at this time. So, given the JQuery runtime errors that were continuing to annoy me, I deleted JQuery from the project and resolved to work this out at a later date. From here, after a bit of poking around, I realized that using AJAX from within JavaScript was, evidently, pretty simple. I just needed to instantiate an XMLHttpRequest object and call some methods on it. Here is what I changed my kitchen.jsp page to be:

<script type="text/javascript">
	function off() {
		var http = new XMLHttpRequest();
		http.open("POST", "kitchen/off.html", true);
		http.send("");
	}
	function on() {
		var http = new XMLHttpRequest();
		http.open("POST", "kitchen/on.html", true);
		http.send("");
	}
</script>

<section id="content">
	<table>
		<tr><td><a href="javascript:void(0)" onClick="on()" class="button">Overhead On</a></td><td><a href="javascript:void(0)" class="button" onClick="off()">Overhead Off</a></td></tr>
	</table>
	</section>
    
</body>
</html>

Pretty simple, though when you have no idea what you’re doing, it takes a while to figure out. :)

I instantiate a request, populate it, and send it. Given my RESTful scheme, all the info the server needs is contained in the path of the request itself, so it isn’t necessary for me to parse the page or ask for a response. I added the javascript:void(0) calls so that the buttons would still behave like actual, live links. I think that later, when a response is actually needed, I’ll probably bring JQuery back and revisit the tutorial that I found. Here is my updated controller class.

@Controller
@RequestMapping("/kitchen")
public class KitchenController {

	@RequestMapping("/kitchen")
    public ModelAndView kitchen() {
    	String message = "Welcome to Daedalus";
        return new ModelAndView("kitchen", "message", message);
    }
	
	@RequestMapping(value="/{command}", method = RequestMethod.POST)
	public @ResponseBody String kitchen(@PathVariable String command) {
		
		try {
			Runtime.getRuntime().exec("/usr/local/heyu-2.6.0/heyu " + command + " A2");
		} catch (IOException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}
		
		return "";
	}
}

I’m fairly pleased with the asynchronous setup and the fact that I’m not playing weird games with redirect anymore. I have a unit test project set up, so I think I’m now going to get back to my preference for TDD, and factor the controller to handle more arbitrary requests, making use of an interfaced service for the lights and a data-driven model for the different rooms and appliances. I’ve got my eye on MongoDB as a candidate for data storage (I’m not really doing anything relational), but if anyone has better ideas, please let me know.
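As a sketch of where that refactoring is headed, the interfaced service might look something like this. To be clear, names like LightService, HeyuLightService, and FakeLightService are my own invention here, not anything that exists in the project yet:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the controller would depend on this interface
// instead of invoking Runtime directly.
interface LightService {
    void execute(String command, String unitCode);
}

// Production implementation shells out to heyu, as the controller does today.
class HeyuLightService implements LightService {
    public void execute(String command, String unitCode) {
        try {
            Runtime.getRuntime().exec("/usr/local/heyu-2.6.0/heyu " + command + " " + unitCode);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

// A fake implementation records commands instead of touching hardware,
// which is what makes TDD on the controller practical.
class FakeLightService implements LightService {
    final List<String> sent = new ArrayList<>();
    public void execute(String command, String unitCode) {
        sent.add(command + " " + unitCode);
    }
}
```

With the controller taking a LightService in its constructor, unit tests can hand it the fake and assert on what was sent without ever shelling out to heyu.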


The Outhouse Anti Pattern

Source Control Smells?

I was reading through some code the other day, and I came across a class that made me shudder a little. Interestingly, it wasn’t a code smell or a name smell. I wasn’t looking at the class itself, and it had an innocent enough name. It was a source control smell.

I was looking through source control for something unrelated, and for some reason this class caught my eye. I took a look at its version history and saw that it had been introduced and quickly changed dozens of times by several people. Even assuming that you like to check code in frequently, it’ll take you what, one or two checkins to write a class, and then maybe a few more to tweak its interface or optimize its internals, if need be? Why were there all these cooks in this kitchen, and what were they doing in there? The changes to the class continued like that over the course of time, with periods of few changes, and then bouts of frenzied, distributed activity. How strange.

I opened the class itself to take a look, and the source control checkin pattern immediately made sense to me. The class was a giant static class that did nothing but instantiate ICommand implementations and expose them as public fields. I haven’t tagged this class with C# for a reason — this anti-pattern transcends languages — so, I’ll briefly explain for non C# savvy readers. ICommand is an interface that defines Execute and CanExecute methods – sort of like a command pattern command, but without forcing implementation of UnExecute() or some such thing.

So, what was going on here was that this class was an already-large and robustly growing dumping repository for user actions in the application. Each property in the class was an instance of an ICommand implementation that allowed specifying delegates for the Execute and CanExecute methods, so they looked something like this:


public class ApplicationCommands
{
  //Command accepts delegates as constructor params
  public static readonly ICommand FooCommand = new Command(ExecuteFoo, CanExecuteFoo);

  public static void ExecuteFoo()
  {
    //Do some foo thing, perhaps with shared state (read: global variables) in the class
  }

  public static bool CanExecuteFoo()
  {
    //Return a boolean based on some shared state (read: global variables) in the class
    return true;
  }

  //Rinse, repeat, dozens and dozens, and probably eventually hundreds and thousands of times.
}

So, the ‘pattern’ here is that if you want to define some new user action, you open this class and find whatever shared global state will inform your command, and then add a new command instance field here.

A Suitable Metaphor

I try to stay away from the vulgar in my posting, so I beg forgiveness in advance if I offend, but this reminds me most of an outhouse at a campground. It’s a very public place that a lot of people are forced to stop by and… do their business. The only real reason to use it is that some people would probably be upset and offended if they caught you not using it (e.g. whoever wrote it), as there is no immediately obvious benefit provided by the outhouse beyond generally approved decorum. Like the outhouse, over time it tends to smell worse and worse and degenerate noticeably. If you’re at all sensitive to smells, you probably just go in, hold your nose, and get out as quickly as possible. And, like the high-demand outhouse, there are definitely “merge conflicts” as people are not only forced to hold their noses, but to wait in line for the ‘opportunity’ to do so.

There are some periodic attempts to keep the outhouse clean, but this is very unpleasant for whoever is tasked with doing so. And, in spite of the good intentions of this person, the smell doesn’t exactly go away. Basic etiquette emerges in the context of futile attempts to mitigate the unpleasantness, such as signs encouraging people to close the lid or comments in the code asking people to alphabetize, but people are generally in such a hurry to escape that these go largely unheeded.

At the end of the day, the only thing that’s going to stop the area from being a degenerative cesspool is the removal of the outhouse altogether.

Jeez, What’s So Bad About This

The metaphor exaggerates my tone a bit, but I certainly think of this as an anti-pattern. Here’s why.

  1. What is this class’s single responsibility? The Single Responsibility Principle (SRP) wisely advises that a class should have exactly one reason to change. This class has as many reasons as it has commands. And, you don’t get to cop out by claiming that its single responsibility is to define and store information about everything a user might ever want to do (see God Class).
  2. This class is designed to be modified each and every time the GUI changes. That is, this thing will certainly be touched with every release, and it’ll probably be touched with every patch. The pattern here is “new functionality — remember to modify ApplicationCommands.cs”. Not only is that a gratuitous violation of (even complete reversal of) the Open/Closed Principle, but if you add a few more outhouses to the code base, people won’t even be able to keep track of all the changes that are needed to alter some small functionality of the software.
  3. Dependency inversion is clearly discouraged. While someone could always access the static property and inject it into another class, the entire purpose of making them public fields in a static class is to encourage inline use of them as global ‘constants’, from anywhere. Client code is not marshaling all of these at startup and passing them around, but rather simply using them in ad-hoc fashion wherever they are needed.
  4. Besides violating S, O, and D of SOLID, what if I want a different command depending on who is logged in? For example, let’s say that there is some Exit command that exits the application. Perhaps I want the Exit command to exit immediately when a read-only user is logged in, but prompt with “are you sure” for a read-write user. Because of the static nature of the property accessors here, I have to define two separate commands and switch over them depending on which user is logged in. Imagine how nasty this gets if there are many kinds of users with the need for many different commands. The rate of bloat of this class just skyrocketed from linear with functionality to proportional to (F*R*N), where F is functionality items, R is number of roles, and N is nuance of behavior in roles.
  5. And, the rate of bloat of the client classes just went up a lot too, as they’ll need to figure out who is logged in and thus which of the commands to request.

  6. And, of course, there is the logistical problem that I noticed in the first place. This class is designed to force multiple developers to change it, often simultaneously. That’s a lot like designing a traffic light that’s sometimes green for everyone.

Alternative Design

So, how to work toward a more SOLID, flexible design? For a first step, I would make this an instance class, and hide the delegate methods. This would force clients to be decoupled from the definition, except in one main place. The second step would be to factor the commands with the more complex delegates into their own implementations of ICommand and thus their own classes. For simpler classes, these could simply be instantiated inline with the delegate methods collapsed to lambda expressions. Once everything had been factored into a dedicated class or flagged as declarable inline, the logic for generating these things could be moved into some sort of instance factory class that would create the various kinds of command. Now, we’re getting somewhere. The command classes have one reason to change (the command changes) and the factory has one reason to change (the logic for mapping requests to command instances changes). So, we’re looking better on SRP. The Open/Closed principle is looking more attainable too since the only class that won’t be closed for modification is the factory class itself as new functionality is added. Dependency inversion is looking up too, since we’re now invoking a factory that provides ICommand instances, so clients are not going out and searching for specific concrete instances on their own.
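A minimal sketch of that factory step might look like the following. Since this anti-pattern transcends languages, I’ll sketch it in Java; the Command interface here just mirrors the essentials of ICommand, and all the names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Mirrors the essentials of ICommand: can we run, and run.
interface Command {
    boolean canExecute();
    void execute();
}

// A command with nontrivial logic gets its own class,
// which now has exactly one reason to change.
class ExitCommand implements Command {
    public boolean canExecute() { return true; }
    public void execute() { /* shut the application down */ }
}

// The factory is the single place that maps requests to command instances;
// it is the only class modified when a new command is introduced.
class CommandFactory {
    private final Map<String, Supplier<Command>> registry = new HashMap<>();

    CommandFactory() {
        registry.put("exit", ExitCommand::new);
        // Simpler commands could be registered inline with lambdas
        // instead of dedicated classes.
    }

    Command create(String name) {
        Supplier<Command> supplier = registry.get(name);
        if (supplier == null) {
            throw new IllegalArgumentException("No command named " + name);
        }
        return supplier.get();
    }
}
```

Clients now ask the factory for commands by name rather than reaching into a pile of global static fields.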

The one wrinkle is the shared global state from the previous commands static class. This would have to be passed into the factory and injected into the commands, as needed. If that starts to look daunting, don’t be discouraged. Sure, passing 20 or 30 classes into your factory looks really ugly, but the ugliness was there all along — you were just hiding it in the outhouse. Now that the outhouse is gone, you’ve got a bit of work to do shoveling the… you get the idea. Point is, at least you’re aware of these dependencies now and can create a strategy for managing them and getting them into your commands.

If necessary, this can be taken even further. To satisfy complaint (4), if that is really an issue, the abstract factory can be used. Alternatively, one could use reflection to create a store of all instances of ICommand in the application, keyed by their names. This is a convention over configuration solution that would allow clients to access commands by knowing only their names at runtime, and not anything about them beyond their interface definition. In this scheme, Open/Closed is followed religiously, as no factory exists to modify with each new command. One can add functionality by changing a line of markup in the GUI and adding a new class. (I would advise against this strategy for the role/nuance based command scheme.) Another alternative would be to read the commands in from metadata, depending on the simplicity or complexity of the command semantics and how easily stored as data they could be. This is, perhaps, the most flexible solution of all, though it may be too flexible and problematic if the commands are somewhat involved.
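To make the abstract factory answer to complaint (4) concrete, here is another hypothetical Java sketch, with invented names. Each role gets its own factory producing role-appropriate command variants, so client code never switches over user types itself:

```java
// The command abstraction, as before.
interface Command {
    void execute();
}

// Two variants of the same user action, differing by nuance.
class ExitImmediately implements Command {
    public void execute() { /* just exit */ }
}

class ExitWithPrompt implements Command {
    public void execute() { /* ask "are you sure?" before exiting */ }
}

// Abstract factory: one implementation per role.
interface CommandFactory {
    Command createExit();
}

class ReadOnlyUserCommands implements CommandFactory {
    public Command createExit() { return new ExitImmediately(); }
}

class ReadWriteUserCommands implements CommandFactory {
    public Command createExit() { return new ExitWithPrompt(); }
}
```

At login, the application picks one CommandFactory and hands it to clients, so the role/nuance bloat lives in exactly one decision point instead of being smeared across the command store and every caller.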

These are just ideas for directions a solution might take. There is no magic bullet. I’d say the important thing, in general, is to recognize when you find yourself building an outhouse, and be aware of what problems it’s going to cause you. From there, my suggestions are all well and good in the abstract, but it will be up to you to determine how to proceed in transitioning to a better design.


Adding a Google Map to Android Application

I’m documenting here my first successful crack at adding a Google map to an Android application. Below are the steps I followed, with the prerequisite steps of having the Android SDK and Eclipse installed and being all set up for Android development (documented here).

  1. Created a new Android project
  2. Navigated to Help->Install New Software in Eclipse.
  3. Added a new source with URL http://dl.google.com/eclipse/plugin/3.7
  4. Selected all possible options (why not, right?) to have access to the Google API in general.
  5. This install took about 5 minutes, all told, and I had to restart Eclipse
  6. In my new project, I navigated to project properties and went to Android, where I selected (the newly available) “Google APIs”
  7. From here, I got excited and ran my project, and was treated to a heaping helping of fail in the form of an error message saying that I needed API version 9 and my device was running API version 8 (Android 2.2.1). I fixed this by opening the manifest and changing the uses-sdk tag’s android:minSdkVersion to 8. Then when I ran, I had hello world on my phone. (Later, when all was working, I just deleted this line altogether, which eliminated an annoying build warning)
  8. With that in place, I added the node
    <uses-library android:name="com.google.android.maps" />

    to my application node in the same manifest XML file.

  9. Then, as I discovered in an earlier post, I had to add
    <uses-permission android:name="android.permission.INTERNET" />
  10. From there, I followed the steps in this tutorial, starting at number 4.
  11. One thing to watch for in the tutorial is that you should add the lines about the map view after all the boilerplate code already in onCreate, or mapView will be null and you’ll dereference it.
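Concretely, the gotcha in that last step looks like this in onCreate. This is just a sketch assuming the tutorial’s layout names (e.g. R.id.mapview), and it only compiles inside an Android project:

```java
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);  // boilerplate first: this inflates the layout

    // Only after setContentView can findViewById locate the view; moved
    // above it, this returns null and the next line throws.
    MapView mapView = (MapView) findViewById(R.id.mapview);
    mapView.setBuiltInZoomControls(true);
}
```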

With all of this, if you ignore the bit about the API key, what you’ll wind up with is a map with no overlay on your phone. That is, it looks like everything is normal, including zoom icons, but the map hasn’t loaded yet. This was the state I found myself in when I decided that I’d take the last step and get the API key. This was not at all trivial.

I understand that if you’re familiar with the concept of signing software for distribution in an app market, this probably makes sense. But if this isn’t something you’ve been doing, it comes straight out of left field and, what’s more, I found no place where how to do this was described in any detail. So, I’ll offer that here.

  1. Navigate to your JAVA_HOME bin folder (wherever your java and javac executables are)
  2. Run the command
    keytool -v -list -keystore {keystore}

    . The -v is important because this gives you all the fingerprints (the default is SHA1, which won’t help you in the next step; you want MD5). The keystore is going to be debug.keystore, which is what your device uses for signing when you’re developing and not releasing. For me, this was located in C:\documents and settings\erik\.android on this Win XP machine (YMMV with the directory)

  3. What you’ve done here is generated a fingerprint for the developer debug keystore that Eclipse automatically uses. This is fine until you want to deploy the app to an app store, at which time you’ll have to jump through a few more hoops.
  4. Take the key that this spits out and copy it (right click and select “mark” in cmd window), and paste it into the “my certificate’s MD5 fingerprint” text box here: http://code.google.com/android/maps-api-signup.html
  5. This will give you your API key, along with an example layout XML to use in your Android map project
  6. Copy this into your project’s layout, following the guide for naming the attribute that contains the key. (That is, find your layout’s com.google.android.maps.MapView node and set its android:apiKey attribute to the same as the one on the page you’re looking at.)
  7. Once you’ve got this, you can paste it into your map layout, run your app (phone or emulator) and get ready to start navigating to your heart’s content

After I went through all this, I found this clearly superior tutorial: http://mobiforge.com/developing/story/using-google-maps-android. This is really great for getting started, complete with code samples and starting as simple as possible.


Redirect Back with Spring 3.0 MVC

As I’m getting back into Java and particularly Spring MVC, it’s a bit of an adjustment period, as it always is when you’re dusting off old skills or trying something new. Things that should be easy are maddeningly beyond your reach. In light of that, I’m going to document a series of small milestones as I happen on them, in the hopes that someone else in my position or a complete newbie might find it useful.

So, today, I’m going to talk about processing a GET request without leaving the page. The reason I wanted to do this is that I have a page representing my house’s kitchen. The page has two buttons (really links styled as buttons through CSS) representing lights on and off. I’m providing a restful URL scheme to allow them to be turned on and off: kitchen/on and kitchen/off will turn the lights on and off, respectively. However, when this happens, I don’t have some kitchen/off.jsp page that I want handling this. I want them redirected right back to the kitchen page for further manipulation, if need be.

Here is how this was accomplished. Pay special attention to the kitchen() method taking the variable name and the request as parameters:

@Controller
@RequestMapping("/kitchen")
public class KitchenController {

	@RequestMapping("/kitchen")
    public ModelAndView kitchen() {
    	String message = "Welcome to Daedalus";
        return new ModelAndView("kitchen", "message", message);
    }
	
	/*
	 * This takes kitchen and some request after it and satisfies the request
	 */
	@RequestMapping(value="/{name}", method = RequestMethod.GET)
	public String kitchen(@PathVariable String name, HttpServletRequest request) {
		try {
			Runtime.getRuntime().exec("/usr/local/heyu-2.6.0/heyu " + name + " A2");
		} catch (IOException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}
		return "redirect:" + request.getHeader("Referer");
	}
}

So, the idea here is that I’m returning a redirect to the referrer. The basic logic flow is that the client sends an HTTP GET request by clicking on the link; we process the request, take appropriate action, and then redirect back to where the client started.

This certainly may not be the best way to accomplish what I’m doing, but it’s a way. And, my general operating philosophy is to get things working immediately, and then refactor and/or optimize from there. So, if you know of a better way to do it, by all means, comment. Otherwise, I’ll probably figure one out on my own sooner or later.


Adding a Switch to Your Home Automation

Last time in this series, I covered turning a lamp on and off with a keychain. Today, I’m going to up the ante and describe how to control an overhead light with your same keychain. This will be accomplished by replacing the switch for the light that you want to control with a new switch that you’re going to order.

The benefits include not only the ability to remotely control the light itself, but also adding dimming capabilities to previously non-dimmable lights. In addition, their “soft” on and off behavior dramatically increases the lifespan of incandescent bulbs.

Prerequisites

A few things to check off your list before getting started:

  1. The light to be controlled must be an incandescent light of at least 40 watts. Do not use this on fluorescents, halogens, etc.!
  2. You must have the transceiver from the last post or this will not work.
  3. Needle nose pliers, screw driver(s), wire nuts, wire strippers/cutters, voltage tester.

Also, and perhaps most importantly, you’ll need some degree of comfort changing out a wall socket. I assume absolutely no responsibility for what you do here. Working with home electricity can be very dangerous, and if you are the least bit uncomfortable doing this, I recommend hiring an electrician to wire up the outlet.

Making the purchase

The first thing that you’re going to need to do is order your part. You’re going to want to order the X10 Wall Switch Module (WS467). The price on this varies, but you can probably get it for anywhere from $12 to $20. It is pictured here:

I recommend this one because it will fit into the same space as most existing standard toggle switches. If you are looking to replace a rocker switch, you can order WS12A, which tends to be a bit more pricey, but such is the cost of elegance.

Installation

The unit itself will come with instructions, but here is my description from experience:

  1. First, remove the wall plate from the box just to peer in and make sure there isn’t some kind of crazy wiring scenario going on that may make you want to abort the mission and/or call in an electrician. This might include lots of different wires using this switch as a stop along the way to other boxes or outlets in the house, for instance. Be careful while you do this, as your electricity is still live.
  2. Once you’re satisfied that the mission looks feasible, head to your circuit breaker and kill the power. Make sure power is off by using your voltage tester with one lead on the hot wire and the other on ground somewhere (if you didn’t figure out which was hot and which was ground, try both leads on the switch).
  3. Once you’re doubly sure power is no longer flowing, pop the screws on the switch itself that are holding it to the junction box and remove the switch.
  4. Your old switch should have two leads, each with one or more wires connected to it. At this point, you can basically just swap the old wire-up for the new module’s wire-up; the colors don’t matter (though I’ve adopted a convention with these guys of wiring black to hot). You’re going to need your needle nose pliers and wire nuts because, unlike most existing switches, the X10 modules have wires instead of terminal receptacles.
  5. Once you’ve twisted up the wires and made sure no bare wires are showing anywhere, settle the switch into the junction box and screw it back in. You’re now going to restore power at the breaker box.
  6. When power is back on, verify that pushing the button on the switch turns lights on and pushing it again turns them off. This switch will always be manually operable. Also, don’t be surprised that the lights come on more gradually than you’re used to (it may seem weird at first, but I love this, personally).
  7. Once everything is working properly, you can replace the wall plate. However, in continuation from our first post, what you’re going to want to do now is match the dials on the switch to one of the remote settings, in terms of house code and unit code, before you replace the wall plate. You can always change this later, but you might as well do it now while it’s already off.

Operation

Now, you’ve got it installed, and it should work manually or from your keychain remote. Generally speaking, wall switches are always operational both manually and remotely. These are not rocker/toggle switches but pushbutton switches, so they have no concept of state. Each time you press the button either on the switch or from the remote, the command boils down to “do the opposite of what you’re currently doing.” That is, there’s no concept of manual or remote push overriding the other — it’s just flipping the light from whatever state it was last in.

In addition to this functionality, you also get dimming functionality. If you hold the button in instead of simply pushing it, the light’s brightness will change. Holding it in when the light is off causes it to ramp up from dim to bright, and the opposite is true when the light is on – it will ramp down. On top of this, the light will ‘remember’ its last brightness setting. So, if I turn the light on by gradually bringing it up from dark and then turn it off, the next time I turn it on, it will ‘remember’ its previous brightness.
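To pin down the behavior described above, here is a toy model of the switch’s statelessness and brightness memory. This is purely illustrative Java, not real X10 code, and the class and method names are my own:

```java
// Toy model of the WS467's semantics: a press means "do the opposite of
// whatever you're doing"; brightness is remembered across off/on.
class DimmerSwitchModel {
    private boolean on = false;
    private int brightness = 100; // percent; survives being switched off

    // Short press, manual or remote: toggle, restoring the last brightness.
    void press() {
        on = !on;
    }

    // Hold: ramp up from dim if the light was off, ramp down if it was on.
    void hold(int percentSteps) {
        if (!on) {
            on = true;
            brightness = Math.min(100, percentSteps);
        } else {
            brightness = Math.max(1, brightness - percentSteps);
        }
    }

    boolean isOn() { return on; }
    int brightness() { return brightness; }
}
```

Note that there is no “manual” versus “remote” input in the model: both paths funnel into the same press(), which is exactly why neither one can override the other.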

One final note is that there is a manual slide below the main switch. This is, in effect, a kill switch. If you push that, you will completely break the circuit and no amount of remote presses or X10 signals can activate it in this state. This is the equivalent of unplugging your transceiver module from the last post. If power is cut, the signals you send don’t matter.

Recap

So, now you’ve got a remote control capable of controlling two lights in the house. One is a lamp and the other an overhead light that can now be dimmed and brightened manually, as well as turned on and off remotely. (Next time I install a switch, I will take some pictures of what I’m doing and add them to the instructions above).


Composite

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy

Up Front Definitions

  1. Component: This is the abstract object that represents any and all members of the pattern
  2. Composite: Derives from component and provides the definition for the class that contains other components
  3. Leaf: A concrete node that encapsulates data/behavior and is not a container for other components.
  4. Tree: For our context here, tree describes the structure of all of the components.

The Problem

Let’s say that you get a call to write a little console application that allows users to make directories, create empty files, and delete the same. The idea is that you can create the whole structure in memory and then dump it to disk in one fell swoop. Marketing thinks this will be a hot seller because it will allow users to avoid the annoyance of disk I/O overhead when planning out a folder structure (you might want to dust off your resume, by the way).

Simple enough, you say. For the first project sprint, you’re not going to worry about sub-directories – you’re just going to create something that will handle files and some parent directory.

So, you create the following class (you probably create it with actual input validation, but I’m trying to stay focused on the design pattern):

public class DirectoryStructure
{
    private const string DefaultDirectory = ".";

    private readonly List<string> _filenames = new List<string>();

    private string _directory = DefaultDirectory;

    public string RootDirectory { get { return _directory; } set { _directory = string.IsNullOrEmpty(value) ? DefaultDirectory : value; } }

    public void AddFile(string filename)
    {
        _filenames.Add(filename);
        _filenames.Sort();
    }

    public void DeleteFile(string filename)
    {
        if (_filenames.Contains(filename))
        {
            _filenames.Remove(filename);
        }
    }

    public void PrintStructure()
    {
        Console.WriteLine("Starting in " + RootDirectory);
        _filenames.ForEach(filename => Console.WriteLine(filename));
    }

    public void CreateOnDisk()
    {
        if (!Directory.Exists(RootDirectory))
        {
            Directory.CreateDirectory(RootDirectory);
        }

        _filenames.ForEach(filename => File.Create(Path.Combine(RootDirectory, filename)));
    }
}

Client code is as follows:

static void Main(string[] args)
{
    var myStructure = new DirectoryStructure();

    myStructure.AddFile("file1.txt");
    myStructure.AddFile("file2.txt");
    myStructure.AddFile("file3.txt");
    myStructure.DeleteFile("file2.txt");

    myStructure.PrintStructure();

    Console.Read();

    myStructure.CreateOnDisk();
}

That’s all well and good, so you ship and start work on sprint number 2, which includes the story that users want to be able to create one level of sub-directories. You do a bit of refactoring, deciding to make root a constructor parameter instead of a settable property, and then get to work on this new story.

You wind up with the following:

public class DirectoryStructure
{
    private const string DefaultDirectory = ".";

    private readonly List<string> _filenames = new List<string>();
    private readonly List<string> _directories = new List<string>();
    private readonly Dictionary<string, List<string>> _subDirectoryFilenames = new Dictionary<string, List<string>>();

    private readonly string _root;

    public DirectoryStructure(string root = null)
    {
        _root = string.IsNullOrEmpty(root) ? DefaultDirectory : root;
    }


    public void AddFile(string filename)
    {
        _filenames.Add(filename);
        _filenames.Sort();
    }

    public void AddFile(string subDirectory, string filename)
    {
        if (!_directories.Contains(subDirectory))
        {
            AddDirectory(subDirectory);
        }
        _subDirectoryFilenames[subDirectory].Add(filename);
    }

    public void AddDirectory(string directoryName)
    {
        _directories.Add(directoryName);
        _subDirectoryFilenames[directoryName] = new List<string>();
    }

    public void DeleteDirectory(string directoryName)
    {
        if (_directories.Contains(directoryName))
        {
            _directories.Remove(directoryName);
            _subDirectoryFilenames.Remove(directoryName);
        }
    }

    public void DeleteFile(string filename)
    {
        if (_filenames.Contains(filename))
        {
            _filenames.Remove(filename);
        }
    }

    public void DeleteFile(string directoryName, string filename)
    {
        if (_directories.Contains(directoryName) && _subDirectoryFilenames[directoryName].Contains(filename))
        {
            _subDirectoryFilenames[directoryName].Remove(filename);
        }
    }

    public void PrintStructure()
    {
        Console.WriteLine("Starting in " + _root);
        foreach (var myDir in _directories)
        {
            Console.WriteLine(myDir);
            _subDirectoryFilenames[myDir].ForEach(filename => Console.WriteLine("\t" + filename));
        }
        _filenames.ForEach(filename => Console.WriteLine(filename));
    }

    public void CreateOnDisk()
    {
        if (!Directory.Exists(_root))
        {
            Directory.CreateDirectory(_root);
        }

        foreach (var myDir in _directories)
        {
            Directory.CreateDirectory(Path.Combine(_root, myDir));
            _subDirectoryFilenames[myDir].ForEach(filename => File.Create(Path.Combine(_root, myDir, filename)));
        }
        _filenames.ForEach(filename => File.Create(Path.Combine(_root, filename)));
    }
}

and client code:

static void Main(string[] args)
{
    var myStructure = new DirectoryStructure();

    myStructure.AddFile("file1.txt");
    myStructure.AddDirectory("firstdir");
    myStructure.AddFile("firstdir", "hoowa");
    myStructure.AddDirectory("seconddir");
    myStructure.AddFile("seconddir", "hoowa");
    myStructure.DeleteDirectory("seconddir");
    myStructure.AddFile("file2.txt");
    myStructure.AddFile("file3.txt");
    myStructure.DeleteFile("file2.txt");

    myStructure.PrintStructure();

    Console.Read();

    myStructure.CreateOnDisk();
}

Yikes. That’s starting to smell. You’ve had to add overloads for adding and deleting files, add methods for adding and deleting directories, and append logic to print and create. Basically, you’ve had to either touch or overload every method in the class. Generally, that’s a surefire sign that you’re doin’ it wrong. But, no time for that now because here comes the third sprint. This time, the business wants two levels of nesting. So, you get started and you see just how ugly things are going to get. I won’t provide all of your code here, so that the blog can keep a PG rating, but here’s the first awful thing that you had to do:

private readonly List<string> _filenames = new List<string>();
private readonly List<string> _directories = new List<string>();
private readonly Dictionary<string, List<string>> _subDirectoryFilenames = new Dictionary<string, List<string>>();
private readonly Dictionary<string, List<string>> _subDirectoryNames;
private readonly Dictionary<string, Dictionary<string, List<string>>> _subSubDirectoryFilenames = new Dictionary<string, Dictionary<string, List<string>>>();

You also had to modify or overload every method yet again, bringing the method total to 12 and ratcheting up the complexity of each. You’re pretty sure you can’t keep this up for an arbitrary number of sprints, so you send out your now-dusted-off resume and foist this stinker on the hapless person replacing you.

So, What to Do?

What went wrong here is relatively easy to trace. Let’s backtrack to the start of the second sprint, when we needed to support sub-directories. As we’ve seen in some previous posts in this series, the first foray at implementing the second sprint gives off a code smell: primitive obsession. This is a smell wherein bunches of primitive types (string, int, etc.) are pressed into ad-hoc service to stand in for a missing class.

In this case, the smell is coming from the series of lists and dictionaries centering around strings to represent file and directory names. As the sprints go on, it’s time to recognize that there is a need for at least one class here, so let’s create it and call it “SpeculativeDirectory” (so as not to confuse it with the C# Directory class).

public class SpeculativeDirectory
{
    private const string DefaultDirectory = ".";

    private readonly HashSet<SpeculativeDirectory> _subDirectories = new HashSet<SpeculativeDirectory>();

    private readonly HashSet<string> _files = new HashSet<string>();

    private readonly string _name = string.Empty;
    public string Name { get { return _name; } }

    public SpeculativeDirectory(string name)
    {
        _name = string.IsNullOrEmpty(name) ? DefaultDirectory : name;
    }

    public SpeculativeDirectory GetDirectory(string directoryName)
    {
        return _subDirectories.FirstOrDefault(dir => dir.Name == directoryName);
    }

    public string GetFile(string filename)
    {
        return _files.FirstOrDefault(file => file == filename);
    }

    public void Add(string file)
    {
        if(!string.IsNullOrEmpty(file))
            _files.Add(file);
    }

    public void Add(SpeculativeDirectory directory)
    {
        if (directory != null && !string.IsNullOrEmpty(directory.Name))
        {
            _subDirectories.Add(directory);
        }
    }

    public void Delete(string file)
    {
        _files.Remove(file);
    }

    public void Delete(SpeculativeDirectory directory)
    {
        _subDirectories.Remove(directory);
    }

    public void PrintStructure(int depth)
    {
        string myTabs = new string('\t', depth);
        Console.WriteLine(myTabs + Name);

        foreach (var myDir in _subDirectories)
        {
            myDir.PrintStructure(depth + 1);
        }
        foreach (var myFile in _files)
        {
            Console.WriteLine(myTabs + "\t" + myFile);
        }
    }

    public void CreateOnDisk(string path)
    {
        string myPath = Path.Combine(path, Name);

        if (!Directory.Exists(myPath))
        {
            Directory.CreateDirectory(myPath);
        }

        _files.ToList().ForEach(file => File.Create(Path.Combine(myPath, file)));
        _subDirectories.ToList().ForEach(dir => dir.CreateOnDisk(myPath));
    }

}

And, the DirectoryStructure class is now:

public class DirectoryStructure
{
    private readonly SpeculativeDirectory _root;

    public DirectoryStructure(string root = null)
    {
        _root = new SpeculativeDirectory(root);
    }


    public void AddFile(string filename)
    {
        _root.Add(filename);
    }

    public void AddFile(string directoryName, string filename)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            myDirectory.Add(filename);
        }
    }

    public void AddDirectory(string directoryName)
    {
        _root.Add(new SpeculativeDirectory(directoryName));
    }

    public void DeleteFile(string filename)
    {
        _root.Delete(filename);
    }

    public void DeleteFile(string directoryName, string filename)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            myDirectory.Delete(filename);
        }
    }

    public void DeleteDirectory(string directoryName)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            _root.Delete(myDirectory);
        }
    }

    public void PrintStructure()
    {
        _root.PrintStructure(0);
    }

    public void CreateOnDisk()
    {
        _root.CreateOnDisk(string.Empty);
    }
}

The main program that invokes this class is unchanged. So, now, notice the structure of SpeculativeDirectory. It contains a collection of strings, representing files, and a collection of SpeculativeDirectory objects, representing sub-directories. For things like PrintStructure() and CreateOnDisk(), notice that we’re now taking advantage of recursion.

This is extremely important because what we’ve done here is future proofed for sprint 3 much better than before. It’s still going to be ugly and involve more and more overloads, but at least it won’t require defining increasingly nested (and insane) dictionaries in the DirectoryStructure class.

Speaking of DirectoryStructure, does this class serve a purpose anymore? Notice that the answer is “no, not really”. It basically defines a root directory and wraps its operations. So, let’s get rid of that before we do anything else.

To do that, we can just change the client code to the following and delete DirectoryStructure:

static void Main(string[] args)
{
    var myDirectory = new SpeculativeDirectory(".");
    myDirectory.Add("file1.txt");
    myDirectory.Add(new SpeculativeDirectory("firstdir"));
    myDirectory.GetDirectory("firstdir").Add("hoowa");
    var mySecondDir = new SpeculativeDirectory("seconddir");
    myDirectory.Add(mySecondDir);
    myDirectory.GetDirectory("seconddir").Add("hoowa");
    myDirectory.Delete(mySecondDir);
    myDirectory.Add("file2.txt");
    myDirectory.Add("file3.txt");
    myDirectory.Delete("file2.txt");

    myDirectory.PrintStructure(0);

    Console.Read();

    myDirectory.CreateOnDisk(".");

}

Now, we’re directly using the directory object and we’ve removed a class in addition to cleaning things up. The API still isn’t perfect, but we’re gaining some ground. So, let’s turn our attention now to cleaning up SpeculativeDirectory. Notice that we have a bunch of method pairs: GetDirectory/GetFile, Add(Directory)/Add(string), Delete(Directory)/Delete(string). This kind of duplication is a code smell — we’re begging for polymorphism here.

Notice that we are performing operations routinely on SpeculativeDirectory and performing the same operations on the string representing a file. It is worth noting that if we had a structure where file and directory inherited from a common base or implemented a common interface, we could perform operations on them just once. And, as it turns out, this is the crux of the composite pattern.

Let’s see how that looks. First, we’ll define a SpeculativeFile object:

public class SpeculativeFile
{
    private readonly string _name;
    public string Name { get { return _name; } }

    public SpeculativeFile(string name)
    {
        _name = name ?? string.Empty;
    }

    public void Print(int depth)
    {
        string myTabs = new string('\t', depth);
        Console.WriteLine(myTabs + Name);
    }

    public void CreateOnDisk(string path)
    {
        File.Create(Path.Combine(path, _name));
    }
}

This is pretty simple and straightforward. The file class knows how to print itself and how to create itself on disk, and it knows that it has a name. Now our task is to have a common inheritance model for file and directory. We’ll go with an abstract base class since they are going to have common implementations and file won’t have an implementation, per se, for add and delete. Here is the common base:

public abstract class SpeculativeComponent
{
    private readonly string _name;
    public string Name { get { return _name; } }

    private readonly HashSet<SpeculativeComponent> _children = new HashSet<SpeculativeComponent>();
    protected HashSet<SpeculativeComponent> Children { get { return _children; } }

    protected SpeculativeComponent(string name)
    {
        _name = name ?? string.Empty;
    }

    public virtual SpeculativeComponent GetChild(string name) { return null; }

    public virtual void Add(SpeculativeComponent component) { }

    public virtual void DeleteByName(string name) { }

    public void Print()
    {
        Print(0);
    }

    public void CreateOnDisk()
    {
        CreateOnDisk(string.Empty);
    }

    protected virtual void Print(int depth)
    {
        string myTabs = new string('\t', depth);
        Console.WriteLine(myTabs + Name);

        foreach (SpeculativeComponent myChild in _children)
        {
            myChild.Print(depth + 1);
        }
    }

    protected virtual void CreateOnDisk(string path)
    {
        foreach (var myChild in _children)
        {
            myChild.CreateOnDisk(Path.Combine(path, Name));
        }
    }
        
}

A few things to note here. First of all, our recursive Print() and CreateOnDisk() methods are divided into two methods each, one public and one protected. This continues to allow for recursive calls, but without awkwardly forcing the user to pass in zero or empty or whatever for path/depth. Notice also that common concerns for the two different types of nodes (file and directory) now live here, some stubbed as do-nothing virtuals and others implemented. The reason for this is conformance to the pattern — while files and directories share some overlap, some operations are clearly not meaningful for both (particularly adding/deleting and anything else regarding children). So, you do tend to wind up with the leaves (SpeculativeFile) ignoring inherited functionality, but this is generally a small price to pay for avoiding duplication and gaining the ability to recurse to n levels.

With this base class, we have pulled a good bit of functionality out of the file class, which is now this:

public class SpeculativeFile : SpeculativeComponent
{
    public SpeculativeFile(string name) : base(name) {}

    protected override void CreateOnDisk(string path)
    {
        File.Create(Path.Combine(path, Name));
        base.CreateOnDisk(path);
    }
}

Pretty simple. With this new base class, here is the new SpeculativeDirectory class:

public class SpeculativeDirectory : SpeculativeComponent
{
    public SpeculativeDirectory(string name) : base(name) { }

    public override SpeculativeComponent GetChild(string name)
    {
        return Children.FirstOrDefault(child => child.Name == name);
    }

    public override void Add(SpeculativeComponent child)
    {
        if(child != null)
            Children.Add(child);
    }

    public override void DeleteByName(string name)
    {
        var myMatchingChild = Children.FirstOrDefault(child => child.Name == name);
        if (myMatchingChild != null)
        {
            Children.Remove(myMatchingChild);
        }
    }

    protected override void CreateOnDisk(string path)
    {
        string myPath = Path.Combine(path, Name);
        if (!Directory.Exists(myPath))
        {
            Directory.CreateDirectory(myPath);
        }

        base.CreateOnDisk(path);
    }
}

Wow. A lot more focused and easy to reason about, huh? And, finally, here is the new API:

static void Main(string[] args)
{
    var myDirectory = new SpeculativeDirectory(".");
    myDirectory.Add(new SpeculativeFile("file1.txt"));
    myDirectory.Add(new SpeculativeDirectory("firstdir"));
    myDirectory.GetChild("firstdir").Add(new SpeculativeFile("hoowa"));
    myDirectory.Add(new SpeculativeDirectory("seconddir"));
    myDirectory.GetChild("seconddir").Add(new SpeculativeFile("hoowa"));
    myDirectory.DeleteByName("seconddir");
    myDirectory.Add(new SpeculativeFile("file2.txt"));
    myDirectory.Add(new SpeculativeFile("file3.txt"));
    myDirectory.DeleteByName("file2.txt");

    myDirectory.Print();

    Console.Read();

    myDirectory.CreateOnDisk();

}

Even the API has improved since our start. We’re no longer creating this unnatural “structure” object. Now, we just create a root directory and add things to it with simple API calls, in kind of a fluent interface.

Now, bear in mind that this is not as robust as it could be, but hardening it is what you’ll do in sprint 3, since your sprint 2 wound up implementing sub-directories to N depths of nesting rather than just one. :)

A More Official Explanation

According to dofactory, the Composite Pattern’s purpose is to:

Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.

What we’ve accomplished in rescuing the speculative directory creation app is to allow the main program to perform operations on nodes in a directory tree without caring whether they are files or directories (with the exception of actually creating them). This is most evident in the printing and writing to disk. Whether we created a single file or an entire directory hierarchy, we would treat it the same for creating on disk and for printing.

The elegant concept here is that we can build arbitrarily large structures with arbitrary conceptual tree structures and do things to them uniformly. This is important because it allows the encapsulation of tree node behaviors within the objects themselves. There is no master object like the original DirectoryStructure that has to walk the tree, deciding how to treat each element. Any given node in the tree knows how to treat itself and its sub-elements.

Other Quick Examples

Other places where one might use the composite pattern include:

  1. GUI composition, where GUI widgets can be actual widgets or widget containers (Java Swing, WPF/XAML, etc.).
  2. Complex Chain of Responsibility structures, where some nodes handle events and others simply figure out whom to hand them to.
  3. A menu structure, where nodes can be either actions or sub-menus.
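As a sketch of that last example (all class names here are invented for illustration, not taken from any real framework), a menu structure in the same composite style might look like:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: leaves are actions, composites are sub-menus, and
// clients treat both uniformly through the common base class.
public abstract class MenuComponent
{
    private readonly string _label;
    public string Label { get { return _label; } }

    protected MenuComponent(string label)
    {
        _label = label ?? string.Empty;
    }

    public abstract void Print(int depth);
}

public class MenuAction : MenuComponent
{
    private readonly Action _action;

    public MenuAction(string label, Action action) : base(label)
    {
        _action = action;
    }

    public void Invoke()
    {
        _action();
    }

    public override void Print(int depth)
    {
        Console.WriteLine(new string('\t', depth) + Label);
    }
}

public class SubMenu : MenuComponent
{
    private readonly List<MenuComponent> _items = new List<MenuComponent>();

    public SubMenu(string label) : base(label) { }

    public void Add(MenuComponent item)
    {
        if (item != null)
        {
            _items.Add(item);
        }
    }

    public override void Print(int depth)
    {
        Console.WriteLine(new string('\t', depth) + Label);
        foreach (var myItem in _items)
        {
            myItem.Print(depth + 1);  // recursion, just like SpeculativeComponent
        }
    }
}
```

Printing the whole menu is just root.Print(0); the client never asks whether an entry is an action or a sub-menu.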

A Good Fit – When to Use

The pattern is useful when (1) you want to represent an object hierarchy and (2) there is some context in which you want to be able to ignore differences between the objects and treat them uniformly as a client. This is true in our example here in the context of printing the structure and writing it to disk. The client doesn’t care whether something is a file or a directory – he just wants to be able to iterate over the whole tree performing some operation.

Generally speaking, this is good to use anytime you find yourself looking at a conceptual tree-like structure and iterating over the whole thing in the context of a control flow statement (if or case). In our example, this was achieved indirectly by different method overloads, but the result in the end would have been the same if we had been looking at a single method with a bit of logic saying “if node is a file, do x, otherwise, do y”.
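As a hypothetical sketch of that smell (the Node and TreeWalker names are invented for illustration), the control-flow version looks something like this, with a master walker branching on a type flag rather than letting nodes handle themselves:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the smell: one central method walks the tree and
// branches on what each node is, instead of nodes treating themselves.
public class Node
{
    public string Name;
    public bool IsFile;                       // the telltale type flag
    public List<Node> Children = new List<Node>();
}

public static class TreeWalker
{
    public static void Print(Node node, int depth)
    {
        Console.WriteLine(new string('\t', depth) + node.Name);
        if (!node.IsFile)                     // "if node is a file, do x, otherwise, do y"
        {
            foreach (var myChild in node.Children)
            {
                Print(myChild, depth + 1);
            }
        }
    }
}
```

Every new node type or operation means revisiting this branching logic, which is exactly what the composite version avoids.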

Square Peg, Round Hole – When Not to Use

There are some subtle considerations here. You don’t want to use this if all of your nodes can be the same type of object, as in the case of a simple tree structure. If you were, for example, creating a sorted tree for fast lookup, where each node had some children and a payload, composite would not be an appropriate choice.

Another time you wouldn’t want to use this is if a tree structure is not appropriate for representing what you want to do. If our example had not had any concept of recursion and were only for representing a single directory, this would not have been appropriate.

So What? Why is this Better?

The code cleanup here speaks for itself. We were able to eliminate a bunch of method overloads (or conditional branching, if we had gone that way), making the code more maintainable. In addition, it allows elimination of a structure that rots as it grows — right out of the box, the composite pattern with its tree structure handles arbitrarily deep and wide hierarchies. And, finally, it allows clients to walk the tree structure without concerning themselves with what kind of node they’re processing or how to navigate to the next one.

By

Fixing the Crackle and Pop in Ubuntu Sound

So, as I blogged previously, I’ve been re-appropriating some old PCs and sticking Ubuntu 11.10 on them. I had one doing exactly what I wanted it to do in the context of my home automation network, with one small but annoying exception: whenever I would play music through headphones or speakers, there would be a crackling noise. A Google search turned up a lot of stuff that I had to weed through, but ultimately, this is what did the trick:

I opened /etc/pulse/default.pa and found the line

load-module module-udev-detect

This was located in an “ifexists” clause. It should be replaced by this:

load-module module-udev-detect tsched=0

From there, reboot. I can’t speak to whether this will fix everyone’s crackle or just some people’s, and I can’t speak to why this setting isn’t enabled by default, but c’est la vie. This is what did the trick for me, and hopefully it fixes the problem for someone else as well.

By

Writing Maintainable Code Demands Creativity

Writing maintainable code is for “Code Monkeys”?

This morning, I read an interesting blog post from Erik McClure. The premise of the post is that writing maintainable code is sufficiently boring and frustrating as to discourage people from the programming profession. A few excerpts from the post are:

There is an endless stream of rheteric [sic] discussing how dangerous writing clever code is, and how good programmers don’t write clever code, or fast code, but maintainable code, that can’t have clever for loop statements or fancy tricks. This is wrong – good codemonkeys write maintainable code.

and

What I noticed was that it only became intolerable when I was writing horrendously boring, maintainable code (like unit tests). When I was programming things I wasn’t supposed to be programming, and solving problems in ways I’m not supposed to, my creativity had found its outlet. Programming can not only be fun, but real programming is, itself, an art, a solution to a problem that itself embodies a certain elegance. Society in general seems to be forgetting what makes us human in the wake of a digital revolution that automatizes menial tasks. Your creativity is the most valuable thing you have.

When you write a function that does things the right way, when you refactor a clever subroutine to something that conforms to standards, you are tearing the soul out of your code. Bit by bit, you forget who you are and what the code means. The standards may enforce maintainable and well-behaved code, but it does so at the cost of your individuality. Coding becomes middle-school math, where you have to solve the same, reworded problem a hundred times to get a good grade so you can go do something else that’s actually useful. It becomes a means to an end, not an adventure in and of itself.

Conformity for the Sake of Maintainability

There seem to be a few different themes here. The first one I see is one with which I have struggled myself in the past: chafing at being forced to change the way you do things to conform to a group convention. I touched on that here and here. The reason that I mention this is the references to “[conforming] to standards” and the apparent justification of those standards being that they make code “maintainable”. The realpolitik of this situation is such that it doesn’t really matter what justification is cited (appeal to maintainability, appeal to convention, appeal to anonymous authority, appeal to named authority, appeal to threat, etc). In the end, it boils down to “because I said so”. I mention this only insofar as I will dismiss this theme as not having much to do with maintainability itself. Whatever Erik was forced to do may or may not have actually had any bearing whatsoever on the maintainability of the code (i.e. “maintainability” could have been code for “I just don’t like what you did, but I should have an official sounding reason”).

Maintainable Code is Boring Code

So, on to the second theme, which is that writing maintainable code is boring. In particular, Erik mentions unit tests, but I’d hazard a guess that he might also be talking about small methods, SRP classes, and other clean coding principles. And, I actually agree with him to an extent. Writing code like that is uneventful in some ways that people might not be used to.

That is, say that you don’t perform unit tests, and you write large, coupled, complex classes and methods. When you fire up your application for the first time after a few hours of coding, that’s pretty exciting. You have no idea what it’s going to do, though the smart money is on “crash badly”. But, if it doesn’t, and it runs, that’s a heady feeling and a rush — like winning $100 in the lottery. The work is also interesting because you’re going to be spending lots of time in the debugger, writing bunches of local variables down on paper to keep them straight. Keeping track of all of the strands of your code requires full concentration, and there’s a feeling of incredible accomplishment when you finally track down that needle-in-a-haystack bug after an all-nighter.

On the flip side, someone who writes a lot of tests and conforms to the clean code/craftsmanship mantra has a less exciting life. If you truly practice TDD, the first time you fire up the application, you already know it’s going to work. The lottery-game excitement of longshot odds with high payoff is replaced by a dependable salary. And, as for the maddening all-nighter bugs, those too are gone. You can pretty much reproduce a problem immediately, and solve it just as quickly with an additional failing test that you make pass. The underdog, down by a lot all game, followed by miracle comeback is replaced by a game where you’re winning handily from wire to wire. All of the roller-coaster highs and lows with their panicked all-nighters and miracle finishes are replaced by you coming in at 9, leaving at 5, and shipping software on time or ahead of schedule.

Making Code Maintainable Is Brainless

The third main theme that I saw was the idea that writing clever code and maintainable code is mutually exclusive, and that the latter is brainless. Presumably, this is colored to some degree by the first theme, but on its own, the implication is that maintainable code is maintainable because it is obtuse and insufficient to solve the problem. That is, instead of actually solving the problems that they’re tasked with, the maintainable-focused drones oversimplify the problem and settle for meeting most, but not all, of the requirements of the software.

I say this because of Erik’s vehement disagreement with the adage that roughly says “clever code is bad code”. I’ve seen this pithy expression explained in more detail by people like Uncle Bob (Robert Martin) and I know that it requires further explanation because it actually sounds discouraging and draconian stated simply. (Though, I think this is the intended, provocative effect to make the reader/listener demand an explanation). But, taken at face value I would agree with Erik. I don’t relish the thought of being paid a wage to keep quiet and do stupid things.

Maintainability Reconsidered

Let’s pull back for a moment and consider the nature of software and software development. In his post, Erik bleakly points out that software “automatizes[sic] menial tasks”. I agree with his take, but with a much more optimistic spin — I’m in the business of freeing people from drudgery. But, either way, there can be little debate that the vast majority of software development is about automating tasks — even game development, which could be said to automate pretend playground games, philosophically (kids don’t need to play “cops and robbers” on the playground using sticks and other makeshift toys when games about detectives and mobsters automate this for them).

And, as we automate tasks, what we’re doing is taking tasks that have some degree of intrinsic complexity and falling on the grenade of that complexity so that completing the task is simple, intuitive, and even pleasant (enter user experience and graphic design) for prospective users. So, as developers, we deal with complexity so that end users don’t have to. We take complicated processes and model them, and simplify them without oversimplifying them. This is at the heart of software development and it’s a task so complex that all manner of methodologies, philosophies, technologies, and frameworks have been invented in an attempt to get it right. We make the complex simple for our non-technical end users.

Back in the days when software solved simpler problems than it does now, things were pretty straightforward. There were developers and there were users. Users didn’t care what the code looked like internally, and developers generally operated alone or perhaps in small teams for any given task. In this day and age, end-users still don’t care what the code looks like, but development teams are large, sometimes enormous, and often distributed geographically, temporally, and according to specialty. You no longer have a couple of people that bang out all necessary code for a project. You have library writers, database people, application coders, graphic designers, maintenance programmers etc.

With this complexity, an interesting paradigm has emerged. End-users are further removed, and you have other, technical users as well. If you’re writing a library or an IDE plugin, your users are other programmers. If you’re writing any code, the maintenance programmer that will come along later is one of your users. If you’re an application developer, a graphic designer is one of your users. Sure, there are still end-users, but there are more stakeholders now to consider than there used to be.

In light of this development, writing code that is hard to maintain and declaring that this is just how you do things is a lot like writing a piece of code with an awful user interface and saying to your users “what do you want from me — it works, doesn’t it?” You’re correct, and you’re destined to go out of business. If I have a programmer on my team who consistently and proudly writes code that only he understands and only he can decipher, I’m hoping that he’s on another team as soon as possible. Because the fact of the matter is that anybody can write code that meets the requirements, but only a creative, intelligent person can do it in such a way that it’s quickly understood by others without compromising the correctness and performance of the solution.

Creativity, Cleverness and Maintainability

Let’s say that I’m working and I find code that sorts elements in a list using bubble sort, which is conceptually quite simple. I decide that I want to optimize, so I implement quick sort, which is more complex. One might argue that I’m being much more clever, because quick sort is a more elegant solution. But, quicksort is harder for a maintenance programmer to grasp. So, is the solution to leave bubble sort in place for maintainability? Clearly not, and if someone told Erik to do that, I understand and empathize with his bleak outlook. But then, the solution also isn’t just to slap quicksort in and call it a day either. The solution is to take the initial implementation and break it out into methods that wrap the various control structures and have descriptive names. The solution is to eliminate one method with a bunch of incrementers and decrementers in favor of several with more clearly defined scope and purpose. The solution is, in essence, to teach the maintenance programmer quicksort by making your quicksort code so obvious and so readable that even the daft could grasp it.
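To make that concrete, here is a sketch of what such a quicksort might look like (the helper names are my own invention, one way among many to slice it):

```csharp
using System;
using System.Collections.Generic;

// Sketch: quicksort with the partitioning steps factored into descriptively
// named helpers, so a maintenance programmer can follow the algorithm's intent.
public static class ReadableQuickSort
{
    public static void Sort(List<int> items)
    {
        SortRange(items, 0, items.Count - 1);
    }

    private static void SortRange(List<int> items, int first, int last)
    {
        if (first >= last)
        {
            return;
        }
        int pivotIndex = PartitionAroundPivot(items, first, last);
        SortRange(items, first, pivotIndex - 1);
        SortRange(items, pivotIndex + 1, last);
    }

    private static int PartitionAroundPivot(List<int> items, int first, int last)
    {
        int pivot = items[last];
        int boundary = first;      // everything left of boundary is < pivot
        for (int current = first; current < last; current++)
        {
            if (items[current] < pivot)
            {
                Swap(items, current, boundary);
                boundary++;
            }
        }
        Swap(items, boundary, last);
        return boundary;
    }

    private static void Swap(List<int> items, int left, int right)
    {
        int temp = items[left];
        items[left] = items[right];
        items[right] = temp;
    }
}
```

The control structures haven’t gone anywhere; they’ve just been given names that teach the algorithm as you read it.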

That is not easy to do. It requires creativity. It requires knowing the subject matter not just well enough to get it right through trial and error, not just well enough to know it cold and not just well enough to explain it to the brightest pupil, but well enough that your explanation shines through the code and is unmistakable. In other words, it requires creativity, mastery, intelligence, and clarity of thought.

And, when you do things this way, unit tests and other ‘boring’ code become less boring. They let you radically alter the internal mechanism of an algorithm without changing its correctness. They let you conduct time trials on the various components as you go to ensure that you’re not sacrificing performance. And, they document further how to use your code and clarify its purpose. They’re no longer superfluous impositions but tools in your arsenal for solving problems and excelling at what you do. With a suite of unit tests and refactored code, you’re able to go from bubble sort to quicksort knowing that you’ll get immediate feedback if something goes wrong, allowing you to focus exclusively on a slick algorithm. They’ll even allow you to go off tilting at tantalizing windmills like an unprecedented linear-time sorting algorithm — hey, if the tests run in N time and all go green, you’re up for some awards and speaking engagements. For all that cleverness, you ought to get something.
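As a concrete sketch of that safety net, the checks below assert the contract of sorting (output is ordered and contains the same elements) without caring how sort is implemented. The sort method here is a hypothetical stand-in that delegates to the library; imagine swapping bubble sort for quicksort behind it and simply re-running:

```java
// Framework-free sketch of tests that pin down *what* sorting means,
// freeing you to change *how* it happens.
import java.util.Arrays;
import java.util.Random;

public class SortContractCheck {

    // Stand-in implementation; swap bubble sort for quicksort here
    // and the checks below don't change.
    static int[] sort(int[] input) {
        int[] copy = Arrays.copyOf(input, input.length);
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed for repeatability
        for (int trial = 0; trial < 100; trial++) {
            int[] input = random.ints(50, 0, 1000).toArray();
            int[] output = sort(input);
            // Contract 1: output is in non-decreasing order.
            for (int i = 1; i < output.length; i++) {
                if (output[i - 1] > output[i]) {
                    throw new AssertionError("not sorted at index " + i);
                }
            }
            // Contract 2: output is a permutation of the input.
            int[] expected = Arrays.copyOf(input, input.length);
            Arrays.sort(expected);
            if (!Arrays.equals(expected, output)) {
                throw new AssertionError("output lost or changed elements");
            }
        }
        System.out.println("all trials passed");
    }
}
```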

So Why is “Cleverness” Bad?

What do they mean that cleverness is bad, anyway? Why say something like that? The aforementioned Bob Martin, in a video presentation I once watched, said something like “you know you’re looking at good code when someone reading it is saying ‘uh-huh’, ‘uh-huh’, ‘yep’, ‘makes sense’, ‘uh-huh'”. Contrast this with code that you see where your first reaction is “What on Earth…?!?” That is often the reaction to non-functional or incorrect code, but it is just as frequently the reaction to ‘clever’ code.

The people who believe in the idea of avoiding ‘clever’ code are talking about Rube-Goldbergian code, generally employed to work around some hole in their knowledge. This might refer to someone who defines 500 functions containing 1 through 500 if statements because he isn’t aware of the existence of a “for” loop. It may be someone who defines and heavily uses something called IndexedList because he doesn’t know what an array is. It may be this, this, this or this. I’ve seen this in the wild, where I was looking at someone’s code and I said to myself, “for someone who didn’t know what a class is, this is a fairly clever oddball attempt to replicate the concept.”

The term ‘clever’ is very much tongue-in-cheek in the context of clean coding and programming wisdom. It invariably means a quixotic, inferior solution to a problem that has already been solved and whose solution is common knowledge, or it means a needlessly complex, probably flawed way of doing something new. Generally speaking, the only person who thinks that it is actually clever, sans quotes, is the person who did it and is proud of it. If someone is truly breaking new ground, that solution won’t be described as ‘clever’ in this sense, but probably as “innovative” or “ground-breaking” or “impressive”. Avoiding ‘clever’ implementations is about avoiding pointless hacks and bad reinventions of the wheel — not avoiding using one’s brain. If I were coining the phrase, I’d probably opt for “cute” over “clever”, but I’m a little late to the party. Don’t be cute — put the considerable brainpower required for elegant problem solving to problems that are worth solving and that haven’t already been solved.


Links As Buttons With CSS

I’ve recently started working on my home automation web server in earnest, and am trying to give it a nice look and feel. This is made more interesting by the fact that I’m developing a site intended specifically to be consumed by desktops, laptops, tablets and phones alike. With these constraints, it becomes important to have big, juicy click targets for users, since there’s nothing more irritating than trying to “click” tiny hyperlinks on a phone.

To do this, I decided that what I wanted was a button. But, I’m using Spring MVC, and I want to avoid form submissions for navigation, handling it through hyperlinking instead. So, after some experimentation, I came up with the following via a composite of things from other sites and my own trial-and-error tweaking:


/* This is for big content nav buttons */
a.button {
     display: inline-block; /* needed so height/width apply to an anchor */
     padding: 5px 10px;
     background: -webkit-gradient(linear, left top, left bottom, from(#CC6633), to(#CC3300));
     background: -moz-linear-gradient(top, #CC6633, #CC3300);
     background: linear-gradient(#CC6633, #CC3300); /* standard syntax */
     -moz-border-radius: 16px;
     -webkit-border-radius: 16px;
     border-radius: 16px; /* standard syntax */
     text-shadow: 1px 1px #666;
     color: #fff;
     height: 50px;
     width: 120px;
     text-decoration: none;
}

table a.button {
    display: block;  
    margin-right: 20px;  
    width: 140px;  
    font-size: 20px;  
    line-height: 44px;  
    text-align: center;  
    text-decoration: none;  
    color: #bbb;  
}

a.button:hover {
     background: #CC3300;
}
a.button:active {
     background: -webkit-gradient(linear, left top, left bottom, from(#CC3300), to(#CC6633));
     background: -moz-linear-gradient(top, #CC3300, #CC6633);
     background: linear-gradient(#CC3300, #CC6633); /* standard syntax */
}

/* These button styles are for navigation buttons - override the color scheme for normal buttons */
a.headerbutton {
    display: inline-block; /* needed so height/width apply to an anchor */
    padding: 5px 10px;
    background: -webkit-gradient(linear, left top, left bottom, from(#1FA0E0), to(#072B8A));
    background: -moz-linear-gradient(top, #1FA0E0, #072B8A);
    background: linear-gradient(#1FA0E0, #072B8A); /* standard syntax */
    -moz-border-radius: 16px;
    -webkit-border-radius: 16px;
    border-radius: 16px; /* standard syntax */
    text-shadow: 1px 1px #666;
    color: #fff;
    height: 50px;
    width: 120px;
    text-decoration: none;
}

Nothing here is really rocket science, but it’s kind of handy to see it all in one place. The main things that I wanted to note are the gradients and border radii to make it look pretty, the webkit/mozilla vendor prefixes to cover browser versions that don’t yet support the standard CSS3 syntax, and the shadowing and fixed height/width to complete the button effect.

In terms of client code, I’m invoking this from jsp pages:

<section id="content">
    <a class="button" href="hello.html">Say Hello</a>
    <a class="button" href="sample.html">Sample</a>
</section>

And, that’s about it. Probably old hat for many, but I haven’t done any web development for a while, so this was handy for me, and it’s nice to have as reference. Hopefully, it helps someone else as well.

Edit: Forgot to include the screenshot when I posted this earlier. Here’s the end result (and those are normal a tags):


TDD and CodeRush

TDD as a Practice

The essence of Test Driven Development (TDD) can be summarized most succinctly as “red, green, refactor”. Following this practice will tend to make your code more reliable, cleaner, and better designed. It is no magic bullet, but the you that practices TDD will program better than the you that doesn’t. However, a significant barrier to adoption of the practice is the (justifiable) perception that you will go slower when you start doing it. This is true in the same way that sloppy musicians can get relatively proficient while staying sloppy, but if they really want to excel, they have to slow down, perfect their technique, and gradually bring the speed back up.
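For the uninitiated, one turn of the red-green-refactor crank looks roughly like this. The Calculator class is a hypothetical example, and I’m using a bare assertion rather than a real test framework to keep it self-contained:

```java
// Red: write the failing check first (before add() exists, this won't
// even compile, which counts as red). Green: write the simplest add()
// that passes. Refactor: clean up with the check as a safety net.
public class TddCycleSketch {

    // The class under test; hypothetical example.
    static class Calculator {
        int add(int left, int right) {
            return left + right; // simplest thing that makes the check pass
        }
    }

    public static void main(String[] args) {
        Calculator calculator = new Calculator();
        if (calculator.add(2, 3) != 5) {
            throw new AssertionError("expected add(2, 3) to be 5");
        }
        System.out.println("green");
    }
}
```

Rinse and repeat: each new behavior starts life as a failing check like the one above.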

My point here is not to proselytize for TDD. I’m going to make the a priori assumption that testing is better than not testing and that TDD is better than no TDD, and leave it at that. My point is that making TDD faster and easier would grease the skids for its broader adoption and continued practice.

My Process with MS Test

I’ve been unit testing for a long time, testing with TDD for a while, and also using CodeRush for a while. CodeRush automates all sorts of refactorings and generally makes development in Visual Studio quicker and more keyboard driven. As much as I love it and stump for it, though, I had never gotten around to using its test runner until recently.

I saw the little test tube icons next to my tests and figured, “why bother with that when I can just hit Ctrl-R, T to execute my MS Test tests in scope?” But a couple of weeks ago, I tried it on a lark, and I decided to give it a fair shake. I’m very glad that I did. The differences are subtle, but powerful, and I find myself a more efficient TDD practitioner for it.

Here is a screenshot of what I would do with the Visual Studio test runner:

I would write a unit test, and hit Ctrl-R, T, and the Visual Studio test results window would pop up at the bottom of my screen (I auto-hide almost everything because I like a lot of real estate for looking at code). For the first run, the screenshot there would be replaced with a single failing test. Then, I would note the failure, open the class and make changes, switch back to the unit test file, and run all of the tests in the class, producing the screenshot above. When I saw that this had worked, I would go back to the class and look either to refactor or for another failing test to add for fleshing out my class further.

So, to recap, write a test, Ctrl-R, T, pop-up window, dismiss window, switch to another file, make changes, switch back, Ctrl-R, T, pop-up window, dismiss pop-up window. I’m very good at this in a way that one tends to be good at things through large amounts of practice, so it didn’t seem inefficient.

If you look at the test results window, it’s often awkward to find your tests if you run more than just a few. They appear in no particular order, so seeing particular ones involves sorting by one of the window’s columns. There tends to be a lot of noise this way.

Speeding It Up With CodeRush

Now, I have a new process, made possible by the way CodeRush shows test pass/fail:

Notice the little green checks on the left. In my new process, I write a test, hit Ctrl-T, R, and note the failure. I then switch to the other class and make it pass, at which time, I hit Ctrl-T, L (which I have bound to “repeat last test run”) before going back to the test class. As that runs, I switch to the test class and see that my test now passed (I can also easily run all tests in the file if I choose). Now, it’s time to write my next test.

So, to recap here, it’s write a test, Ctrl-T, R, switch to another file, make changes, Ctrl-T, L, switch back. I’ve completely eliminated dealing with a pop-up window and created a situation where my tests are running as I’m switching between windows (multi-tasking at its finest).

In terms of observing results, this has a distinct advantage as well. Instead of the results viewer, I can see green next to the actual code. That’s pretty powerful because it says “this code is good” rather than “this entry is good, and if you double click it, you can see which code is good”. And, if you want to see a broader view than the tests, you can see green next to the class and the namespace. Or, you can launch the CodeRush test runner and see a hierarchical view that you can drill into, rather than a list that you can awkwardly sort or filter.

Does This Really Matter?

Is shaving off this small an amount of time worth it? I think so. I run tests dozens, if not hundreds, of times per day. And, as any good programmer knows, shaving time off of your most commonly executed tasks is at the heart of optimizing a process. And, if we can shave time off of a process that people, for some reason, view as a luxury rather than a mandate, perhaps we can remove a barrier to adopting what should be considered a best practice — heck, a prerequisite for being considered a professional, as Uncle Bob would tell you.
