DaedTech

Stories about Software

Mock.Of() and Mock.Get() in Moq

Today, I’d like to highlight a couple of features of Moq that I didn’t know about until relatively recently (thanks to a recent Google+ hangout with Moq author, Daniel Cazzulino). Since learning about these features, I’ve been getting a lot of mileage out of them. But, in order to explain these two features and the different paradigm they represent, let me reference my normal use of Moq.

Let’s say we have some class PencilSharpener that takes an IPencil and sharpens it, and we want to verify that this is accomplished by setting the pencil’s length and sharpness properties.
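
A representative version of that test looks something like this (MSTest syntax, and the property types and the verifications are illustrative rather than a specific original listing):

[TestMethod]
public void Sharpen_Sets_Length_And_Sharpness()
{
    // Create the test double and stub its properties so that writes to them stick.
    var pencil = new Mock<IPencil>();
    pencil.SetupProperty(p => p.Length);
    pencil.SetupProperty(p => p.Sharpness);

    var sharpener = new PencilSharpener();
    sharpener.Sharpen(pencil.Object);

    // Verify that the sharpener mutated the pencil; a real test would assert
    // the specific values it expects rather than just "some value was set."
    pencil.VerifySet(p => p.Length = It.IsAny<double>());
    pencil.VerifySet(p => p.Sharpness = It.IsAny<double>());
}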

So, I create a test double for the pencil, and I do some setup on it, and then I pass it into my sharpener, after which I verify that the sharpener mutates it in an expected way. Fairly straightforward. I create the double and then I manipulate its setup, before passing its object into my class under test. (Incidentally, I realize that I could call “SetupAllProperties()”, but I’m not doing that for illustrative purposes).

But, sometimes I’d rather not think of the test double as a double, but just some object that I’m passing in. That is, perhaps I don’t need to invoke any setup on it, and I just want to reason about the actual proxy implementation, rather than stub.object. Well, that’s where Mock.Of<>() comes in:
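
With it, the same test might collapse to something like this (details again illustrative; Mock.Of() hands back the proxy T directly, with stubbed properties, so there’s no .Object dereferencing anywhere):

[TestMethod]
public void Sharpen_Sets_Length_And_Sharpness()
{
    // pencil is just an IPencil as far as this test is concerned.
    IPencil pencil = Mock.Of<IPencil>();

    var sharpener = new PencilSharpener();
    sharpener.Sharpen(pencil);

    // With property stubs in place, whatever the sharpener set is what we read back here.
    Assert.IsTrue(pencil.Sharpness > 0);
}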

Much cleaner, eh? I never knew I could do this, and I love it. In many tests now, I can reason about the object not as a Mock, but as a T, which is an enormous boost to readability when extensive setup is not required.

Ah, but Erik, what if you get buyer’s remorse? What if you have some test that starts off simple, and then, over time and some production cycles, you find that you need to verify something on it or do some setup? What if we have the test above, but the Sharpen() method of PencilSharpener suddenly makes a call to a new CanBeSharpened() method on IPencil that must suddenly return true… do we need to scrap this approach and go back to the old way? Well, no, as it turns out.
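
Something along these lines does the trick; the CanBeSharpened() setup is the only new wrinkle (details still illustrative):

[TestMethod]
public void Sharpen_Sets_Sharpness_When_Pencil_Can_Be_Sharpened()
{
    IPencil pencil = Mock.Of<IPencil>();
    var sharpener = new PencilSharpener();
    Mock.Get(pencil).Setup(p => p.CanBeSharpened()).Returns(true);

    sharpener.Sharpen(pencil);

    Assert.IsTrue(pencil.Sharpness > 0);
}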

Notice the third line in this test. Mock.Get() takes some T and grabs the Mock containing it for you, if applicable (you’ll get runtime exceptions if you try this on something that isn’t a Mock’s object). So, if you want to stay in the context of creating a T, but you need to “cheat”, this gives you that ability.

The reason I find this so helpful is that I tend to pick one of these modes of thinking and stick with it for the duration of the test. If I’m creating a true mock with the framework — an elaborate test double with lots of customized returns and callbacks and events — I prefer to instantiate a new Mock(). If, on the other hand, the test double is relatively lightweight, I prefer to think of it simply as a T, even if I do need to “cheat” and make the odd setup or verify call on it. I find that this distinction aids a lot in readability, and I’m taking full advantage. I realize that one could simply retain a reference to the Mock and another to the T, but I’m not really much of a fan (though I’m sure I do it now and again). The problem with that, as I see it, is that you’re maintaining two levels of abstraction simultaneously, which is awkward and tends to be confusing for maintainers (or you, later).
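
To be concrete about what I mean by retaining two references, that approach looks something like this:

// Two handles on the same test double -- one at the Mock level, one at the T level.
var pencilMock = new Mock<IPencil>();
IPencil pencil = pencilMock.Object;

// Some lines of the test now reason about pencilMock...
pencilMock.Setup(p => p.CanBeSharpened()).Returns(true);

// ...while others reason about pencil, which is the mixing of abstraction levels I'd rather avoid.
new PencilSharpener().Sharpen(pencil);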

Anyway, I hope that some of you will find this as useful as I did.

Recognizing Test Smells

Tonight, I was plodding through the process of getting to know MongoDB and its Java driver, TDD style. This is an interesting experience, as most of one’s development is going to involve technologies with which one is familiar and working out designs. This is classic problem isolation — all of your tools are familiar, and you have only to work out the design itself. In this case, I have two problems – the technology and the design aren’t familiar. In situations like this, it’s even more important to follow your nose when it comes to smells in code or tests, as you’re really flailing around in the dark. These may be the only things that guide a technology initiate to good practice short of taking a class or reading some books (and even then, they’re important, because you can only learn so much without getting your hands dirty).

So, I found myself with the following code during TDD (not a finished product):

public class LightServiceMongoImpl implements LightService {

	public static final String ROOM_KEY = "room";
	
	public static final String EQUALS_KEY = "$eq";
	
	private DBCollection _collection;
	
	public LightServiceMongoImpl(DBCollection collection) {
		if(collection == null)
			throw new IllegalArgumentException("Collection cannot be null.");
		
		_collection = collection;
	}

	@Override
	public Collection<Light> getLightsInRoom(String roomName) {
		
		ArrayList<Light> myLights = new ArrayList<Light>();
		
		BasicDBObject myQuery = new BasicDBObject();
		myQuery.put(ROOM_KEY, new BasicDBObject(EQUALS_KEY, roomName));
		DBCursor myCursor = _collection.find(myQuery);
		
		while(myCursor != null && myCursor.hasNext()) {
			myLights.add(new Light("asdf", "1"));
		}
		
		return myLights;
	}
}

I’m in the middle of developing a query that will take the following document structure and parse it into a series of Light objects:

{ "_id" : ObjectId("4f7d2ba6fd5a306d82687d48"), "room" : "Den" }
{ "_id" : ObjectId("4f7d2baafd5a306d82687d49"), "room" : "Foyer" }
{ "_id" : ObjectId("4f7d2fdcfd5a306d82687d4a"), "room" : "Master Bedroom" }
{ "_id" : ObjectId("4f7d301afd5a306d82687d4b"), "room" : "Guest Bedroom" }
{ "_id" : ObjectId("4f7d2b98fd5a306d82687d47"), "code" : "A", "lights" : [ { "name" : "Overhead", "code" : "1" } ], "room" : "Kitchen" }

Each document corresponds to a room with a room name, a room code, and 0 to n lights, each of which is a sub-document. This may not be the ideal structure for what I’m trying to accomplish, but it’s a working structure, and these are growing pains.

Clearly, my code doesn’t yet work, but all tests so far pass. I looked at this class critically and decided I’d been a bit lax in the “refactor” portion of “red-green-refactor” and getLightsInRoom() was getting a little unwieldy. So, I refactored to this:

public class LightServiceMongoImpl implements LightService {

	public static final String ROOM_KEY = "room";
	
	public static final String EQUALS_KEY = "$eq";
	
	private DBCollection _collection;
	
	public LightServiceMongoImpl(DBCollection collection) {
		if(collection == null)
			throw new IllegalArgumentException("Collection cannot be null.");
		
		_collection = collection;
	}

	@Override
	public Collection<Light> getLightsInRoom(String roomName) {
		
		ArrayList<Light> myLights = new ArrayList<Light>();
		
		DBCursor myCursor = getCursorForRoomName(roomName);
		
		while(myCursor != null && myCursor.hasNext()) {
			myLights.add(new Light("asdf", "1"));
		}
		
		return myLights;
	}

	/**
	 * Given a room name, get a cursor that matches
	 * @param roomName - Name of the room for which to get a cursor
	 * @return The cursor retrieved
	 */
	private DBCursor getCursorForRoomName(String roomName) {
		
		BasicDBObject myQuery = buildRoomNameQuery(roomName);
		DBCursor myCursor = _collection.find(myQuery);
		
		return myCursor;
	}

	/**
	 * Construct the query for a given room name
	 * @param roomName - room name for which to build the query
	 * @return - query object in question
	 */
	private BasicDBObject buildRoomNameQuery(String roomName) {
		
		BasicDBObject myQuery = new BasicDBObject();
		myQuery.put(ROOM_KEY, new BasicDBObject(EQUALS_KEY, roomName));
		
		return myQuery;
	}
}

Ah, much better. Again, I don’t yet know MongoDB-specific design practices, but separating the parsing of the returned document from the construction of the query from the retrieval of the cursor seems like it is likely to come in handy later if there’s a need to divvy up responsibilities. So, I turned to my most recent test (it’s a passing test):

@Test
public void returns_collection_with_count_of_one_when_matching_record_has_lights_collection_with_one_element() {

	BasicDBObject myLight = new BasicDBObject();
	myLight.put("name", "overhead");
	myLight.put("light", "1");
	
	ArrayList<BasicDBObject> myLights = new ArrayList<BasicDBObject>();
	myLights.add(myLight);
				
	BasicDBObject myRoom = new BasicDBObject();
	myRoom.put("lights", myLights);  
	
	DBObject myMockDatabaseObject = mock(DBObject.class);
	Mockito.when(myMockDatabaseObject.get(RoomServiceMongoImpl.ROOM_NAME_KEY)).thenReturn(myRoom);
	
	DBCursor myMockCursor = mock(DBCursor.class);
	Mockito.when(myMockCursor.next()).thenReturn(myMockDatabaseObject).thenReturn(myMockDatabaseObject).thenReturn(null);
	Mockito.when(myMockCursor.hasNext()).thenReturn(true).thenReturn(false);
						
	DBCollection myMockCollection = PowerMockito.mock(DBCollection.class);
	Mockito.when(myMockCollection.find(any(DBObject.class))).thenReturn(myMockCursor);
	
	LightServiceMongoImpl myService = buildTarget(myMockCollection);
	
	assertEquals(1, myService.getLightsInRoom("asdf").size());
}

Yeesh. I had been so focused on figuring out the MongoDB API that I had allowed myself to be the proverbial boiled frog. That’s a hideous test, and I see that I’m starting to duplicate logic like that in a few of my tests. I had written this off as beyond my control because the Mongo API is such that I need a collection in order to get a cursor, which will give me a database object. If these data access classes (called services temporarily until I set up a proper domain) don’t retain a reference to the collection, they can’t requery the database.

But, that reasoning is no good. The fact that some API works in a certain way is no excuse for me to violate the Law of Demeter. I’m storing a reference to a collection so that I can ask it for a cursor, which I then ask for a DB object. What I really want is a DBObject to parse, and that’s it. It’s time for me to introduce some other class that encapsulates all of that and returns DBObjects to me, which I will parse into my DTOs. I have a sneaking suspicion that doing this will make my test setup much less ugly.

So, on a hunch, I’m going to split responsibilities here such that my service invokes an API that takes a query as an argument and returns a series of DBObject:

public interface MongoQueryService {

	void executeQuery(BasicDBObject query);

	boolean hasNext();

	DBObject getNext();
}

Now, I bet I can use this to make the class under test a bit more focused and my test setup a lot less painful. Here is my refactored class:

public class LightServiceMongoImpl implements LightService {

	public static final String ROOM_KEY = "room";
	
	public static final String EQUALS_KEY = "$eq";
	
	private MongoQueryService _service;
	
	public LightServiceMongoImpl(MongoQueryService service) {
		if(service == null)
			throw new IllegalArgumentException("Service cannot be null.");
		
		_service = service;
	}

	@Override
	public Collection<Light> getLightsInRoom(String roomName) {
		
		ArrayList<Light> myLights = new ArrayList<Light>();
		
		_service.executeQuery(buildRoomNameQuery(roomName));
		
		while(_service.hasNext()) {
			myLights.add(new Light("asdf", "1"));
		}
		
		return myLights;
	}

	/**
	 * Construct the query for a given room name
	 * @param roomName - room name for which to build the query
	 * @return - query object in question
	 */
	private BasicDBObject buildRoomNameQuery(String roomName) {
		
		BasicDBObject myQuery = new BasicDBObject();
		myQuery.put(ROOM_KEY, new BasicDBObject(EQUALS_KEY, roomName));
		
		return myQuery;
	}
}

The collection is gone, as is the intermediate cursor object. In its stead I have the new query service interface that I’ve defined (I’m going to sort out the naming and package structure when I have a better grasp on things). Now, let’s take a look at the refactored test:

@Test
public void returns_collection_with_count_of_one_when_service_hasNext_is_true() {

	MongoQueryService myMockService = mock(MongoQueryService.class);
	Mockito.when(myMockService.hasNext()).thenReturn(true).thenReturn(false);
	
	LightServiceMongoImpl myService = buildTarget(myMockService);
	
	assertEquals(1, myService.getLightsInRoom("asdf").size());
}

Ah… I love the smell of deleted code in the (very early) morning. Smells like… victory! Now, granted, the implementation is still obtuse, and at some point, I’m going to have to deal with the fact that I probably won’t build a light for a null return value from the query service, which might mean revisiting this test, but I’d much rather start simple and add complexity than start complex and add complexity or, worse yet, start complex and have unnecessary complexity.

This new class is substantially easier to reason about. When clients call it, I build a query object to get what I want, tell some API to execute the query, and then iterate through the results. I’m no longer managing mixed levels of abstraction, handling a database collection, a database object iterator, and database object parsing in one flat class. Instead, I handle query construction and the parsing of query results, which are at the same level of abstraction. And, for my good faith refactoring effort, I’m rewarded with a wonderfully simple unit test to run. Now, I can write tests about the query I’m constructing and handling the results of the query (and, if I decide I need to decouple query construction and result processing, which seems like a wise step to take at some point, it will be much easier).

The lesson here is to keep your nose open not just for smells in your code, but for smells in your tests. In my code, I had a Law of Demeter violation (and will probably have another one when I implement the query service, but I’ll deal with that during another refactor cycle). In my test, I had burgeoning test setup. In general, this can’t be ignored, but it’s especially important not to ignore this when you don’t really know what you’re doing. I don’t know enough about MongoDB, Mongo’s Java driver, JUnit, Mockito, etc. to have the luxury of knowing when a smell is indicative of a problem and when it’s not. In this scenario, I have to be very careful to avoid smells if I’m going to have any hope of creating a good design on the fly while teaching myself multiple technologies.

In a similar situation, I’d advise you to take the same approach. My two cents, anyway.

Upgrading TFS from SQLExpress

Back Story

Some time back, I set up Team Foundation Server (TFS) on a server machine more or less dedicated to the cause. This was to test drive it and consider it as a replacement for legacy source control, requirements management, deployment, etc. Since it was a trial run, I opted to keep the setup simpler initially, reasoning that I could expand later if I so chose. As a result, I didn’t bother with SharePoint setup initially, and I allowed the default installation of a database, which was SQLExpress.

Once I got used to the features of the basic installation, I wanted to test out the more advanced ones, but this has proven annoyingly difficult. Setting up SharePoint and trying to retrofit it on existing projects was an enormous hassle, and I wound up having to delete my old projects and re-create them with SharePoint. Granted, these were playpen sorts of projects, but there was actual work in them and they were useful — just not primetime. So, losing them would be a hassle. And besides, it’s kind of hard to fully test a system without using it to do something useful.

After letting the dust settle a bit on that annoyance, I decided I’d switch from SQLExpress to SQL Server Standard to get the full benefit of TFS reporting (via SQL Server Reporting Services). This was another huge pain point, and I’m going to document below what I had to do. Basically, it involved backing up all the SQL Express databases, installing SQL Server 2008 Standard, and importing those backups. This guide is by no means comprehensive and there are a lot of moving parts, so this isn’t exactly a walkthrough, but hopefully it will help save someone at least some annoyance in this battle and maybe shave a little time off.

Steps

These steps were taken as I followed the instructions at this link (my thanks to the blogger, as I found this to be the most helpful guide). I documented below any place I had to deviate from these steps (or google around to figure out how to do one).

  1. Downloaded and burned SQL Server 2008 R2 as an ISO on DVD
  2. Got annoyed because the DVD+ I burned doesn’t seem to be recognized by the server’s disc drive. (Just a problem with the server’s hardware, I think)
  3. Copied and extracted the ISO and ran the install locally.
  4. Got annoyed because I got some weird error message, which I resolved by following the answer here from Russel Fields.
  5. Ran the installer, accepting the defaults until I got to the Feature Selection step. Here, I definitely need Analysis and Reporting Services, which is the point, so I checked those. But, I just went hog wild and checked everything since I’m learning the hard way with TFS to install random features first and ask questions later.
  6. Then I had to give account permissions for a bunch of things, but the install was fairly unremarkable.
  7. The “solid hosting” link I followed was generally accurate, but I did need to figure a few things out.
  8. It didn’t detail exactly how to perform backup and restore — I did this by using SQL Server Management Studio. This isn’t particularly intuitive. I had to go into the databases folder, right-click on each database, and choose “Tasks->Backup”. I had to repeat this for each database and then do a restore in the new SQL server for each database.
  9. When you’re actually doing the restore, what you want is “from device”. “Device” is disk, evidently.
  10. There was a lot of bad noise when trying to restore. I kept getting messages about files in use and suggestions to use the (nonexistent) with replace option. Since I didn’t relish the command line, I got around this bit of idiocy by adding a “1” to the end of the target database name and then deleting the one after the successful restore. Apparently Management Studio’s mind is blown if you want your new database on your new server to have the same name as the database you’re importing. A real corner case, I guess.
  11. I had trouble again when re-installing the application tier. I kept getting a message about no usable databases. I resolved this by looking in the logins of the old and new databases and basically re-creating the permissions from the old in the new. In my case, I needed to create some local login called “wssservice”, and that did the trick.
  12. After that, I had to actually know the password to the “wssservice” account, which was interesting because I wasn’t the one who had chosen this password and the person who had wasn’t around at the time. I never figured this out, but I did use my admin credentials to reset it.
  13. Finally, when trying to reattach, I had to do this for some reason.
  14. And, that was it. After rebooting the server for good measure, and re-activating my automated builds, I was able to check in, check out, build, etc.

At the end of all of this, the lesson that I took away for future reference with TFS is that, when installing it, you should install all of the bells and whistles if there is any doubt. Adding functionality after the install is painful, so it’s better to install things if there’s any chance you’ll need them.

Behind the Scenes Feed Magic

Very quickly, I just installed a plugin to consolidate subscriptions through my FeedBurner account. Basically, my FeedBurner account will become aware of all subscribers now, including those who simply access daedtech.com/feed or those who use the subscription buttons on the left. As I understand it, this should have no effect on subscriptions whatsoever. Nonetheless, if it does — if anyone finds they’re having trouble with the feeds — please leave a comment and I’ll look into it.

Thanks, and thanks for reading!

The Slippery Scope

Roll Up Your Sleeves

Have you ever stared into a code mess? I don’t mean some little mess, but rather the kind of mess that, as Nietzsche put it, stares back into you if you stare at it for too long. You can almost feel it making you worse at programming. It’s the kind of thing that everyone agrees is a mess with no dissent. It’s the 10,000 line class or the class titled GlobalVariables that houses hundreds of them. It’s the kind of mess that leads to someone having you add a few more lines to that Godzilla class or a few more globals to the mix in order to get things done, and hating yourself as you do it. I think you’ve probably seen it. It’s an experience you don’t quickly forget, and one you may even commemorate with a submission to The Daily WTF.

There are a variety of ways that people respond to this state of affairs. Some will shrug and adopt a “when in Rome” attitude as they blithely break another window in this dilapidated building. Others will refuse to make things worse (at least inasmuch as the group political dynamic allows) without making things better either. Still others will apply the so-called “Boy Scout Rule” toward small, incremental improvements, fixing the application a few squashed globals and factored out methods at a time. But, there is another category of response to this, and it is the theme of my post today. That category is the rolling up of sleeves in an effort to “do this right, once and for all.”

If Some Is Good, More Is Better?

The people who take this latter approach, sticking with the Nietzsche theme, probably consider themselves uber-boy-scouts. Boy scouts aim to leave the code a little better than they find it, but uber-boy-scouts aim to leave the code a lot better. Curiously, however, boy scouts tend to get a lot done over time toward improvement, whereas uber-boy-scouts generally accomplish nothing. What gives? If some is good, isn’t more better?

Well, as it turns out, not always. Cupcakes, nighttime sleep aids, and bourbon all come to mind as counterexamples to the premise, and boy-scout methodology tends to fit here as well. But, unlike the others, the issue isn’t “some things are better in moderation.” There’s a subtly different issue at play here. The problem isn’t that too much fixing is bad, but rather that the fixing never actually starts happening in the first place.

And why not? Well, because of the slippery scope. The boy scout says “I’m going to eliminate these two global variables and call it a day. Maybe tomorrow, while working on my next feature, I’ll knock out three more.” The uber-boy-scout says:

While I’m doing this, I’ll knock out these two global variables. But really, if I’m going to do that, I should knock out these other ten. And, besides that, we really should break this global class up into 5 smaller globals classes, which will involve changing some namespaces around. Once we do that, we’re going to have to touch just about every class in the application. Wow, this is a huge effort. And, not to mention that the Godzilla class is using these globals and that’s part of what makes it so big, so we really also need to factor a few methods out of Godzilla into a new class in a new folder. I’d better clear my calendar this week. And, as long as we’re creating that folder, we should probably also add these other three classes, which could implement a state pattern for the GUI. And, as long as we’re doing that…

…wow, I just don’t think this refactoring is feasible right now. We’ll have to revisit this in vNext.

What makes this even worse is that, after careening down this slippery slope of needed changes, the uber-boy-scout actually tends to turn into the “when in Rome” developer, reasoning that there’s really nothing he can do right now but go along to get along. The uber-boy-scout has sort of an all-or-nothing mentality that if he can’t fix all of the broken windows in the next ten minutes, he might as well break some more so that he has an even stronger reminder to fix them all in 10 minutes at some date to be determined later.

The Fallacy Of It All

For those not familiar with rhetorical logic, slippery slope is a logical fallacy.

The Slippery Slope is a fallacy in which a person asserts that some event must inevitably follow from another without any argument for the inevitability of the event in question. In most cases, there are a series of steps or gradations between one event and the one in question and no reason is given as to why the intervening steps or gradations will simply be bypassed.

Slippery Scope, as I’m coining it, is the engagement of this fallacy in the domain of software refactorings. That is, it takes on the form “if we’re going to refactor A, we might as well/have to/should refactor B, and if we’re going to refactor B, we might as well/have to/should refactor C, etc., ad nauseam.” It is also a form of the false dilemma fallacy (e.g. “if you’re not with us you’re against us”) in that it basically asserts that either all refactorings or none must be performed.

But whatever you want to call it, the reasoning is demonstrably fallacious. And, it’s also obviously fallacious, so why do people do this? My hypothesis is that there are two main motivations for uber-boy-scouting. The first is earnest and benign — the person in question is rather excitable and thinks a mile per minute. And, once the refactoring cat is out of the bag, it’s hard to say “no, I’m not going to fix everything right now.” The uber-boy-scout has discovered a problem and can’t bear the thought of not addressing it forthwith. In situations like these, the uber-boy-scout can probably be convinced to ratchet it down to regular boy-scout.

In the other scenario, which is probably far less common, I think the motivation is laziness and mild deception (perhaps even self-deception). In this scenario, the uber-boy-scout actually prefers the status quo and its mess, but knows that he shouldn’t. So, he tosses out the refactoring idea as a false flag, chaining together all of the work that would be required to fix the mess and concluding with the flourish that it’s just not feasible, so… “when in Rome.” He gets the ‘benefit’ of continuing to play in the mess while not suffering the reputation blowback of actually liking the mess. Dealing with a passive-aggressive scheme like this is significantly trickier and group realpolitik dynamics are beyond the scope of this post.

Concentrate on Fixing Your Own Tendencies

So, rather than talk about how to fix others, I think we could all benefit from looking at ourselves and doing an honest assessment as to whether or not we exhibit uber-boy-scout tendencies. Do we fall into a nice rhythm of methodically and constantly factoring the code toward improvement, or do we tend more toward these all-or-nothing, heroic refactorings? If the answer is the latter, my advice is to reconsider your approach. Even if you pull off these refactoring coups, it’s probably at the expense of late nights, nervous team members, and a lot of stress. Be conscientious, by all means. Don’t be a “when in Rome” developer, but also bear in mind that the city in question was not built in a day.

Decorator

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy

Up Front Definitions

  1. Decorator: An object that both inherits from and operates on another object.
  2. Component: Abstract target base class (or interface) of the decorator pattern.
  3. Target: For the purposes of this post, I use this term interchangeably with Component.

The Problem

Let’s say that you have to model employees at a company, and all employees have a name, an hourly wage, and a bio that describes them. Furthermore, these items are going to be determined by the type of employee that each employee is (they can be Workers, Managers, or Executives for now, but there might be more later). Since this begs for an extensible, polymorphic structure, you create the following class structure and program to take it for a test drive:

public abstract class Employee
{
    public string Name { get; protected set; }

    public double HourlyWage { get; protected set; }

    public string EmployeeBio { get; protected set; }

    protected Employee(string name, double hourlyWage, string bio)
    {
        HourlyWage = hourlyWage;
        Name = name;
        EmployeeBio = bio;
    }
}

public class Worker : Employee
{
    public Worker(string name) : base(name, 12.5, "Rank and File") { }
}

public class Manager : Employee
{
    public Manager(string name) : base(name, 25.0, "Line Manager") { }
}

public class Executive : Employee
{
    public Executive(string name) : base(name, 50.0, "Bigwig") { }
}

class Program
{
    static void Main(string[] args)
    {
        var myEmployees = new List<Employee>();
        myEmployees.Add(new Worker("Jim Halpert"));
        myEmployees.Add(new Manager("Michael Scott"));
        myEmployees.Add(new Executive("Jan Levenson"));

        PrintAllEmployees(myEmployees);
    }

    private static void PrintAllEmployees(List<Employee> myEmployees)
    {
        foreach (var myEmployee in myEmployees)
        {
            Console.WriteLine(String.Format("{0}, {1}, makes {2} per hour.", myEmployee.Name, myEmployee.EmployeeBio, myEmployee.HourlyWage));
        }
        Console.ReadLine();
    }
}

Simple enough. This is nice and clean because later, if we want to add a new type of employee (maybe Toby from HR), we can just create a new type of employee through inheritance, and the print function will hum happily along. This is classic Open/Closed Principle stuff. Different types of employees define their own attributes, and there are no ugly enums called “EmployeeType” — life is good.

Now, let’s say that an industrious project manager gets wind of the rave reviews your office modeling program has gotten for its usability and clean design and decides to hitch his wagon to your effort. You get a request coming in that employees be able to be distinguished as “high performers”. Any employee with the “high performer” designation should be paid 10% more than non-high performer counterparts, and this recognition should be reflected in the employee’s bio. So, you change the code to look like this:

public abstract class Employee
{
    public string Name { get; protected set; }

    public double HourlyWage { get; protected set; }

    public string EmployeeBio { get; protected set; }

    protected Employee(string name, double hourlyWage, string bio, bool isHighPerformer = false)
    {
        HourlyWage = hourlyWage;
        Name = name;
        EmployeeBio = bio;
        if (isHighPerformer)
        {
            HourlyWage *= 1.10;
            EmployeeBio = EmployeeBio + ", High Performer";
        }
    }
}

public class Worker : Employee
{
    public Worker(string name, bool isHighPerformer = false) : base(name, 12.5, "Rank and File", isHighPerformer) { }
}

public class Manager : Employee
{
    public Manager(string name, bool isHighPerformer = false) : base(name, 25.0, "Line Manager", isHighPerformer) { }
}

public class Executive : Employee
{
    public Executive(string name, bool isHighPerformer = false) : base(name, 50.0, "Bigwig", isHighPerformer) { }
}

class Program
{
    static void Main(string[] args)
    {
        var myEmployees = new List<Employee>();
        myEmployees.Add(new Worker("Jim Halpert", true));
        myEmployees.Add(new Manager("Michael Scott"));
        myEmployees.Add(new Executive("Jan Levenson"));

        PrintAllEmployees(myEmployees);
    }

    private static void PrintAllEmployees(List<Employee> myEmployees)
    {
        foreach (var myEmployee in myEmployees)
        {
            Console.WriteLine(String.Format("{0}, {1}, makes {2} per hour.", myEmployee.Name, myEmployee.EmployeeBio, myEmployee.HourlyWage));
        }
        Console.ReadLine();
    }
}

Hmmm… polymorphic structure still looks good, and the optional parameter even allows you to implement this functionality as a non-breaking change for client code, but the constructors are starting to get a little noisy and unfocused. It’s also not necessarily awesome that you had to touch every class constructor in the polymorphic structure, but whatever, project management is happy and hopefully they don’t ask for more stuff like that.

And, turns out they don’t. Instead, they want to be able to record the fact that some employees are members of the corporate “safety team”, responsible for knowing how fire extinguishers work and counting people during fire drills and whatnot. So, during the normal print, safety team membership should be reflected in the bio, but it should also be possible to see what a safety team member’s team responsibility is.

Here we have a classic inheritance situation… sort of. Safety team members are still normal employees (meaning any safety team member must be a worker, manager, or exec), so they can’t inherit directly from the abstract base Employee class. And, it doesn’t make sense to add safety team information and implementation to any existing classes, since none of them necessarily represent safety team members. So, you suppose they’ll just have to inherit from the three classes we’ve defined so far, like this:

public abstract class Employee
{
    public string Name { get; protected set; }

    public double HourlyWage { get; protected set; }

    public string EmployeeBio { get; protected set; }

    protected Employee(string name, double hourlyWage, string bio, bool isHighPerformer = false)
    {
        HourlyWage = hourlyWage;
        Name = name;
        EmployeeBio = bio;
        if (isHighPerformer)
        {
            HourlyWage *= 1.10;
            EmployeeBio = EmployeeBio + ", High Performer";
        }
    }
}

public class Worker : Employee
{
    public Worker(string name, bool isHighPerformer = false) : base(name, 12.5, "Rank and File", isHighPerformer) { }
}

public class SafetyTeamWorker : Worker
{
    public string SafetyDescription { get; set; }

    public SafetyTeamWorker(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Safety Team Member";
    }
}

public class Manager : Employee
{
    public Manager(string name, bool isHighPerformer = false) : base(name, 25.0, "Line Manager", isHighPerformer) { }
}

public class SafetyTeamManager : Manager
{
    public string SafetyDescription { get; set; }

    public SafetyTeamManager(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Safety Team Member";
    }
}

public class Executive : Employee
{
    public Executive(string name, bool isHighPerformer = false) : base(name, 50.0, "Bigwig", isHighPerformer) { }
}

public class SafetyTeamExecutive : Executive
{
    public string SafetyDescription { get; set; }

    public SafetyTeamExecutive(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Safety Team Member";
    }
}

class Program
{
    static void Main(string[] args)
    {
        var myEmployees = new List<Employee>();
        myEmployees.Add(new Worker("Jim Halpert", true));
        myEmployees.Add(new SafetyTeamManager("Michael Scott"));
        myEmployees.Add(new Executive("Jan Levenson"));

        PrintAllEmployees(myEmployees);
    }

    private static void PrintAllEmployees(List<Employee> myEmployees)
    {
        foreach (var myEmployee in myEmployees)
        {
            Console.WriteLine(String.Format("{0}, {1}, makes {2} per hour.", myEmployee.Name, myEmployee.EmployeeBio, myEmployee.HourlyWage));
        }
        Console.ReadLine();
    }
}

Uh, oh. This is starting to smell. You’re not thrilled about creating three classes when what you really wanted was one class, and you’re really not thrilled about the fact that the three classes are copy and paste jobs of one another. But, no time for all that clean code claptrap now; you’re getting more requirements in from the project manager and you’ve got to get things done. The system also has to be able to track whether employees are on the softball team and, if so, what position they play. So that means another three copy and paste classes. Except, wait a minute, safety team members can be softball players too. So, we really need to add another 6 copy and paste classes, bringing the total to 9 copies and the 3 that we started with:

public abstract class Employee
{
    public string Name { get; protected set; }

    public double HourlyWage { get; protected set; }

    public string EmployeeBio { get; protected set; }

    protected Employee(string name, double hourlyWage, string bio, bool isHighPerformer = false)
    {
        HourlyWage = hourlyWage;
        Name = name;
        EmployeeBio = bio;
        if (isHighPerformer)
        {
            HourlyWage *= 1.10;
            EmployeeBio = EmployeeBio + ", High Performer";
        }
    }
}

public class Worker : Employee
{
    public Worker(string name, bool isHighPerformer = false) : base(name, 12.5, "Rank and File", isHighPerformer) { }
}

public class SoftballPlayingWorker : Worker
{
    public string Position { get; set; }

    public SoftballPlayingWorker(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Softball Player";
    }
}

public class SafetyTeamWorker : Worker
{
    public string SafetyDescription { get; set; }

    public SafetyTeamWorker(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Safety Team Member";
    }
}

public class SoftballPlayingSafetyTeamWorker : SafetyTeamWorker
{
    public string Position { get; set; }

    public SoftballPlayingSafetyTeamWorker(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Softball Player";
    }
}

public class Manager : Employee
{
    public Manager(string name, bool isHighPerformer = false) : base(name, 25.0, "Line Manager", isHighPerformer) { }
}

public class SoftballPlayingManager : Manager
{
    public string Position { get; set; }

    public SoftballPlayingManager(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Softball Player";
    }
}

public class SafetyTeamManager : Manager
{
    public string SafetyDescription { get; set; }

    public SafetyTeamManager(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Safety Team Member";
    }
}

public class SoftballPlayingSafetyTeamManager : SafetyTeamManager
{
    public string Position { get; set; }

    public SoftballPlayingSafetyTeamManager(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Softball Player";
    }
}

public class Executive : Employee
{
    public Executive(string name, bool isHighPerformer = false) : base(name, 50.0, "Bigwig", isHighPerformer) { }
}

public class SoftballPlayingExecutive : Executive
{
    public string Position { get; set; }

    public SoftballPlayingExecutive(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Softball Player";
    }
}

public class SafetyTeamExecutive : Executive
{
    public string SafetyDescription { get; set; }

    public SafetyTeamExecutive(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Safety Team Member";
    }
}

public class SoftballPlayingSafetyTeamExecutive : SafetyTeamExecutive
{
    public string Position { get; set; }

    public SoftballPlayingSafetyTeamExecutive(string name, bool isHighPerformer = false) : base(name, isHighPerformer)
    {
        EmployeeBio += ", Softball Player";
    }
}

class Program
{
    static void Main(string[] args)
    {
        var myEmployees = new List<Employee>();
        myEmployees.Add(new Worker("Jim Halpert", true));
        myEmployees.Add(new SafetyTeamManager("Michael Scott"));
        myEmployees.Add(new SoftballPlayingSafetyTeamExecutive("Jan Levenson"));

        PrintAllEmployees(myEmployees);
    }

    private static void PrintAllEmployees(List<Employee> myEmployees)
    {
        foreach (var myEmployee in myEmployees)
        {
            Console.WriteLine(String.Format("{0}, {1}, makes {2} per hour.", myEmployee.Name, myEmployee.EmployeeBio, myEmployee.HourlyWage));
        }
        Console.ReadLine();
    }
}

Suddenly, you’ve got a train wreck on your hands. After you track down a few hard-to-fix defects that resulted from the wrong inheritance scheme when you forgot to change “Worker” to “Manager” in one of your copy-paste implementations, the project manager tells you that there’s no time for refactoring because now you need to be able to track who is on the Party Planning Committee and who is a member of the Finer Things Club. Your 12 classes just became 36 and then 108.

The only option at this point is clearly to write a program that can automate generating this code for you. That, or retire.

So, What to Do?

As is often the case in this series of posts, the important thing is to retrace our steps and figure out where things went off the rails. And, as is even more often the case, our problems started as soon as we started duplicating code, and they got worse the more egregiously we did it. So, where do we go from here? It’s simpler in this post than in some of the others — let’s start by deleting all of these misguided inheritors and revert to the state we were in immediately after defining our optional “isHighPerformer” parameter (we’ll get to that later).

Now, let’s add a class called “EmployeeDecorator”. It will inherit from Employee and be abstract. If you’ll recall, when implementing the safety team requirement, we dismissed the possibility of safety team inheriting from Employee directly because safety team members also needed to be Workers, Managers, or Executives, but we’re going to introduce a way around that problem:

public abstract class EmployeeDecorator : Employee
{
    private readonly Employee _target;
    protected Employee Target { get { return _target; } }

    public EmployeeDecorator(Employee target)
    {
        _target = target;
        Name = target.Name;
        HourlyWage = target.HourlyWage;
        EmployeeBio = target.EmployeeBio;
    }
}

Notice that this decorator object curiously both inherits from and wraps Employee, and proceeds to seed its own properties with those of the employee. This allows the decorator to look to clients as though it is the object it’s wrapping. This has some powerful ramifications, as we can see here:

public class SafetyTeamMember : EmployeeDecorator
{
    public string SafetyDescription { get; set; }

    public SafetyTeamMember(Employee target) : base(target)
    {
        EmployeeBio += "Safety Team Member ";
    }
}

Now, we can add safety team member functionality to an employee simply by taking an employee and injecting it into the constructor of the concrete decorator. As we do this, we trust the decorator to apply the appropriate modifications to the employee and expose the properties that we would demand of the employee. So, in this case, the decorator adds bio information, and when we query its bio, we get the employee’s information plus the “decoration” of the addition to the bio. On top of that, we can set this object’s “Safety Description”. In effect, we’re extending the functionality of an object without inheriting directly from it. The common ancestor is, however, necessary, as this is what gives us access to the protected members of Employee. Here is the client instantiation logic:

static void Main(string[] args)
{
    var myEmployees = new List<Employee>();
    myEmployees.Add(new Worker("Jim Halpert", true));
    var myManager = new Manager("Michael Scott");
    myEmployees.Add(new SafetyTeamMember(myManager));
    myEmployees.Add(new Executive("Jan Levenson"));

    PrintAllEmployees(myEmployees);
}

And this results in the same printed information. Now, let’s implement our softball player requirement and set Michael Scott up as both a softball player and a safety team member.

public class SoftballPlayer : EmployeeDecorator
{
    public string Position { get; set; }

    public SoftballPlayer(Employee target) : base(target)
    {
        EmployeeBio += "Softball Player, ";
    }
}

class Program
{
    static void Main(string[] args)
    {
        var myEmployees = new List<Employee>();
        myEmployees.Add(new Worker("Jim Halpert", true));
        Employee myManager = new Manager("Michael Scott");
        myManager = new SoftballPlayer(myManager);
        myEmployees.Add(new SafetyTeamMember(myManager));
        myEmployees.Add(new Executive("Jan Levenson"));

        PrintAllEmployees(myEmployees);
    }

    private static void PrintAllEmployees(List<Employee> myEmployees)
    {
        foreach (var myEmployee in myEmployees)
        {
            Console.WriteLine(String.Format("{0}, {1} makes {2} per hour.", myEmployee.Name, myEmployee.EmployeeBio, myEmployee.HourlyWage));
        }
        Console.ReadLine();
    }
}

Now, Michael is a real Renaissance Man, and it only took 3 extra classes rather than 12. And, that number will rise to 5, rather than 108, with the next set of project management requirements. So, we’re sitting pretty, but we can do even better — we can refactor the high performer functionality to be part of another decorator as well, to achieve this finished product:

public abstract class Employee
{
    public string Name { get; protected set; }

    public double HourlyWage { get; protected set; }

    public string EmployeeBio { get; protected set; }

    protected Employee() { }

    protected Employee(string name, double hourlyWage, string bio)
    {
        HourlyWage = hourlyWage;
        Name = name;
        EmployeeBio = bio;
    }
}

public class Worker : Employee
{
    public Worker(string name) : base(name, 12.5, "Rank and File, ") { }
}

public class Manager : Employee
{
    public Manager(string name) : base(name, 25.0, "Line Manager, ") { }
}

public class Executive : Employee
{
    public Executive(string name) : base(name, 50.0, "Bigwig, ") { }
}

public abstract class EmployeeDecorator : Employee
{
    private readonly Employee _target;
    protected Employee Target { get { return _target; } }

    public EmployeeDecorator(Employee target)
    {
        _target = target;
        Name = target.Name;
        HourlyWage = target.HourlyWage;
        EmployeeBio = target.EmployeeBio;
    }
}

public class SafetyTeamMember : EmployeeDecorator
{
    public string SafetyDescription { get; set; }

    public SafetyTeamMember(Employee target) : base(target)
    {
        EmployeeBio += "Safety Team Member, ";
    }
}

public class SoftballPlayer : EmployeeDecorator
{
    public string Position { get; set; }

    public SoftballPlayer(Employee target) : base(target)
    {
        EmployeeBio += "Softball Player, ";
    }
}

public class HighPerformer : EmployeeDecorator
{
    public HighPerformer(Employee target) : base(target)
    {
        EmployeeBio += " High Performer,";
        HourlyWage *= 1.10;
    }
}

class Program
{
    static void Main(string[] args)
    {
        var myEmployees = new List<Employee>();
        Employee myWorker = new Worker("Jim Halpert");
        myEmployees.Add(new HighPerformer(myWorker));
        Employee myManager = new Manager("Michael Scott");
        myManager = new SoftballPlayer(myManager);
        myEmployees.Add(new SafetyTeamMember(myManager));
        myEmployees.Add(new Executive("Jan Levenson"));

        PrintAllEmployees(myEmployees);
    }

    private static void PrintAllEmployees(List<Employee> myEmployees)
    {
        foreach (var myEmployee in myEmployees)
        {
            Console.WriteLine(String.Format("{0}, {1} makes {2} per hour.", myEmployee.Name, myEmployee.EmployeeBio, myEmployee.HourlyWage));
        }
        Console.ReadLine();
    }
}

Now, that weird extra parameter is gone — “high performance” is now something added to employees rather than ingrained. And, the high performance decorator illustrates nicely that decorators can serve either or both of two purposes: extending functionality and/or modifying the target (“component”).

A More Official Explanation

Turning once again to dofactory (UML diagram image came from here as well), a more formal definition of the Decorator Pattern is to:

Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality.

The key thing to note in our process of refactoring toward the Decorator pattern is that we were defining a scheme whereby employees could be given ancillary responsibilities without changing their fundamental makeup. In our example, all employees have a wage, name, and bio, but not all employees are safety team members, high earners, etc. The Decorator pattern allows us to reflect this reality by attaching these ancillary responsibilities in ad hoc fashion, and without adding any properties to the Employee class or sub-classes that may not be used.

The Decorator Pattern is a means for extending a class’s functionality without modifying that class and without sub-classing it directly. It takes advantage of the nature of inheritance (through a common base class) to gain access to the class’s protected members in order to expose new state-modifying behavior to its clients.

Other Quick Examples

Here are some other places that the Decorator pattern is used:

  1. GUI scenarios where a GUI component may have many tangentially related properties (e.g. a window may have scrollbars, which are not really intrinsic to the Window itself, but handy to be able to add)
  2. “Stream” objects (Java) that add behavior to application I/O by decorating the basic I/O implementation.
  3. Anywhere that there is a significant desire to extend the main functionality of an object hierarchy, such as a series of customizations to an invoice or order object.

A Good Fit – When to Use

The Decorator pattern makes sense to use if you have an object hierarchy that you want to keep pretty much as-is, but you want to add some custom, kind of one-off behavior to it. In the example above, we like the employee hierarchy and don’t really want to clutter it up, but we would like to capture the fact that employees can also occasionally have other, custom behaviors. This makes sense if we extend the example to GUI components like windows as well. We may sometimes want windows to have scrollbars, but that doesn’t mean there is a need to have ScrollbarWindow, HorizontalScrollbarWindow, VerticalScrollbarWindow, BothScrollbarWindow, etc. for every conceivable type of window.

In fact, that type of combinatorial explosion of types is another definite call for the Decorator pattern. The pattern is a good fit for adding one-off behavior to object hierarchies, but it’s pretty much a must if you find yourself in a situation where simple inheritance would mean defining something like n! new classes for each behavior you want. If you find yourself saying “well, I’d need a HorizontalScrollbarDialogWindow and a HorizontalScrollbarNormalWindow, and a VerticalScrollbarDialogWindow, etc.”, stop and give serious thought to Decorator.

Square Peg, Round Hole – When Not to Use

This pattern may not be necessary for certain scenarios. Consider our working example above, and forget all of the other “hats” employees can wear but the Safety Team. If we knew this was the only other thing to be modeled, it might be worth having a SafetyTeam object that stored a collection of employee objects. This would allow all members of the safety team to be identified. Granted, you wouldn’t get it as part of the employee bio, but perhaps that could be achieved in another way. Point is, Decorator may be overkill if there isn’t much decoration going on.
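
A minimal sketch of that alternative (the class here is hypothetical, not one of the listings above) might be nothing more than:

// A simple alternative to decoration when the safety team is the only "extra hat"
// employees can wear: model the team itself and have it store its member employees.
public class SafetyTeam
{
    private readonly List<Employee> _members = new List<Employee>();

    public void AddMember(Employee member)
    {
        _members.Add(member);
    }

    public IEnumerable<Employee> Members
    {
        get { return _members; }
    }
}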

Decorator also may not fit if you have no combinatorial explosion and significant control over and flexibility with the object hierarchy. It’s not an option that I personally like, and I think it’s a bit clumsy, but we could have achieved some of the functionality above with the Employee base class having properties for IsSafetyTeamMember and IsHighPerformer and IsSoftballPlayer. If we knew that we were limited to these three (and thus not building an outhouse), this rigid design would suffice.

So, the point is that having and dealing with some existing object hierarchy does not automatically mean that the Decorator pattern is a good fit. It introduces some design complexity and potential understanding issues, so it should be introduced to solve a real, current problem, rather than a hypothetical future one.

So What? Why is this Better?

As demonstrated above, Decorator makes code a lot cleaner in situations where responsibilities need to be added in ad-hoc fashion to an existing object hierarchy. This is particularly true when the new responsibilities are somewhat tangential to the main hierarchy. Using this pattern allows you to manipulate protected members of the class in question without sub-classing it directly, and to present an object to clients that looks uniform and hides the trickery that you’ve pulled.

Your inheritance hierarchies when using this pattern will remain both shallower and narrower, which definitely facilitates better understanding of the code. But, you won’t have to sacrifice any flexibility to get this more manageable inheritance tree. And, in the end, you’ll save yourself the ugliness of getting into a situation where the number of classes you have is described in terms of a factorial.

JUnit for C# Developers 8 – Obeying Demeter and Going Beyond the Tests

Last time in this series, I pulled an “Empire Strikes Back” and ended on a bit of a down note. This time around, I’d like to explore how I’ve alleviated my Law of Demeter problems, and how fixing a code smell in my tests pushed me into a better design.

Up until now, I’ve been blogging as I go, but this one is all in the past tense — the work is done as I type this. I set out tonight with only one goal, getting rid of my LOD violations, and this is where it took me.

Rethinking my Class

Recall that last time, I was passing in a database object, querying that for a collection, querying that for a cursor, and then querying the cursor for my actual database objects that I parsed and returned from the service. After a bit of trial and error and research, I decided that my service class needed to encapsulate the collection since, as best as I can tell from whatever Eclipse’s version of Intellisense is called, cursors are forward only and then you need to get another one. So, if I don’t pass in the collection at least, my service method will only work once. Fine – not thrilled about the collection.cursor.objects thing, but it’s at least pulling one LOD violation out.

I now have a handful of tests that look like this:

@Test
public void returns_room_model_with_roomName_from_database_room_key() {
	
	String myRoomName = "Rumpus Room.  Yeah, that's right.  I said Rumpus Room.";
	
	DBObject myMockDatabaseObject = mock(DBObject.class);
	Mockito.when(myMockDatabaseObject.get(RoomServiceMongoImpl.ROOM_NAME_KEY)).thenReturn(myRoomName);
	
	DBCursor myMockCursor = mock(DBCursor.class);
	Mockito.when(myMockCursor.next()).thenReturn(myMockDatabaseObject).thenReturn(myMockDatabaseObject).thenReturn(null);
	Mockito.when(myMockCursor.hasNext()).thenReturn(true).thenReturn(true).thenReturn(false);
			
	DBCollection myMockCollection = PowerMockito.mock(DBCollection.class);
	Mockito.when(myMockCollection.find()).thenReturn(myMockCursor);
	
	RoomServiceMongoImpl myService = BuildTarget(myMockCollection);
	
	assertEquals(myRoomName, myService.getAllRooms().toArray(new Room[2])[0].getRoomName());
}

and my class became:

public class RoomServiceMongoImpl implements RoomService {

	public static final String ROOM_CODE_KEY = "room_code";

	public static final String ROOM_NAME_KEY = "room";
	
	private DBCollection _collection;
	
	public RoomServiceMongoImpl(DBCollection collection) {
		_collection = collection;
	}

	@Override
	public Collection<Room> getAllRooms() {
		Collection<Room> myRooms = new ArrayList<Room>();
		
		DBCursor myCursor = _collection.find();
		while(myCursor != null && myCursor.hasNext()) {
			RoomModel myModel = buildRoomModel(myCursor.next());
			if(myModel != null)
				myRooms.add(myModel);
		}
		
		return myRooms;
	}
	
	private RoomModel buildRoomModel(DBObject roomObject) {
		Object myRoomName = roomObject.get(ROOM_NAME_KEY);
		char myRoomCode = getRoomCode(roomObject.get(ROOM_CODE_KEY));
		
		if(myRoomName != null) {
			return new RoomModel(myRoomName.toString(), null, myRoomCode);
		}
		return null;
	}

	private char getRoomCode(Object myRoomCode) {
		return myRoomCode != null && myRoomCode.toString() != null && myRoomCode.toString().length() > 0 ?
				myRoomCode.toString().charAt(0) : 0;
	}
}

A lot cleaner and more manageable following some good TDD, if I do say so myself (though I may be whiffing on some finer points of the language, as I’m still rusty from two years of mostly uninterrupted C#). I’m still not thrilled about the heavy test setup overhead, but I’ve made incremental progress.
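If that setup overhead keeps bugging me, one direction I could take it (just a sketch, not something I’ve actually done in this code; the helper name is mine, and it assumes an import of org.mockito.stubbing.OngoingStubbing) would be to pull the cursor/collection rigging into a private helper in the test class, so each test only has to declare the DBObjects it cares about:

// Hypothetical helper: builds a mock collection whose cursor walks the
// supplied documents and then reports exhaustion.
private static DBCollection buildMockCollection(DBObject... documents) {
	DBCursor myMockCursor = mock(DBCursor.class);
	
	OngoingStubbing<DBObject> myNextStubbing = Mockito.when(myMockCursor.next());
	OngoingStubbing<Boolean> myHasNextStubbing = Mockito.when(myMockCursor.hasNext());
	for (DBObject myDocument : documents) {
		myNextStubbing = myNextStubbing.thenReturn(myDocument);
		myHasNextStubbing = myHasNextStubbing.thenReturn(true);
	}
	myNextStubbing.thenReturn(null);
	myHasNextStubbing.thenReturn(false);
	
	// find() is final, so the collection still needs the PowerMockito mock
	DBCollection myMockCollection = PowerMockito.mock(DBCollection.class);
	Mockito.when(myMockCollection.find()).thenReturn(myMockCursor);
	return myMockCollection;
}

A test would then shrink to building its DBObject mocks and calling BuildTarget(buildMockCollection(myMockDatabaseObject)).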

Now, where things got interesting is in wiring this up through Spring and MongoDB. The class works in test, but I now need to figure out how to use my spring-servlet.xml to get an instance of the collection injected into my class’s constructor. I wanted to do this (1) without defining any additional code and (2) without resorting to static implementations or singletons. For (1), I’d rather leave the DB setup stuff in XML as much as possible, and for (2), I try to avoid static at all costs unless there’s some compelling argument that doesn’t lean prominently on a premise of “it’s more convenient.” Static is about as flexible as a diamond.

So, here is what I did:

<!-- This is the "Mongo" object where you specify connection credentials -->
    <bean id ="mongo" class="com.mongodb.Mongo">
    	<constructor-arg index="0" type="java.lang.String" value="192.168.2.191"/>
    </bean>
    
    <!--  This is the "daedalus" database returned from the mongo object -->
    <bean id="mongoDatabase" factory-bean="mongo" factory-method="getDB">
    	<constructor-arg index="0" value="daedalus"/>
    </bean>
    
    <!-- This is the "house" collection, which is a collection of rooms  -->
    <bean id="mongoHouseCollection" factory-bean="mongoDatabase" factory-method="getCollection">
    	<constructor-arg index="0" value="house"/>
    </bean>

I discovered that I can use the factory-bean and factory-method attributes to invoke instance methods on beans I’d already created, turning their return values into other beans. I also learned that “constructor-arg” is rather unfortunately named, in that it actually just means “arguments to the method in question.” So, in the case of the mongoDatabase bean, I’m getting it from my mongo object’s getDB() method with a string parameter of “daedalus”. On the whole, the beans above translate to new Mongo(“192.168.2.191”).getDB(“daedalus”).getCollection(“house”) being stored in the “mongoHouseCollection” bean, which I injected into my service. When I wired and fired it, it worked perfectly the first time.
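For completeness, the last bit of wireup is just handing that collection bean to the service’s constructor. It looks something like the following (the bean id and package name here are placeholders of mine, not necessarily what’s in my actual spring-servlet.xml):

    <!-- Inject the house collection into the Mongo-backed room service.
         Bean id and package are illustrative placeholders. -->
    <bean id="roomService" class="com.daedtech.daedalus.service.RoomServiceMongoImpl">
    	<constructor-arg index="0" ref="mongoHouseCollection"/>
    </bean>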

So, this post has been a little thin on actual information about JUnit (really just the denouement to my last post), but there is a nugget in here for Spring wireup, and I think the most important lesson for me is that the design benefits of TDD go beyond just code. By taking my test smell seriously, I wound up with a design where I completely factored the database setup garbage out of my code, which is clearly a good thing. Now, I’ve been around the block enough times that this would have happened regardless, but it was interesting to note that making a testability/clean-code decision and sticking to my guns teased out a macroscopic design improvement.

By

Announcing Disqus for DaedTech

I just wanted to post a note to say that I’m switching from the default WordPress commenting system to using Disqus. So, if you’ve made comments in the past and don’t see them now, that’s because Disqus is in the process of importing all non-Disqus comments stored on my server. Hopefully comments will re-appear at some point in the next day or so.

If you have any problems with the new system, please feel free to email me or to post a comment here.

Thanks!

By

JUnit for C# Developers 7 – Law of Demeter and Temporal Mocking

Last time in this series, I posted about a good reminder of a couple of principles of good object-oriented design (respecting the Law of Demeter and avoiding static method invocations as much as possible). Today, I’m going to consciously double back on this a bit to explore some more possibilities in JUnit. Don’t worry – I will fix the design in subsequent posts.

Goals

Today, I’d like to accomplish the following:

  1. Have a mock change with each invocation
  2. Mock a Law of Demeter violation in as little code as possible

To the Code!

If you’ve followed my last few posts, you’ve noticed that I set up MongoDB. So, logically, the next step is connecting to it from my application, and the next step after that is mocking this connection so that I can unit test the logic (well, since I’m following TDD, technically the mocking comes first). Through trial and error in a throwaway piece of code, I discovered that I could access my database like so:

public class HouseServiceMongoImpl implements HouseService {

	private Mongo _mongoConnection;
	
	private DB _database;
	
	public HouseServiceMongoImpl() {
		try {
			_mongoConnection = new Mongo( "192.168.2.191" , 27017 );
			_database = _mongoConnection.getDB("daedalus");
		} catch (UnknownHostException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		} catch (MongoException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}
	}

	@Override
	public House getHouse() {
		DBCollection myCollection = _database.getCollection("house");
		DBCursor myCursor = myCollection.find();
				
		List<String> myStrings = new ArrayList<String>();
		
		while(myCursor.hasNext()) {
			myStrings.add(myCursor.next().get("room").toString());
		}
		return new HouseModel(myStrings);
	}
	
}

Notice here that I’ve inlined the instantiation of my dependencies in a form that’s bad for testability. I won’t repeat this mistake in the actual, TDD version. This was just for me to see what actually worked and to provide some context for what I’m trying to do. In my real class, RoomServiceMongoImpl, I’m going to be returning a collection of room objects rather than a house object. So, without further ado, on to the TDD. To get things started, I wrote the following test:

public class RoomServiceMongoImplTest {

	private static RoomServiceMongoImpl BuildTarget(DB database) {
		
		return new RoomServiceMongoImpl(database);
		
	}
		
	public static class getAllRooms {
		
		/*
		 * If the driver gives us back nothing for this collection, return an empty collection to clients
		 */
		@Test
		public void returns_empty_list_when_database_get_collection_returns_null_for_house() {
			
			DB myMockDatabase = mock(DB.class);
			Mockito.when(myMockDatabase.getCollection(Mockito.matches("house"))).thenReturn(null);
			RoomServiceMongoImpl myService = BuildTarget(myMockDatabase);
			
			assertEquals(0, myService.getAllRooms().size());
		}
	}
}

This allowed me to create the room service and inject the DB object into it, since, as far as I can tell, I gain nothing by injecting the Mongo object. So, my first test is that we get back a null object in the form of an empty list when the MongoDB collection requested comes back null. This makes sense, as it means we have no house (and thus no rooms).
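The simplest thing that passes this first test (reconstructed here, since I only show a later version of the class below) is just to hand back an empty list:

@Override
public Collection<Room> getAllRooms() {
	// No collection means no house and no rooms, so an empty list satisfies the first test.
	return new ArrayList<Room>();
}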

When doing TDD, I’ve gotten in the habit of teasing out any control flow by varying the return value of the method. So, the first test returns an empty collection. The next test will return a count of one. The next will return a bunch. The one after that may go back to zero, but under a different set of circumstances. This sort of progression allows me to make progress while covering all my bases. So, here, I’m going to do the setup necessary for a list with a single item to be returned.

But… as it turns out, this isn’t trivial. From my database, I’m getting a collection, which I’m asking for a database cursor, which I’m asking in a loop for an object called “next”, which I’m then asking for my actual room’s string value (its name). This has the Law-of-Demeter-violating form db.getCollection().find().next().get(string). Ugh. That’s a lot of mocking, and having to create mocks this way is the smell of LOD violations in your code. But, let’s suck it up for now and create the mocks:

@Test
public void returns_list_with_one_item_when_db_returns_matching_cursor() {

	DBObject myMockDatabaseObject = mock(DBObject.class);
	Mockito.when(myMockDatabaseObject.get(Mockito.anyString())).thenReturn("asdf");
	
	DBCursor myMockCursor = mock(DBCursor.class);
	Mockito.when(myMockCursor.next()).thenReturn(myMockDatabaseObject);
			
	DBCollection myMockCollection = PowerMockito.mock(DBCollection.class);
	Mockito.when(myMockCollection.find()).thenReturn(myMockCursor);
	
	DB myMockDatabase = mock(DB.class);
	Mockito.when(myMockDatabase.getCollection(Mockito.matches("house"))).thenReturn(myMockCollection);
	
	RoomServiceMongoImpl myService = BuildTarget(myMockDatabase);
	
	assertEquals(1, myService.getAllRooms().size());
}

This was particularly annoying to write, since it cost me a good bit of frustration trying to figure out why I was getting cryptic error messages about the wrong return type. It turns out that the collection’s find() method is final, requiring me to use PowerMockito (this Stack Overflow post ended my suffering, and I voted the poster up both for the question and his self-answer as my way of saying thanks). But I was done, and the following class made the test pass:

public class RoomServiceMongoImpl implements RoomService {

	/**
	 * This is the MongoDB database object we'll use for our queries
	 */
	private DB _database;
	
	/**
	 * Dependency injected constructor
	 * @param database
	 */
	public RoomServiceMongoImpl(DB database) {
		_database = database;
	}

	@Override
	public Collection<Room> getAllRooms() {
		Collection<Room> myRooms = new ArrayList<Room>();
		
		if(_database.getCollection("house") != null && _database.getCollection("house").find().next().get("bkag") != null) {
			myRooms.add(new RoomModel("blah", null, 'A'));
		}
		return myRooms;
	}
}

Hideous, with all of those LOD violations, but functional. So now, the next step is to factor the code toward something less obtuse with my TDD, and an excellent way to do this is the case of two strings in the database. But, in order to make that happen, the cursor mock’s next() method has to first return a DBObject and then return null. It just so happens that this is goal number one, and it can be achieved as follows:

DBCursor myMockCursor = mock(DBCursor.class);
Mockito.when(myMockCursor.next()).thenReturn(myMockDatabaseObject).thenReturn(myMockDatabaseObject).thenReturn(null);
Mockito.when(myMockCursor.hasNext()).thenReturn(true).thenReturn(true).thenReturn(false);

Note the chained thenReturn() calls. I took what I had in the previous test and turned it into this code, and then amended the expected size of the returned collection to 2, using hasNext() and next(). I also had to go back and add similar code to the previous test for completeness’ sake. With this in place, the CUT’s getAllRooms() method became:

@Override
public Collection<Room> getAllRooms() {
	Collection<Room> myRooms = new ArrayList<Room>();
	DBCollection myCollection = _database.getCollection("house");
	if(myCollection != null) {
		DBCursor myCursor = myCollection.find();
		if(myCursor != null) {
			while(myCursor.hasNext()) {
				myRooms.add(new RoomModel(myCursor.next().get("asdf").toString(), null, 'A'));
			}
		}
	}
	return myRooms;
}

Man, that’s ugly. But it works, and that’s what matters. We’re going to pretty this up and make it more robust later, during the “refactor” cycles of red-green-refactor. So, on to goal number two, which is to compact the mock setup code somewhat. Granted, the pain of this setup indicates a design smell, and there is a good argument that we should not deodorize these test smells. But, given that this is partially an endeavor for me to learn and blog about the mocking tools available with JUnit, I’m going to make an exception, with a promise to myself that I will pay off this loan of technical debt later by doing what I can to cut down on the database boilerplate I don’t need in this class. I came up with this:

@Test
public void returns_list_with_one_item_when_db_returns_matching_cursor() {

	DBObject myMockDatabaseObject = mock(DBObject.class);
	Mockito.when(myMockDatabaseObject.get(Mockito.anyString())).thenReturn("asdf");
			
	DB myMockDatabase = mock(DB.class);
	
	Mockito.when(myMockDatabase.getCollection(Mockito.matches("house"))).thenReturn(PowerMockito.mock(DBCollection.class));
	Mockito.when(myMockDatabase.getCollection(Mockito.matches("house")).find()).thenReturn(mock(DBCursor.class));
	Mockito.when(myMockDatabase.getCollection(Mockito.matches("house")).find().next()).thenReturn(myMockDatabaseObject).thenReturn(null);
	Mockito.when(myMockDatabase.getCollection(Mockito.matches("house")).find().hasNext()).thenReturn(true).thenReturn(false);
	
	RoomServiceMongoImpl myService = BuildTarget(myMockDatabase);
	
	assertEquals(1, myService.getAllRooms().size());
}

But it throws a null reference exception on the first find() setup (the second Mockito.when() line), and it doesn’t really save any code anyway, so I see no advantage to going this route. My instincts were against it anyway, and since it doesn’t work as fluidly as I hoped it might (I feel like I could probably get this fluent style to work with enough persistence) and doesn’t save us code, forget it. Nothing ventured, nothing gained. I’ll have to be satisfied that this fluent chaining appears possible, but it might be a stone better left unturned in favor of going back and cleaning up my code by eliminating LOD violations and generally seeking not to have them.
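As an aside, Mockito does ship with a built-in answer, RETURNS_DEEP_STUBS, that aims at exactly this fluent style by auto-creating a mock for each link in the chain. I haven’t tried it against this code, and I suspect the final find() method would still trip it up without PowerMock, but for ordinary (non-final) chains the shape is roughly:

// Deep stubs: Mockito creates intermediate mocks for each call in the chain.
// Untested here, and final methods like DBCollection.find() would likely still
// need PowerMock, so treat this as a sketch rather than a drop-in fix.
DB myMockDatabase = mock(DB.class, Mockito.RETURNS_DEEP_STUBS);
Mockito.when(myMockDatabase.getCollection("house").find().hasNext())
		.thenReturn(true).thenReturn(false);
Mockito.when(myMockDatabase.getCollection("house").find().next())
		.thenReturn(myMockDatabaseObject).thenReturn(null);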

These posts are getting a little fewer and further between now as I’m less frequently blazing new trails in my work. I will probably check back in on this line of posts when I factor the design of this a bit, as kind of a MongoDB/JUnit fusion post.

By

Home Automation: The Socket Rocket

Introducing the Socket Rocket

In the last post of this series, I covered adding a wall switch to a home automation network. Today, I’m going to talk about adding the “socket rocket”, or LM15A. It is commonly known as “Socket Rocket” presumably due to it fitting in a light bulb socket and I guess vaguely looking like a rocket or something. As you can probably deduce from its image, the socket rocket works by having a light bulb screwed into it and then being screwed into a normal light socket itself.

The socket rocket confers a few benefits on users. First, unlike the lamp module we covered in the first post, and like the switch we covered in the most recent post, the socket rocket does not announce that it’s working with a loud clicking sound. It turns its bulb on and off silently. Secondly, the socket rocket will work regardless of whether a lamp is controlled by the lamp’s switch, a wall switch, or both. This makes it quite versatile and extremely easy to install, requiring neither replacement of a wall switch, nor even locating the outlet into which a light is plugged. A third advantage is that a series of socket rockets on multi-light lamps allows more granular control over the lights than is possible without home automation. For example, if you had an overhead bedroom lamp with three light bulbs, you could use socket rockets to have a setting where two of the bulbs were off and one was on. And, finally, the socket rocket can allow control over lights that are generally hard or annoying to reach. This may seem obvious, but an interesting possibility opened up here is the ability to place a lamp in some hard-to-reach location (up high or on a porch, for example) and control it remotely.

On the flip side, the socket rocket also has a few downsides as compared to the switch and the lamp module. Unlike these modules, it does not work in conjunction with manual operation. If you unplug or manually turn off a lamp into which a socket rocket is plugged, the socket rocket will forget what on/off state it was in and revert to off. So, the socket rocket should only be used in lamps that will exclusively be controlled by the home automation network. Secondly, the socket rocket does not allow dimming, nor does it respond to dimming commands (though this does allow for the not-officially-approved but interesting ability to use the socket rocket with CFL bulbs). And, third, the socket rocket is not programmable with the manual dials that I’ve discussed before — I will cover how to program it below.

Setting up the Socket Rocket

So, assuming that you’ve read through the pros and cons and decided that you have use for a socket rocket, let’s buy one and set it up. For this and other home automation needs, I usually search ebay for sales from home automation stores. At the time of writing, socket rockets typically run around $20, though I have seen them for as little as $10 at times. Personally, I tend to buy them from ebay stores — people who sell X10 equipment as part of a commercial business. This generally ensures smooth, easy transactions and properly working devices. However, you can probably get them cheapest if you buy them used from other non-store ebay users. My preference may not be the same as yours.

When the socket rocket arrives, it is time for you to program it. The first thing that you want to do is set the house and unit code of the socket rocket. As I mentioned earlier, this won’t be accomplished the way you did it with the lamp module and wall switch because the socket rocket has no dials. Instead, you’re going to use your keychain remote. To complete the programming task, you’re going to need your keychain remote, your lamp transceiver module, a spare lamp (with bulb), and your socket rocket.

Plug the socket rocket into your lamp and the bulb into the socket rocket. Make sure your lamp is turned OFF, and plug it into the same wall outlet as your transceiver module. You can plug it in wherever you have the transceiver module set up already, or you can move them both to some spare outlet pair. This is all just temporary for the setting of the house and unit code. Now, decide which house code (letter) or unit code (1 or 9) you want the socket rocket lamp on. It defaults to A1, so let’s choose A9 to make sure we’re programming it correctly. Once the lamp and transceiver are both plugged in, turn the lamp on (the bulb will not light) and, within thirty seconds, click the right “on” on the remote three times. The socket rocket bulb should light up, which tells you that it has been properly programmed.

The theory behind the scenes is that whenever the socket rocket first receives power, it goes into “programming mode” and sets itself to any house and unit code combination it receives an on signal for three times. So, plugging it in and hitting “on” from the remote corresponding to A9 three times tells the socket rocket that you want to program it for A9. From here forward, the socket rocket will respond to A9 when you click it on or off.

Operation

As I mentioned briefly earlier, the socket rocket should only be controlled from your keychain. If you unplug the lamp or turn it off manually with its switch, then when you turn it back on, the socket rocket will not remember that it was on last time, and you’ll have to turn it on again. So, let’s say that you plug the socket rocket into a light in your closet operated by a pull cord and turn it on with your keychain. If you turn it off by pulling the pull cord instead of with the remote, you will not be able to turn the lamp back on with your remote. And, when you go to turn it back on by pulling the cord, it will not illuminate, because the socket rocket has forgotten that you wanted it on (it will briefly illuminate to show you it has power and then turn off). You will have to pull the cord and then turn it on with the keychain. Because of how confusing this is, I strongly recommend using the socket rocket only on a light that you never turn on or off any other way.

When bulbs burn out, you can replace them by screwing them into and out of the socket rocket while leaving it in place. There is no need to do anything with it at that time. If you want to reprogram it, simply repeat the original programming process but with a different house and/or unit code.
