DaedTech

Stories about Software

Bridge

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Somewhat complex to grasp, moderate to implement.

Up Front Definitions

There are no special definitions to cover up front; any terms I need are defined inline.

The Problem

The iconic example of a situation in which the Bridge pattern is applicable is modeling a wall switch and the thing that it controls. Let’s say that you start out with a set of requirements that says users want to be able to turn an overhead light on and off. Skipping right to implementation, you do something like this:
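That first cut might look something like this minimal sketch (the text confirms the “FlipSwitch” name; the class and property names are my assumptions):

```csharp
// A guess at that first implementation: the lamp and its switch are one class.
public class OverheadLamp
{
    // Whether the lamp is currently on.
    public bool IsOn { get; private set; }

    // Flipping the switch just toggles the lamp.
    public void FlipSwitch()
    {
        IsOn = !IsOn;
    }
}
```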

This is fine, and you deliver it to production, and everyone is happy. But in the next iteration, a new requirement comes in that users want to be able to use a rocker switch or a push-button switch. And just as you’re getting ready to implement that, you’re also told that you need to implement a rotary switch (like a door deadbolt, you turn this to two different positions). Well, that’s fine, because you have just the trick up your sleeve in a polymorphic language: the interface!
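A sketch of that refactoring follows; the rename to “OperateSwitch()” is from the text, but the interface and concrete class names are my assumptions:

```csharp
// The extracted interface: clients no longer care how the switch is actuated.
public interface IOverheadLamp
{
    bool IsOn { get; }

    // Renamed from FlipSwitch() -- "operate" covers rockers, push-buttons,
    // and rotary switches alike.
    void OperateSwitch();
}

public class RockerSwitchOverheadLamp : IOverheadLamp
{
    public bool IsOn { get; private set; }
    public void OperateSwitch() { IsOn = !IsOn; }
}

public class PushButtonOverheadLamp : IOverheadLamp
{
    public bool IsOn { get; private set; }
    public void OperateSwitch() { IsOn = !IsOn; }
}

public class RotarySwitchOverheadLamp : IOverheadLamp
{
    public bool IsOn { get; private set; }
    public void OperateSwitch() { IsOn = !IsOn; }
}
```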

Notice the change in “FlipSwitch” to the more abstract “OperateSwitch()”. This allows for implementations where the switch is not “flipped,” such as push-button. (I’m not really a stickler for English semantics, but I suppose it’s debatable whether or not a rotary switch’s operation would be a “flip.”)

Now you’re all set. Not only does your overhead lamp operate with several different kinds of switches, but you’re following the Open/Closed Principle. Marketing can come in and demand a toggle switch, a fancy touchpad, or even a contraption that you smack with a hammer, and you handle it by writing and unit-testing a new class. Everything looks good.

Except in the next iteration, those crafty marketing people realize that a switch could operate other kinds of electronics, like fans, appliances, and space heaters. So they tell you that they want the switch now to be able to turn on and off computers as well as overhead lamps. That’s a challenge, but you’re up to it. You have your interface, so you’ll just add some more polymorphs and refine the abstraction a bit:
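Something like the following sketch, showing just two of the six resulting classes (the rename to ISwitchableAppliance is from the text; the rest is assumed):

```csharp
// The renamed, more general interface.
public interface ISwitchableAppliance
{
    bool IsOn { get; }
    void OperateSwitch();
}

public class RockerSwitchOverheadLamp : ISwitchableAppliance
{
    public bool IsOn { get; private set; }
    public void OperateSwitch() { IsOn = !IsOn; }
}

// ...and a copy-pasted twin of each lamp class, with "lamp" changed
// to "computer":
public class RockerSwitchComputer : ISwitchableAppliance
{
    public bool IsOn { get; private set; }
    public void OperateSwitch() { IsOn = !IsOn; }
}

// PushButtonOverheadLamp, PushButtonComputer, RotarySwitchOverheadLamp,
// RotarySwitchComputer... you get the idea.
```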

Alright, that code is going to work, but you’re a little leery of the fact that, after renaming IOverheadLamp to ISwitchableAppliance, the new classes you created were simply a result of copying and pasting the lamp classes and changing “lamp” to “computer.” You should be leery. That’s a design/code smell: duplication–don’t repeat yourself! But whatever–you’re behind schedule and people are giving you a hard time. You can refactor it later.

Now the next iteration starts, and marketing is wildly pleased with your computer/light switch. They want to be able to control any sort of appliance that you can imagine–the aforementioned space heaters, ceiling fans, refrigerators, plug-in light sabers, whatever. Oh, and by the way, computers don’t necessarily turn on when you flip the switch for the outlet that they’re plugged into, so can you have the computer not turn on when you turn the switch on, but turn off when you turn the switch off? Oh, and remember that hammer-operated switch? They want that too.

Well, you’re hosed. You realize that you’re going to have to copy and paste your three (four, with the hammer thing) classes dozens or hundreds of times, and you’re going to have to change the behavior of some of them. All of the computer ones, for instance, don’t automatically turn on when you activate the switch. But clients of your code don’t know that, so they’re now going to have to try to cast the ISwitchableAppliance to one of the classes ending in Computer to account for its special behavior. This just got ugly, and fast.

So, What to Do?

In terms of realizing what you should have done or what can be done, the first important thing to realize is that you’re dealing with two distinct concepts rather than just one. Consider the following two diagrams:

[Diagram] Before: our current object model

[Diagram] After: another way of looking at things

The first diagram is what we were doing. Conceptually, every appliance consists not only of the appliance itself, but also of the switch that turns it on or off (or does something to it, more generally). In the second diagram, we’ve separated those concerns, realizing that the appliance itself and the switching mechanism are two separate entities capable of varying independently.

At this point, I’ll introduce the Bridge pattern in earnest, in a UML diagram, compliments of Wikipedia:

[Bridge pattern UML diagram, via Wikipedia]

In looking at this version of it, we see some new concepts introduced: abstraction and implementor. I’ll explain the theory behind this in the next section, but suffice it to say for now that our abstraction is the switch and our implementor is the appliance. As the diagram depicts, our clients are going to deal with the abstraction, and they’re going to do so by handing it a reference to the implementor that they want. More concretely, our clients are going to instantiate some switch and tell it what device they want it to control.

By way of code, let’s consider what we had before marketing buried us with the latest barrage of requirements–three switch types and two appliances. The only new requirement that we’ll consider up front is the one suggesting that computers need to behave differently than other appliances when the switch controlling them is toggled. First, we’ll define the API for the switch (abstraction) and the appliance (implementation):
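A sketch of those two interfaces, pieced together from the description below (the text names IsOn, HasPower, TogglePower(), ToggleOn(), and OperateSwitch(); the Appliance property name and the exact signatures are my assumptions):

```csharp
// The abstraction: a switch that operates on some appliance.
public interface ISwitch
{
    // The appliance this switch controls -- the "bridge" to the implementor.
    IAppliance Appliance { get; set; }

    // Whether the switch is in its "on" position.
    bool IsOn { get; }

    // Operate the switch (flip, push, rotate, etc.).
    void OperateSwitch();
}

// The implementor: an appliance that can have power supplied or removed.
public interface IAppliance
{
    // Whether the appliance currently has power.
    bool HasPower { get; }

    // Whether the appliance is actually on (distinct from having power).
    bool IsOn { get; }

    // Supply or remove power -- this is all a switch knows how to do.
    void TogglePower();

    // Turn the appliance itself on or off (used by clients other than switches).
    void ToggleOn();
}
```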

Notice that because our implementation up until this point has been pretty simple, the interfaces look almost identical (minus the HasPower, TogglePower() in IAppliance, which we’ve added with our new requirement in mind). Both the appliance and the switch have the concept of on and off, more or less. This similarity was what allowed us, up until now, to operate under the (faulty) assumption that these two concepts could easily be combined. But we ran into difficulty when we saw that the abstraction and the implementation needed to vary separately (more on this later).

Where we got into trouble was with the concept of toggle power and distinguishing between having power and being on. That makes sense for an appliance (is it turned off or unplugged?) but not for a switch, which only knows that it has two positions and that it’s in one of them. So, with the introduction of this new requirement, we can no longer operate under even the pretense that mashing these two concepts together is a reasonable thing to do.

The key thing to notice here is that ISwitch has a reference to an IAppliance. This means that clients instantiating a switch can hand it an appliance on which to operate. But we’ll look at client code later.

Now let’s consider the implementors of IAppliance:
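Here is a sketch of what those implementors might look like; the behavioral difference in TogglePower() is the point, and the rest is assumed detail (the interface is repeated compactly so the sketch stands alone):

```csharp
// (IAppliance as described above, repeated so this compiles on its own.)
public interface IAppliance { bool HasPower { get; } bool IsOn { get; } void TogglePower(); void ToggleOn(); }

public class OverheadLamp : IAppliance
{
    public bool HasPower { get; private set; }
    public bool IsOn { get; private set; }

    // For a lamp, supplying power turns it on and removing power turns it off.
    public void TogglePower()
    {
        HasPower = !HasPower;
        IsOn = HasPower;
    }

    public void ToggleOn()
    {
        if (HasPower)
            IsOn = !IsOn;
    }
}

public class Computer : IAppliance
{
    public bool HasPower { get; private set; }
    public bool IsOn { get; private set; }

    // For a computer, removing power turns it off, but supplying power
    // does not turn it on -- some other client has to do that.
    public void TogglePower()
    {
        HasPower = !HasPower;
        if (!HasPower)
            IsOn = false;
    }

    public void ToggleOn()
    {
        if (HasPower)
            IsOn = !IsOn;
    }
}
```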

Here, notice the distinction in the behavior of TogglePower(). For lamps, supplying or removing power supplies and removes power but also turns the lamp on and off, respectively. For computers, removing power turns the computer off, but supplying power does not turn it on. (Some other client of the API will have to do that manually, a la real life.) Now that we’ve decoupled the actual implementation of the concept of appliances being on and off from the abstraction of initiating that implementation, our appliances can change how they behave when power is supplied or removed. We could introduce a new appliance, “BatteryPoweredAlarmClock,” that didn’t turn off when power was cut (or, more accurately, kicked off some long running timer that would turn it off at some point later).

Here are the implementations of the ISwitch interface:
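A sketch of those implementors; the per-switch state (up or not up, rotary position) is from the text, while the details are my assumptions (interfaces repeated compactly so the sketch stands alone):

```csharp
// (Interfaces as described earlier, repeated so this compiles on its own.)
public interface IAppliance { bool HasPower { get; } bool IsOn { get; } void TogglePower(); void ToggleOn(); }
public interface ISwitch { IAppliance Appliance { get; set; } bool IsOn { get; } void OperateSwitch(); }

// Each switch keeps its own unique state, but they all do the same thing
// to the appliance: toggle its power.
public class Rocker : ISwitch
{
    public IAppliance Appliance { get; set; }
    public bool IsOn { get; private set; }
    public bool IsUp { get; private set; }

    public void OperateSwitch()
    {
        IsUp = !IsUp;
        IsOn = IsUp;
        if (Appliance != null)
            Appliance.TogglePower();
    }
}

public class PushButton : ISwitch
{
    public IAppliance Appliance { get; set; }
    public bool IsOn { get; private set; }

    public void OperateSwitch()
    {
        IsOn = !IsOn;
        if (Appliance != null)
            Appliance.TogglePower();
    }
}

public class Rotary : ISwitch
{
    public IAppliance Appliance { get; set; }
    public bool IsOn { get; private set; }
    public int Position { get; private set; }

    public void OperateSwitch()
    {
        Position = (Position + 1) % 2;
        IsOn = Position == 1;
        if (Appliance != null)
            Appliance.TogglePower();
    }
}
```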

Notice that the individual implementors still retain their own unique properties. Rocker still has up or not up, and Rotary still has its position. But the things that they all share are implementations of the OperateSwitch() method, the IsOn property (it might be more accurate to rename this “IsInOnPosition” to avoid confusion with the appliance On/Off state, but I already typed up the code examples), and the IAppliance property. In addition, all of them operate on the appliance’s TogglePower() method.

This last distinction is important. Switches, in concept, can only supply power to an appliance or take it away–they don’t actually switch it on and off. It is up to the appliance to determine how it behaves when power is supplied or removed. As this implementation is continued, it is important to remember this distinction. I could have omitted ToggleOn() from the appliances’ interface if this code were in a vacuum because the switch has no use for it. However, assuming that we’re modeling something a little broader (like, say, my home automation pet project), we clearly want people in houses to be able to turn on their computers and televisions. The switch is unlikely to be the only consumer of IAppliance.

Finally, let’s consider how we would use this thing:
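A sketch of that client code, matching the description in the next paragraph (the method and class names are my inventions; abbreviated re-declarations of the earlier types are included so the sketch stands alone):

```csharp
// (Abbreviated versions of the earlier types.)
public interface IAppliance { bool HasPower { get; } bool IsOn { get; } void TogglePower(); void ToggleOn(); }
public interface ISwitch { IAppliance Appliance { get; set; } bool IsOn { get; } void OperateSwitch(); }

public class Computer : IAppliance
{
    public bool HasPower { get; private set; }
    public bool IsOn { get; private set; }
    public void TogglePower() { HasPower = !HasPower; if (!HasPower) IsOn = false; }
    public void ToggleOn() { if (HasPower) IsOn = !IsOn; }
}

public class Rocker : ISwitch
{
    public IAppliance Appliance { get; set; }
    public bool IsOn { get; private set; }
    public void OperateSwitch()
    {
        IsOn = !IsOn;
        if (Appliance != null)
            Appliance.TogglePower();
    }
}

// The client: hand any switch a computer, operate it, and press the power button.
public static class SwitchDemo
{
    public static Computer TurnOnComputer(ISwitch theSwitch)
    {
        var computer = new Computer();
        theSwitch.Appliance = computer;

        theSwitch.OperateSwitch(); // supplies power...
        computer.ToggleOn();       // ...but someone still has to press the power button

        return computer;
    }
}
```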

Here’s an example application, cutting out any considerations like factory classes or methods. We have some method that takes a switch as input and returns a computer in the state the method put it in. Conceptually, this is pretty simple–we’ve done all the hard work. You just take your switch, set it to operate some appliance, and then operate away.

A More Official Explanation

Earlier, I mentioned the notion of decoupling an abstraction from its implementation. This is the backbone of this pattern, but it might be a little confusing in terms of what it’s actually trying to communicate. I’d imagine some readers will think, “Isn’t decoupling an abstraction from its implementation what an interface contract does? Why the separate pattern?”

To answer that first question, I’ll say, “yes.” Defining an interface says, “Any implementors of this interface will define a method that takes these parameters and returns this type of value, and the details are up to the implementor to sort out. The client doesn’t care how–just get it done.” So, in a manner of speaking, the method signature is the abstraction and the implementation is, well, the implementation.

The problem is that this is a code abstraction rather than a modeled abstraction. That is, a method signature is a contract between one developer and another and not between two different objects. A switch isn’t an interface to a device in C#–it’s an interface to a device in the real world. A switch has its own properties, operations, and state that needs to be modeled. It can’t be reduced in code to a method signature.

So what are we getting at when we say that we want to decouple an abstraction from an implementation? Generally speaking, we’re saying that we want to decouple a thing of some sort from an operation performed on that thing. In our example, the thing is an appliance, and the operations performed on it are supplying and removing power. The switch (abstraction) is a separate object with its own properties that needs to be modeled. And what’s more, we can have different kinds of switches, so long as all of the switches perform the needed operation on the appliance.

In general, the Bridge pattern represents a scenario like a simple sentence with subject, verb, object: Erik eats apple. We could code up Erik and code up an apple. But then maybe Erik eats orange. So we define a fruit base class or interface and model the world with “Erik eats fruit.” But then maybe Joe also eats fruit, so we need to define a person class and further generalize to “Person eats fruit.” In this case, our abstraction is person, and our implementation is fruit. Person has a fruit property and performs “Eat” on it. The one thing that never changed as we continued generalizing was “eat”–it was always “A eats B.” Going back to our switch/appliance paradigm, we notice the same thing: “A toggles power to B.”

The decoupling allows us to have different subjects and for different objects to behave differently during the course of the verb operation. So if I use the Bridge pattern to model “Person eats Fruit,” it isn’t hard to specify that some fruits leave a pit or a core and some don’t following the eat operation. It isn’t hard to have some people get indigestion as a result of eating fruit. And neither one of those new requirements merits a change to the other one. Fruit doesn’t change if it gives people indigestion, and a person isn’t changed when a piece of fruit they eat has a pit.
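The “Person eats Fruit” shape can be sketched in miniature, too; all of these names are illustrative rather than from any real codebase:

```csharp
// The implementor: each fruit decides what the eat operation leaves behind.
public interface IFruit
{
    string BeEaten();
}

public class Apple : IFruit
{
    public string BeEaten() { return "core"; }
}

public class Peach : IFruit
{
    public string BeEaten() { return "pit"; }
}

// The abstraction: a person holds a fruit and performs the unchanging verb.
public class Person
{
    public IFruit Fruit { get; set; }

    public string Eat() { return Fruit.BeEaten(); }
}
```

Adding a fruit that leaves nothing behind, or a person who gets indigestion, touches only one side of the bridge.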

Other Quick Examples

The Bridge pattern has as many uses as you can conceive of “A verb B” pairs that might have some variance in the A and B, so I’ll list a handful that lend themselves well to the pattern.

  1. You have images stored in different formats on disk (bmp, jpg, png, etc) and you also have different ways of rendering images (grayscale, inverted, etc)
  2. You’re performing a file operation on a file that may be a Windows, Mac, or Linux file
  3. You have different types of customers that can place different types of orders
  4. You have a GUI that displays buttons, text boxes, and labels differently depending on different user themes

A Good Fit–When to Use

The Bridge Pattern makes sense to use when you have two objects participating in an action and the mechanics of that action will have different ramifications for different types of the participating objects. But, beyond that simple distinction, it makes sense when you are likely to need to add participants. In our example, different appliances behaved differently when power was supplied or removed, and different switches had different behaviors and properties surrounding their operation. Additionally, it seemed pretty likely after the first round or two of marketing requests that we’d probably be adding more “subjects” and “objects” in perpetuity.

This is where the Bridge pattern really shines. It creates a situation where those types of requirements changes mean adding a class rather than changing existing ones. And it obviates duplicating code with copy and paste.

I’d summarize here by saying that the Bridge pattern is a good fit when you are modeling “A action B,” when A and B vary in how the action affects each one, and when you find that coupling A and B together will result in duplication. Conversely, it might be a good pattern to look at when you’re faced with the prospect of a combinatorial explosion of implementations as requirements change. That is, you can tell that A and B should be decoupled if you find yourself with classes like A1B1, A1B2, A2B1, A2B2.

Square Peg, Round Hole–When Not to Use

Don’t use this pattern if A and B are really appropriately coupled. If, in your object model, switches were actually wired to appliances, our effort would be unnecessary. Conceptually, it wouldn’t be possible to use one appliance’s switch on another–each appliance would come with one. So you define the different types of switches, give each appliance a concrete switch, and define “TurnOn” and “TurnOff” as methods on the appliance. The Bridge pattern is meant to be used when real variance occurs between the actors involved, not to be used any time one thing performs an operation on another.

There’s always YAGNI to consider as well. If the requirements had stayed as they were early in our example–we were only interested in lights–the pattern would be overkill. If you’re writing a utility specifically to model the overhead lights in a house, why define IAppliance and other appliances only to let them languish as dead code? Apply the bridge when you start getting actual variance in both objects, not when you just think it might happen at some point. In the simplest application, having to supply Switch with your only appliance, “OverheadLamp,” is wasteful and confusing.

Finally, Bridge has a curious relationship with Adapter, which I covered earlier. Adapter and Bridge have conceptual similarities in that they both link two objects and allow them to vary independently. However, Adapter is a retrofit hack used when your hands are tied, and Bridge is something that you plan when you control everything. So, don’t use (or try to use) Bridge when you don’t control one of the participant hierarchies. If, say, “RockerSwitch” et al. were in some library that you didn’t control, there’s no point bothering to try a bridge. You’d need to adapt, rather than bridge, the switches to work in your implementation.

So What? Why is this Better?

So, why do all this? We’ve satisfied the requirement about computers behaving differently when the switch is flipped, but was it worth it? It sure was. Consider how the new requirements will now be implemented. Marketing wants 100 new appliances and a new switch. Sure, it’s a lot of work: we have to code up 101 new classes (100 for the appliances and 1 for the switch). But in the old, mash-up way, we’d need a class for every switch/appliance combination. With 102 appliances and 4 switch types, that’s 408 combinations, and since we’d already written 6 of them, we’d be adding 402 new classes, nearly all of them copy and paste jobs. And what if we made a mistake in a few of the appliance classes? Well, we’d have to correct each mistake 4 times, once per switch type, because of all the copy/pasted code. So, even if the idea of not duplicating your work doesn’t sell you, the additional development and maintenance should.

WPF Combo Boxes

I thought I’d put together a quick post today on something that annoyed me and that I found unintuitive: binding with the WPF combo box. I’m doing some development following MVVM, and a situation came up in which I had a view model for editing a conceptual object in my domain. Let’s call it a user for the sake of this post, since the actual work I’m doing is nominally proprietary.

So, let’s say that user has a first name and a last name and that user also has a role. Role is not an enum or a literal, but an actual, conceptual reference object. In the view model for a user edit screen, I was exposing a model of the user domain object for binding, and this model had properties like first name and last name editable via text box. I now wanted to add a combo box to allow for editing of the role by selecting one choice from a list of valid roles.

Forgetting the code for the presentation tier, I did this in the XAML:
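It looked roughly like this (since the actual work is proprietary, the binding paths here, Roles, UserModel, and Title, are stand-ins for the real names):

```xml
<ComboBox ItemsSource="{Binding Roles}"
          SelectedItem="{Binding UserModel.Role}"
          DisplayMemberPath="Title" />
```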

Now, I’ve had plenty of occasions where I’ve exposed a list from a view model and then a separate property from the view model called “Selected Item.” This paradigm above would faithfully set my SelectedItem property to one of the list members. But here, I’m doing something subtly different. The main object in the view model–its focus–is UserModel. That is, the point of this screen is to edit the user, not roles or any other peripheral data. So, what I’m actually doing here is trying to bind a reference in another object to one of the items in the list.

What I have above didn’t work. And after a good bit of reading and head scratching, I figured out why. SelectedItem tries to find whatever it’s bound to in the actual list. In the case where I have a list in my view model and want to pick one of the members, this is perfect. But in the case where the list of roles contains objects that are distinct from the user’s role reference, this doesn’t work. The reason it doesn’t work, I believe, is that SelectedItem operates on equality. So if I were to create a list of roles and then assign one of them to the user model object, everything would be fine, since object equals defaults to looking for identical references. But in this case, where the list of roles and the user’s role are created separately (this is done in the data layer in my application) and have nothing in common until I try to match my user role to the list of roles, reference equals fails.

As an experiment, I tried the following:
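The experiment was overriding Equals() on the role class so that two roles with the same ID compare as equal. A sketch, with Role’s members (Id, Title) inferred from the discussion below:

```csharp
public class Role
{
    public int Id { get; set; }
    public string Title { get; set; }

    // Two roles are "equal" if their database IDs match.
    public override bool Equals(object obj)
    {
        var other = obj as Role;
        return other != null && other.Id == Id;
    }

    // Overriding Equals means overriding GetHashCode to stay consistent.
    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}
```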

After I put this code in place, voila, success! Everything now worked. The combo box was now looking for matching IDs instead of reference equality. I packed up and went home (it was late when I was doing this). But on the drive home, I started to think about the fact that two roles having equal IDs doesn’t mean that they’re equal. ID is an artificial database construct that I hide from users for identifying roles. ID should be unique, and I have a bunch of unit tests that say that it is. But that doesn’t mean that equal IDs mean conceptual equality. What if I somehow had two roles with the same ID but different titles, like, say, if I were allowing the user to edit the role title? If for some reason I wanted to compare the edited value with the original using Equals(), I wouldn’t want the edited and original always to be equal simply because they shared an ID.

I figured I could amend the equals override, but I’m not big on adding code that I’m not actually using, and this is the only place I’m using Equals override. So I went back to the drawing board and read a bit more about the combo box. What I discovered was a couple of additional, rather unfortunately named properties: SelectedValue and SelectedValuePath. Here is what the amended working version looked like without overriding Equals:
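Roughly this, with the same stand-in binding paths as before:

```xml
<ComboBox ItemsSource="{Binding Roles}"
          SelectedValue="{Binding UserModel.RoleId}"
          SelectedValuePath="Id"
          DisplayMemberPath="Title" />
```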

ItemsSource is the same, but instead of a SelectedItem, I now have a “SelectedValue” and a “SelectedValuePath”. SelectedValue specifies the property on the binding target that holds the selection, and SelectedValuePath specifies which property of the members of ItemsSource should be matched against that value.

So what the above is really saying is “I have a list of Roles. The role that’s selected is going to be whichever role in the list has an ID property that matches UserModel’s role ID.” And by default, when you change which value is selected, the UserModel’s “RoleId” gets updated with the new selection.

This actually somewhat resembles what I remember from doing Spring and JSP many moons ago, but there’s a little too much rust and probably too many JDKs and whatnot released between then and now for me to know that it’s current. When you actually get down to the nitty gritty of what’s going on, it is intuitive here. I just think the control’s naming scheme is a bit confusing. I would prefer something that indicated the relationship between the item and the list.

But I suppose lack of familiarity always breeds confusion, which in turn breeds frustration. Maybe now that I took the time to understand the nitty gritty instead of just copying what had worked for me in previous implementations, I’ll warm up to the naming scheme.

Getting Started With Android Development

I posted some time back about developing for Android and getting set up with the Android SDK, the Eclipse plugin, and all that. For the last six months, I haven’t really had time to get back to that. But now I’m starting to delve into Android development in earnest, so this (and potentially other upcoming posts) is going to be about my experience starting to write an Android application. I think I can offer some interesting perspective here. I am an experienced software developer with breadth of experience as well as depth in some technologies, but I am completely new to Android SDK. Hopefully, my experiences and overcome frustrations can help people in a similar position. This also means that you’d be learning along with me–it’s entirely possible that some of the things I post may be wrong, incomplete, or misguided in the beginning.

This post kind of assumes that your knowledge is like mine. I have a good, if a bit rusty from a year and a half of disuse, working knowledge of Eclipse and J2EE development therein. I’m also familiar with web development and WPF, so the concept of object-oriented plumbing code with a declarative markup layout for the view is quite familiar to me.

Notes about Setup

Just as a bit of background, I do have some things set up that I’m not going to bother going through in this particular post. I have Eclipse installed and configured, running on a Windows XP Pro machine. I also have, at my disposal, a Samsung Epic 4G running Android 2.3. (I forget the name of the food that accompanies this version, and, to be perfectly honest, something strikes me as sort of lame about naming your releases after desserts. Different strokes, I guess.) I also have installed ADB and the drivers necessary for connecting my computer to my phone. And finally, I have the Android virtual machine emulator, though I think that just comes with the Eclipse SDK plugin or something. I don’t recall having to do anything to get that going.

Creating a new Android project

One of the things that’s difficult when you’re new to some kind of development framework is separating what actually matters to your basic activities and what doesn’t. Any framework-oriented development, in contrast to, say, writing a C file and compiling it from the command line with GCC, dumps a lot of boilerplate on you. It’s hard to sort out what actually matters at first, especially if your “training” consists of internet tutorials and trial and error. So I’m going to note here what actually turned out to matter in creating a small, functional app, and what didn’t so far.

When you create a new project, you get a “src” folder that will contain your Java classes. This is where you’re going to put a class that inherits from “Activity” in order to actually have an application. I’ll get to activities momentarily. There’s also a “gen” folder that contains auto-generated Java files. This is not important to worry about. Also not important so far are the “assets” folder and the Android 2.1-update1 folder containing the Android jar. (Clearly this is quite important from a logistical perspective, as the Android library is necessary to develop Android apps, but it makes no difference to what you’re actually doing.)

The res folder is where things get a little interesting. This is where all of the view layer stuff goes on. So if you’re a J2EE web developer, this is the equivalent of the folder with your JSPs. If you’re a WPF/Silverlight developer, this is the equivalent of a folder containing your XAML. I haven’t altered the given structure, and I wouldn’t suggest doing it. The layout subfolder is probably the most important, as this is where the actual view files defining UI components in XML go. In other subfolders, you’ll find places where your icon is defined and where there are centralized definitions for all display strings and attributes. (I haven’t figured out why it’s necessary to have some global cache of strings somewhere. Perhaps this is to take advantage of some kind of localization/globalization paradigm in Android, meaning you don’t have to translate yourself for multi-lingual support. Or maybe I’m just naively optimistic.)

The other thing of interest is the AndroidManifest.xml. This contains some application-wide settings that look important, like your application’s name and whatnot. The only thing that I’ve bothered so far to look at in here is the ability to add an attribute to the application element: android:debuggable="true". Apparently, that’s needed to test out your deployable on your device. I haven’t actually verified that by getting rid of the attribute, but I seem to recall reading that on the Android Dev forum.

Those are all of the basic components that you’ll be given. The way that Android development goes on the whole is that it is defined in terms of “Activities.” An activity can loosely be thought of as a “screen.” That is, a very basic application will (probably, unless it’s purely a background service) consist of one activity, but a more complex one is going to consist of several and perhaps other application components like services or content providers. Each activity that you define in your application will require a class that extends the “Activity” class and overrides, at least, the “onCreate(Bundle)” method. This is what you must supply to have a functioning application–at the very least, you must set your activity’s content.

To summarize, what you’re going to need to look at in order to create a hello world type of app on your phone is the Java file you’re given that inherits from activity, the main.xml file in layout, and the manifest. This provides everything you need to build and deploy your app. Now, the interesting question becomes “deploy it to where?”

Deployment – Emulator and Phone

I quickly learned that the device emulator is very, very slow. It takes minutes to load, boot, and install your deployable in the virtual environment. Now, don’t get me wrong, the VM is cool, but that’s fairly annoying because we’re not talking about a one-time overhead and quick deployment from there. It’s minutes every time.

Until they optimize that sucker a little, I’d suggest using your phone (or Android tablet, if applicable, but I’m only going to talk about the phone) if it’s even remotely convenient and assuming that you have one. As I discovered, when you run your Eclipse project as an Android app, assuming you’ve set everything up right, the time between clicking “run” and seeing it on your phone is a couple of seconds. This is a huge productivity improvement and I didn’t look back once I started doing this.

Well, let me qualify that slightly. The first time I did it, it was great. The second time I deployed, I got a series of error messages and a pop up asking me to pick which deployment environment I wanted: the emulator or my phone. I wanted my phone, but it was always shown as “offline.” To counter this problem, I discovered it was necessary to go on the device itself and, under “Settings,” set it never to sleep when connected. Apparently, the phone going to sleep sends the ADB driver into quite a tizzy. If you have hit this, just changing the setting on your phone won’t do the trick. You’ll need to go into the platform-tools directory of wherever you installed the Android SDK and run “adb.exe kill-server” followed by “adb.exe start-server”. For you web devs out there, think of this as clicking the little Tomcat “stop” icon and then the little Tomcat “start” icon. :)

Now with this set up, you should be able to repeatedly deploy, and it’s really quite impressive how fast this is considering that you’re pushing a package to another device. It’s honestly not noticeably different than building and running a desktop app. The server kill and start trick is useful to remember because there is occasional weirdness with the deployment. I should also mention a couple of other things that didn’t trip me up, but that was because I read about them in advance. To debug on your phone, the phone’s development settings need to be configured for it. In your phone’s settings, under “Applications,” you should check “Allow Installation of Non-Market Applications” and, under “Debugging,” check “USB Debugging”. (On my phone, this is also where you find “Stay Awake,” which caused the problem I mentioned earlier, but YMMV.)

Changing the Icon

One of the first things that you’ll notice is that your “Hello World” or whatever you’re doing deploys as the default little green Android guy. Personally, when I’m getting acquainted with something new, I like to learn about it by changing the most obvious and visible things, so very quickly I decided to see how changing the icon worked. In your “res” folder, there are three folders: “drawable-hdpi”, “drawable-ldpi”, and “drawable-mdpi”. A little googling showed me that these correspond to high, low, and medium resolution phones. Since Android developers, unlike their iOS counterparts, need to worry about multi-device support, they need to have a vehicle for providing different graphics for different phones.

However, at this point, I wouldn’t (and didn’t) worry about this. I took an image that I wanted to try out as my icon and put it into these folders, overwriting the default. Then, I built and I got some error about the image not being a PNG. Apparently, just renaming a JPG “whatever.png” isn’t enough to trick the SDK, so I opened it in MS Paint and did a “Save As,” selecting file type PNG. This did the trick. As best I can tell, your icon will be capped in size, so it’s better to err on the side of making it slightly too big.

Changing the App Name

When I set all this up last winter, I followed a tutorial that had me build an app called “SayHi”. I was trying to prove the concept of taking an Eclipse project and getting it to run something, anything, on my phone. As such, when I picked it back up and started playing with it, the app was still called “SayHi”. However, I don’t want this app to say hi. It’s actually going to be used to turn lights on and off in my house in conjunction with my home automation. So, I’d like to call it something catchy–something imaginative, you know, like “Light Controller.”

This is actually refreshingly easy for someone who has been working with Visual Studio and ClearCase–a tandem that makes renaming anything about as convenient as a trip to the DMV. Under “res->values,” open the “strings.xml” file. You’ll have tabs at the bottom to view this as raw XML or as a “Resources” file. Either way, the effect is the same. You change the “app_name” string to the value that you want, and that’s it. On the next deployment to your phone, you’ll see your app’s new name. Pretty cool, huh? Two easy changes, without any code and without an app that actually does anything, and it at least looks like a real app until you open it.
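For reference, the relevant part of strings.xml looks something like the following. (The exact contents will vary with your project; “Light Controller” here just stands in for whatever name you pick.)

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- The launcher displays this value under your icon -->
    <string name="app_name">Light Controller</string>
    <string name="hello">Hello World!</string>
</resources>
```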

At this point, I should probably mention something that may not be familiar to you if you’re just getting started. In Eclipse and with the Android SDK, you have various options for how you want to view the XML files. The manifest one seems to have a lot of options. The strings one has the XML versus resource choice. From what I recall, this is a feature of Eclipse in general–I believe plugins can supply their own view of various file extensions. If you want to see what all is theoretically available for any file, XML or not, right click on it and expand “Open With.” That’ll show you all the options. It’s important to remember that even though you may get defaulted to some kind of higher level, GUI-driven editor, you always have the raw text at your disposal. Having said that, however, my experience editing layouts taught me that, for beginners, it’s a lot easier to use the SDK’s layout editor. You’ll save yourself some headaches.

This post has gotten pretty long, so I’ll save my adventures with layouts and GUI components until next post.


MVVM and Dialogs

For those familiar with the MVVM (Model, View, View-Model) pattern in .NET development, one conundrum that you’ve probably pondered, or at least read about, is what to do about showing a dialog. For a bit of background, MVVM is centered around the concept of binding from the View (XAML markup) to the “ViewModel”, which essentially acts as a staging platform for UI binding.

The ViewModel exposes the “Model” in such a way that the XAML can passively bind to it. It does this by exposing bindable properties (using INotifyPropertyChanged) and bindable commands (by using ICommand). Properties represent data, and commands represent actions.
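As a minimal sketch of the property half of that (the class and property names here are mine, not from any particular framework):

```csharp
using System.ComponentModel;

// A minimal ViewModel sketch. Raising PropertyChanged is what lets the
// XAML binding update passively -- the ViewModel never touches the View.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _firstName;

    public event PropertyChangedEventHandler PropertyChanged;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            if (_firstName == value) return;
            _firstName = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("FirstName"));
        }
    }
}
```

Commands work the same way: the ViewModel exposes a property of type ICommand, and the XAML binds a button or menu item to it.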

The Problem

So, let’s say that you want a clean MVVM implementation which generally aspires to have no code-behind. Some people are more purist about this than others, but the concept has merit. Code-behind represents an active way of binding. That is, you have code that knows about the declarative markup and manipulates it. The problem here is that you have a dependency bugaboo. In a layered application, the layers should know about the one (or ones) underneath them and care nothing about the ones above them. This allows a different presentation layer to be plopped on a service tier or a different view to be skinned on a presentation tier. In the case of code-behind, what you have is a presentation tier that knows about its view and a view that knows about its presentation tier. You cannot simply skin another view on top because the presentation tier (read: code-behind) expects named elements in the declarative markup.

So, in a quest to eliminate all things code-behind, you adopt MVVM and do fine when it comes to data binding and basic commands. But inevitably you want to open a window, and the WPF framework is extremely clunky and WinForms-like when it comes to this. Your choices, out of the box, are to have a named element in the XAML and manipulate it to show a dialog, or else to have an event handler in the code-behind.

What Others Have Suggested

The following are suggestions I’ve seen to address this problem and the reasons that I didn’t particularly care for them, in regards to my own situation. I did a fair amount of research before rolling my own.

  1. Just use the code behind (second response to post (3), though I’ve seen the sentiment elsewhere). I don’t really like this because I think that, when it comes to design guidelines, slippery slopes are a problem. If you’re creating a design where you’d like to be able to arbitrarily swap out groups of XAML files above a presentation layer, making this exception is the gateway to your skinnable plans going out the window. Why make exceptions to your guidelines if it isn’t necessary?
  2. Mediator Pattern. Well, this particular implementation lost me at “singleton,” but I’m not really a fan of this pattern in general for opening windows. The idea behind all of these is to create a situation where the View and ViewModel communicate through a mediator so as to have no direct dependencies. That is, ViewModel doesn’t depend on View–it depends on Mediator, as does the View. Generally speaking, this sort of mediation is effective at allowing tests and promoting some degree of flexibility, but you still have the same dependency in concept, and then you have the mediator code to maintain and manage.
  3. Behaviors. This is a solution I haven’t looked at too much and might come around to liking. However, at first blush, I didn’t like the looks of that extra XAML and the overriding of the Behavior class. I’m generally leery of .NET events and try to avoid them as much as possible. (I may create a post on that in and of itself, but suffice it to say I think the syntax and the forced, weakly typed parameters leave a lot to be desired.)
  4. Some kind of toolkit. Blech. Nothing against the people that make these, and this one looks pretty good and somewhat in line with my eventual situation, but it seems like complete overkill to download, install, and maintain some third-party application just to open a window.
  5. IOC Container. I’ve seen some of these advertised, but the same principle applies here as the last one. It’s overkill for what I want to do.

I’ve seen plenty of other solutions and discussion as well, but none of them really appealed to me.

What I Did

I’ll just put the code and example usage in XAML here and talk about it:

That’s it. The things referenced here that you won’t have are worth mentioning but not vital to the implementation. SimpleCommand, from which OpenWindowCommand inherits, is a class that allows easier command declaration and use. It implements ICommand, taking a delegate or a boolean for CanExecute() and a delegate for execution (which we override in OpenWindowCommand, since we have a concrete implementation). SimpleCommand is not generic–the generic is on OpenWindowCommand to allow strongly typed window opening (the presumption being that you want to use this for windows that you’ve created and want to show modally).
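The original code listing hasn’t survived here, but based on the description above, the shape of it is roughly the following sketch. (SimpleCommand is reduced to its essentials, and all names are reconstructed from the prose, not the original source.)

```csharp
using System;
using System.Windows;
using System.Windows.Input;

// Sketch: SimpleCommand boiled down to the essentials described above.
public abstract class SimpleCommand : ICommand
{
    public event EventHandler CanExecuteChanged;

    public virtual bool CanExecute(object parameter) { return true; }

    public abstract void Execute(object parameter);
}

// The generic parameter is what gives strongly typed window opening:
// T is the window type to show, and the constructor takes the view
// model that will back it.
public class OpenWindowCommand<T> : SimpleCommand where T : Window, new()
{
    private readonly object _viewModel;

    public OpenWindowCommand(object viewModel)
    {
        _viewModel = viewModel;
    }

    public override void Execute(object parameter)
    {
        var window = new T { DataContext = _viewModel };
        window.ShowDialog();
    }
}
```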

The binding in the XAML to commands is to an object that represents a collection of commands. I actually have a CommandCollection object that I’ve created and exposed as a property on the ViewModel for that XAML, but you could use a Dictionary to achieve the same thing. Basically, “Commands[]” is just an indexed hash of commands for brevity in the view model. You could bind to a OpenWindowCommand property on your ViewModel.
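In sketch form, the ViewModel side of that binding can be as simple as a dictionary keyed by name (again, these names are mine; a plain Dictionary works just as well as a dedicated CommandCollection class):

```csharp
using System.Collections.Generic;
using System.Windows.Input;

public class MainViewModel
{
    // "Commands" is just an indexed hash of ICommand instances, so the
    // XAML can bind with e.g. Command="{Binding Commands[OpenSettings]}"
    public Dictionary<string, ICommand> Commands { get; private set; }

    public MainViewModel()
    {
        Commands = new Dictionary<string, ICommand>();
    }
}
```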

So, basically, when the view model from which you want to open a window is being set up, you create an instance of OpenWindowCommand(YourViewModelInstance). When you do this, you passively expose a way to open a window for binding. You’re saying to the view, “execute this command to open a window of type X with view model Y for backing.” Your view users are then free to bind to this command or not.

Why I Like This

First of all, this implementation creates no code-behind. No named windows/dialogs certainly, but also no event handlers. I also like that this doesn’t have the attendant complexity of most of the other solutions that I’ve seen. There’s no IMediator/Mediator, there’s no ServiceLocator, no CommandManager.Instance–none of it. Just one small class that implements one framework interface.

Naturally, I like this because this keeps the ViewModel/presentation layer View agnostic. This isn’t true out of the box here, but it is true in my implementation. I don’t declare commands anywhere in my ViewModels (they’re all wired in configurably by my home-rolled IOC implementation at startup). So the ViewModel layer only knows about the generic Window, not what windows I have in my view.

Room for Improvement

I think it would be better if the presentation tier, in theory, didn’t actually know about Window at all. I’m keeping my eyes peeled for a way to remove the generic parameter from the class and stick it on the Execute() method to be invoked from XAML. XAML appears to be very finicky when it comes to generics, but I have some hope that this may be doable in the latest release. I’ll re-post when I find that, because I’d love to have a situation in which the XAML itself could specify what kind of window to open as a command parameter. (I’m not in love with command parameters, but I’d make an exception for this flexibility.)

I’m also aware that this doesn’t address non-modal windows, and that there is currently no mechanism for obtaining the result from ShowDialog. The former I will address as I grow the application that I’m working on. I already have a solution for the latter in my code, and perhaps I’ll detail that more in a subsequent post.


Testable Code is Better Code

It seems pretty well accepted these days that unit testing is preferable to not unit testing. Logically, this implies that most people believe a tested code base is better than a non-tested code base. Further, by the nature of testing, a tested code base is likely to have fewer bugs than a non-tested code base. But I’d like to go a step further and make the case that, even given the same number of bugs and discounting the judgment as to whether it is better to test or not, unit-tested code is generally better code, in terms of design and maintainability, than non-unit-tested code.

More succinctly, I believe that unit testing one’s code results in not just fewer bugs but in better code. I’ll go through some of the reasons that I believe that, and none of those reasons are “you work out more bugs when you test.”

It Forces Reasoning About Code

Let’s say that I start writing a class and I get as far as the following:

Pretty simple, but there are a variety of things right off the bat that can be tested. Can you think of any? If you don’t write a lot of tests, maybe not. But what you’ve got here is already a testing gold mine, and you have the opportunity to get off to a good start. What does Id initialize to? What do you want it to initialize to? How about first and last name? Already, you have at least three tests that you can write, and, if you favor TDD and don’t want nulls or zeroes, you can start with failing tests and make them pass.
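For instance, supposing the class in question is something like a bare-bones Customer (the class name, the MSTest syntax, and the String.Empty expectation are my choices here, not from the original listing), the first tests might look like this:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

[TestClass]
public class CustomerTest
{
    [TestMethod]
    public void Id_Defaults_To_Zero()
    {
        Assert.AreEqual(0, new Customer().Id);
    }

    [TestMethod]
    public void FirstName_Defaults_To_Empty()
    {
        // This one fails until a constructor initializes the string --
        // auto-properties leave strings null, not empty.
        Assert.AreEqual(string.Empty, new Customer().FirstName);
    }
}
```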

It Teaches Developers about the Language

A related point is that writing unit tests tends to foster an understanding of how the language, libraries, and frameworks in play work. Consider our previous example. A developer may go through his programming life in C# not knowing what strings initialize to by default. This isn’t particularly far-fetched. Let’s say that he develops for a company with a coding standard of always initializing strings explicitly. Why would he ever know what strings are by default?

If, on the other hand, he’s in the practice of immediately writing unit tests on classes and then getting them to pass, he’ll see and be exposed to the failing condition. The unit test result will say something like “Expected: String.Empty, Was: null”.

And that just covers our trivial example. The unit tests provide a very natural forum for answering idle questions like “I wonder how x works…” or “I wonder what would happen if I did y…” If you’re working on a large application where build time is significant and getting to a point in the application where you can verify an experiment is non-trivial, most likely you leave these questions unanswered. Running the application is too much of a hassle, and the alternative, creating a dummy solution to test things out, may be no less of one. But sticking an extra assert in an existing unit test is easy and fast.

Unit Tests Keep Methods and Classes Succinct and Focused

This is an example of a method you would never see in an actively unit-tested code base. What does this method do, exactly? Who knows… probably not you, and most likely not the person or people that ‘wrote’ (cobbled together over time) it. (Full disclosure–I just made this up to illustrate a point.)

We’ve all seen methods like this. Cyclomatic complexity off the charts, calls to global state sprinkled in, mixed concerns, etc. You can look at it without knowing the most common path through the code, the expected path through the code, or even whether or not all paths are reachable. Unit testing is all about finding paths through a method and seeing what is true after (and sometimes during) execution. Good luck here figuring out what should be true. It all depends on what global state returns, and, even if you somehow mock the global state, you still have to reverse engineer what needs to be true to proceed through the method.

If this method had been unit-tested from its initial conception, I contend that it would never look anything like this. The reasoning is simple. Once a series of tests on the method become part of the test suite, adding conditionals and one-offs will break those tests. Therefore, the path of least resistance for the new requirements becomes creating a new method or class that can, itself, be tested. Without the tests, the path of least resistance is often handling unique cases inline–a shortsighted practice that leads to the kind of code above.

Unit Tests Encourage Inversion of Control

In a previous post, I talked about reasoning about code in two ways: (1) command and control and (2) building and assembling. Most people have an easier time with and will come to prefer command and control, left to their own devices. That is, in my main method, I want to create a couple of objects and I want those objects to create their dependencies and those dependencies to create their dependencies and so on. Like the CEO of the company, I want to give a few orders to a few important people and have all of the hierarchical stuff taken care of to conform to my vision. That leads to code like this:
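The code listing from the original post isn’t reproduced here, but the command-and-control style it describes looks something like this sketch:

```csharp
public class Engine
{
    public bool IsStarted { get; private set; }

    public void Start() { IsStarted = true; }
}

public class Car
{
    // Car builds its own dependency; callers get no say in which
    // engine they end up with, and no way to observe it.
    private readonly Engine _engine = new Engine();

    public void Start() { _engine.Start(); }
}
```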

So, in command and control style, I just tell my classes that I want a car, and my wish is their command. I don’t worry about what engine I want or what transmission I want or anything. Those details are taken care of for me. But I also don’t have a choice. I have to take what I’m given.

Since my linked post addresses the disadvantages of this approach, I won’t rehash it here. Let’s assume, for argument’s sake, that dependency inversion is preferable. Unit testing pushes you toward dependency inversion.

The reason for that is well illustrated by thinking about testing Car’s “start” method. How would we test this? Well, we wouldn’t. There’s only one line in the method, and it references something completely hidden from us. But if we changed Car to receive an engine through its constructor, we could easily create a friendly/mock engine and then make assertions about it after Car’s start method was called. For example, maybe Engine has an “IsStarted” property. Then, if we inject Engine into Car, we have the following simple test:
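A sketch of that test (assuming Car now takes its Engine through the constructor, and that Engine exposes the hypothetical IsStarted property; MSTest syntax):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Engine
{
    public bool IsStarted { get; private set; }

    public void Start() { IsStarted = true; }
}

public class Car
{
    private readonly Engine _engine;

    // The engine is injected rather than constructed internally,
    // which is what makes Car's behavior observable in a test.
    public Car(Engine engine) { _engine = engine; }

    public void Start() { _engine.Start(); }
}

[TestClass]
public class CarTest
{
    [TestMethod]
    public void Start_Starts_The_Injected_Engine()
    {
        var engine = new Engine();
        var car = new Car(engine);

        car.Start();

        Assert.IsTrue(engine.IsStarted);
    }
}
```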

After you spend some time unit testing regularly, you’ll find that you come to look at the new keyword with a suspicion you never had before. As I write code, if I find myself typing it, I think, “either this is a data transfer object, or else there had better be a darned good reason for having this in my class.”

Dependency-inverted code is better code. I can’t say it any plainer. When your code is inverted, it becomes easier to maintain and requirements changes can be absorbed. If Car takes an Engine instead of making one, I can later create an inheritor from Engine when my requirements change and just give that to Car. That’s a code change of one modified line and a new class. If Car creates its own Engine, I have to modify Car any time something about Engine needs to change.

Unit Testing Encourages Use of Interfaces

By their nature, interfaces tend to be easier to mock than concrete classes–even ones with virtual members. While I can’t speak to every mocking framework out there, it does seem to be a rule that the easiest way to mock things is with interfaces. So when you’re testing your code, you’ll tend to favor interfaces when all other things are equal, since that will make test writing easier.

I believe that this favoring of interfaces is helpful for the quality of code as well. Interfaces promote looser coupling than any other way of maintaining relationships between objects. Depending on an interface instead of a concrete implementation allows decoupling of the “what” from the “how” question when programming. Going back to the engine/car example, if I have a Car class that depends on an Engine, I am tied to the Engine class. It can be sub-classed, but nevertheless, I’m tied to it. If its start method cannot be overridden and throws exceptions, I have to handle them in my Car’s start method.

On the other hand, depending on an engine interface decouples me from the engine implementation. Instead of saying, “alright, specific engine, start yourself and I’ll handle anything that goes wrong,” I’m saying, “alright, nameless engine, start yourself however it is you do that.” I don’t necessarily need to handle exceptions unless the interface contract allows them. That is, if the interface contract stipulates that IEngine’s start method should not throw exceptions, those exceptions become Engine’s responsibility and not mine.
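In sketch form, the interface version looks like this (IEngine is my name for it, and FakeEngine shows the kind of test double a mocking framework would generate for you):

```csharp
public interface IEngine
{
    // Contract: implementations start themselves and don't throw.
    void Start();
}

public class Car
{
    private readonly IEngine _engine;

    public Car(IEngine engine) { _engine = engine; }

    // Car neither knows nor cares how the engine starts itself.
    public void Start() { _engine.Start(); }
}

// A hand-rolled test double -- mocking frameworks produce the
// equivalent of this automatically from the interface.
public class FakeEngine : IEngine
{
    public bool StartWasCalled { get; private set; }

    public void Start() { StartWasCalled = true; }
}
```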

Generally speaking, depending on interfaces is very helpful in that it allows you to make changes to existing code bases more easily. You’ll come to favor addressing requirements changes by creating new interface implementations rather than by going through and modifying existing implementations to handle different cases.

Regularly Unit Testing Makes You Proactive Instead of Reactive About the Unexpected

If you spend a few months unit testing religiously, you’ll find that a curious thing starts to happen. You’ll start to look at code differently. You’ll start to look at x.y() and know that, if there is no null check for x prior to that call, an exception will be thrown. You’ll start to look at if(x < 6) and know that you’re interested in seeing what happens when x is 5 and when x is 6. You’ll start to look at a method with parameters and reason about how you would handle a null parameter if it were passed in. These are all examples of what I call being “proactive,” for lack of a better term. The reactive programmer wouldn’t consider any of these things until they showed up as the cause of a defect.

This doesn’t happen magically. The thing about unit tests that is so powerful here is that the mistakes you make while writing the tests often lead you to these corner cases. Perhaps when writing a test, you pass in null as a parameter because you haven’t yet figured out what, exactly, you want to pass in. You forget about that test, move on to other things, and then later run all of your tests. When that one fails, you come back to it and realize that when null is passed into your method, you dereference it and generate an unhandled exception. As this goes on over the course of time, you start to recognize code that looks like it would be fragile in the face of accidental invocations or deliberate unit test exercise. The unit tests become more about documenting your requirements and guarding against regression, because you find that you start to be able to tell, by sight, when code is brittle.

This is true of unit tests because the feedback loop is so tight and frequent. If you’re writing some class without unit tests, you may never actually use your own class. You write the class according to what someone writing another class is going to pass you. You both check in your code, never looking at what happens when either of you deviates from the expected communication. Then, three months later, someone comes along and uses your class in another context and delivers his code. Another three months after that, a defect report lands on your plate, you fire up your debugger, and you figure out that you’re not handling null.

And, while some learning will occur in this context, it will be muted. You’re six months removed from writing that code. So while you learn in principle that null parameters should be handled, you aren’t really getting feedback. It’s the difference between a dieter having someone slap his hand when he reaches for a cookie and weighing him six months later to tell him he shouldn’t have eaten that cookie six months ago. One is likely to change habits, while the other is likely to result in a sigh and a “yeah, but what are ya gonna do?”
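As a concrete illustration of the if(x < 6) case mentioned above, the proactive habit is to pin down both sides of the boundary. (The class, the method, and the discount rule here are all hypothetical, invented purely for illustration; MSTest syntax.)

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Discounter
{
    // Hypothetical rule: orders of fewer than 6 items get no discount.
    public decimal GetDiscount(int itemCount)
    {
        if (itemCount < 6)
            return 0m;
        return 0.1m;
    }
}

[TestClass]
public class DiscounterTest
{
    // The interesting values sit on either side of the boundary.
    [TestMethod]
    public void Five_Items_Gets_No_Discount()
    {
        Assert.AreEqual(0m, new Discounter().GetDiscount(5));
    }

    [TestMethod]
    public void Six_Items_Gets_The_Discount()
    {
        Assert.AreEqual(0.1m, new Discounter().GetDiscount(6));
    }
}
```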

Conclusion

I can probably think of other examples as well, but this post is already fairly long. I sincerely believe that the simple act of writing tests and getting immediate feedback on one’s code makes a person a better programmer more quickly than ignoring the tests would. And, if you have a department where your developers are all writing tests, they’re becoming better designers/programmers and adopting good practices while doing productive work and raising the confidence level in the software that they’re producing.

I really cannot fathom any actual disadvantage to this practice. To me, the “obviously” factor of this is now on par with whether or not wearing a seat belt is a good idea.