DaedTech

Stories about Software


A Blog Grows Up: DaedTech Year in Review for 2012

Setting the Stage

Oh, how naive I was even 12 months ago (and I have no doubt 12 months from now I’ll be saying the same thing). But before I get to that, I’ll travel back in time a little further.

The year was 2010 and I had just purchased the domain name daedtech.com along with a hosting plan. I was finishing up my MS in computer science via night school and realized that (1) I would have a lot of free time I didn’t used to have and (2) I would miss having to write and think critically about programming and software in a way that went beyond my 9-5 work. So the DaedTech brand grew out of a decision to use that spare time to freelance and the DaedTech blog grew out of writings and ramblings I had lying around anyway and the desire to keep writing.

Why am I talking about 2010? Because 2010’s decision gave rise to the 2011 approach to blogging that I had, which was to write a post, publish it, sit back and say “alright interwebs, come drink from the fountain of my insight.” There were a lot of crickets in 2011, needless to say. My blog was really more of a personal journal that happened to be publicly displayed. 2012 was the year I figured out that there was more to blogging than simply generating content.

What’s Happened This Year

If blogging isn’t just about generating content, then what is it about?  I’d say it’s about generating content and then taking the responsibility for getting that content in front of people who are interested in seeing it.  It’s not enough simply to toss some text over the wall — you have to make it visually appealing  (or at least approachable), engaging, accessible, and interactive.  The most successful blog posts are ones that start, rather than end, conversations because they resonate with the community and encourage discussion and further research.

The following is a list of changes I made to the blog and to my approach to blogging this past year, and the results in terms of readership growth and traffic have been pronounced.

  • Installed Google Analytics in order to have granular, empirical data about visitors to different parts of the site.
  • Added interactive social media buttons so that people can like/plus/tweet/etc. the posts they enjoy.
  • Made it easier to subscribe to posts via RSS.
  • Overhauled the category and tag scheme.
  • Started announcing new posts via social media.
  • Adopted the practice of writing posts ahead of time and publishing them with a regular cadence (Monday, Wednesday, Friday) instead of popping them off whenever I felt like it.
  • Routinely participated in discussions/comments on others’ blogs instead of just reading them.
  • Introduced Disqus to manage my comments.
  • Enlisted the help of a copy editor.
  • Improved the speed/performance of the site.
  • Switched from FeedBurner to FeedBlitz for RSS subscriptions.
  • Developed and/or fleshed out recurring post series (design patterns, practical math, home automation, abstractions).
  • Adopted the practice of routinely including images, code snippets or both to break up monotonous text.

These actions (and probably to some degree just being around longer) have yielded the following results:

  • RSS subscribers have more than tripled.
  • Average daily visits have increased by about 300%.
  • PageRank has increased from 1 to 3.
  • Trackbacks and mentions from other blogs are now routine, where previously they were nonexistent.
  • The average number of comments per post is up a great deal.
  • I now receive posting requests.
  • DaedTech posts have been ‘syndicated’ on Portuguese (Brazil) and French language sites.
  • Referral traffic now frequently comes from sites like Hacker News and reddit.

As far as being a programmer goes, I’ve increased my experience slightly in the last year. After all, having spent the last 14 years writing code isn’t all that much different than having spent the last 13 years writing code. But having been a blogger for 2 years is much different than having been a blogger for 1 — at the risk of overconfidence, I think I’m  starting to get the hang of this thing to some extent.

Lessons Learned

I’ve contemplated for a while doing a post along the lines of “So You Want to be a Dev Blogger,” but have held off, largely because of a feeling along the lines of the one Scott Hanselman describes in his post about being a phony. I may still do a post like that, but I think this is largely that post, framed in terms of what I’ve learned and how it’s humbling to look back at my own naivete rather than “prepare to start gathering the pearls of wisdom that I’m going to drop on you.”

The lessons that I’ve learned and hope to keep applying all come back to the idea that there’s so much more to blogging than simply knowing about programming or being able to write about that knowledge. There are small lessons from a whole smattering of disciplines to be woven in: UX, marketing, SEO, psychology, etc. You don’t need to be an expert in any of these things, but you need at least to be nominally competent. You also need to do a lot of looking around at successful people to understand what they do.  It was by doing this and by talking to other bloggers that I figured out the wisdom of various ideas like all of the social media buttons and the Disqus commenting system.  None of these things is rocket science and they’re certainly within any aspiring blogger’s realm of capability, but a lot of them have that kind of “man, I never would have thought about that” air to them.

Fun Facts

Below are my most popular posts of 2012, and you can see that there is nearly a dead heat between posts that were popular and read a lot when written and posts that draw a lot of Google hits:

  1. Casting is a Polymorphism Fail
  2. How to Keep Your Best Programmers
  3. WPF and Notifying Property Changed
  4. How Developers Stop Learning: Rise of the Expert Beginner
  5. Static and New are Like Inline
  6. Adding a Google Map to Android Application

Here are the countries in which DaedTech is most popular:

  1. USA
  2. United Kingdom
  3. India
  4. Germany
  5. Canada
  6. Australia
  7. France
  8. Belgium
  9. Netherlands
  10. Poland

Here are the sources of the most referrals:

  1. Twitter
  2. reddit
  3. Facebook
  4. Hacker News
  5. Disqus
  6. LinkedIn
  7. Instapaper
  8. Stack Overflow
  9. Google+
  10. Stack Exchange

Last and Not Least

It’s fun to reflect back on the lessons that I’ve learned and the fun that I’ve had blogging. It’s always interesting to look at statistics about, well, anything if you’re a stat-head and analytics nut like me. But the most important thing, and arguably the only thing, that makes a blog is the readership. And so I’d like to take this opportunity while being reflective to sincerely thank you for reading, tweeting, commenting, forwarding, or really even just glancing at the blog every now and then. With all of the changes that I’ve listed above, I’ve set the stage to make readership easier, but it is really you and your readership that are the difference between DaedTech as it exists now and the site as it existed in early 2011, when I was speaking only to an empty room and comment spam bots. So once again, thank you, and may you have a Happy New Year and a great 2013!


Just Starting with JustMock

A New Mocking Tool

In life, I feel that it’s easiest to understand something if you know multiple ways of accomplishing/using/doing/etc it. Today I decided to apply that reasoning to automatic mocking tools for .NET. I’m already quite familiar with Moq and have posted about it a number of times in the past. When I program in Java, I use Mockito, so while I do have experience with multiple mocking tools, I only have experience with one in the .NET world. To remedy this state of affairs and gain some perspective, I’ve started playing around with JustMock by Telerik.

There are two versions of JustMock: “Lite” and “Elevated.” JustMock Lite is equivalent to Moq in its functionality: able to mock things for which there are natural mocking seams, such as interfaces and inheritable classes. The “Elevated” version provides the behavior for which I had historically used Moles — it is an isolation framework. I’ve been meaning to take the latter for a test drive at some point, since the R&D tool Moles has given way to Microsoft “Fakes” as of VS 2012. Fakes ships with Microsoft libraries (yay!) but is only available with VS Ultimate (boo!).

My First Mock

Installing JustMock is a snap. Search for it in NuGet, install it to your test project, and you’re done. Once you have it in place, the API is nicely discoverable. For my first mocking task (doing TDD on a WPF front-end for my Autotask Query Explorer), I wanted to verify that a view model was invoking a service method for logging in. The first thing I do is create a mock instance of the service with Mock.Create<T>(). Intuitive enough. Next, I want to tell the mock that I’m expecting a Login(string, string) method to be called on it. This is accomplished using Mock.Arrange().MustBeCalled(). Finally, I perform the actual act on my class under test and then make an assertion on the mock, using Mock.Assert().
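Put together, a test along those lines looks something like this (the ILoginService interface, the LoginViewModel class under test, and the credential literals are illustrative stand-ins; the Mock.* calls are JustMock's):

[TestMethod]
public void Invokes_Service_Login_When_Login_Is_Requested()
{
    // Arrange: create the mock and stipulate that Login must be called on it.
    var service = Mock.Create<ILoginService>();
    Mock.Arrange(() => service.Login("asdf", "fdsa")).MustBeCalled();
    var viewModel = new LoginViewModel(service);

    // Act: exercise the class under test.
    viewModel.Login("asdf", "fdsa");

    // Assert: verify that everything stipulated in Arrange() actually happened.
    Mock.Assert(service);
}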

A couple of things jump out here, particularly if you’re coming from a background using Moq, as I am. First, the semantics of the JustMock methods more tightly follow the “Arrange, Act, Assert” convention as evidenced by the necessity of invoking Arrange() and Assert() methods from the JustMock assembly. The second thing that jumps out is the relative simplicity of assertion versus arrangement. In my experience with other mocking frameworks, there is a tendency to do comparably minimal setup and have a comparably involved assertion. Conceptually, the narrative would be something like “make the mock service not bomb out when Login() is called and later we’ll assert on the mock that some method called login was called with username x and password y and it was called one time.” With this framework, we’re doing all that description up front and then in the Assert() we’re just saying “make sure the things we stipulated before actually happened.”

One thing that impressed me a lot was that I was able to write my first JustMock test without reading a tutorial. As regular readers know, I consider this to be a strong indicator of well-crafted software. One thing I wasn’t as thrilled about was how many overloads there were for each method that I did find. Regular readers also know I’m not a huge fan of that. But at least they aren’t creational overloads, and I suppose you have to pay the piper somewhere: either lots of methods/classes in IntelliSense or else lots of overloads. This bit with the overloads was not a problem in my eyes, however, as I haven’t explored or been annoyed by them at all — I just saw “+10 overloads” in IntelliSense and thought, “whoa, yikes!”

Another cool thing that I noticed right off the bat was how helpful and descriptive the feedback was when the conditions set forth in Arrange() didn’t occur:

[Screenshot: JustMock's failure feedback when an arranged call never happens]

It may seem like a no-brainer, but getting an exception that’s helpful both in its type and message is refreshing. That’s the kind of exception I look at and immediately exclaim “oh, I see what the problem is!”

Matchers

If you read my code critically with a clean code eye in the previous section, you should have a bone to pick with me. In my defense, this snippet was taken post red-green and pre-refactor. Can you guess what it is? How about the redundant string literals in the test — “asdf” and “fdsa” each appear twice, as the username and password, respectively. That’s icky. But before I pull local variables to use there, I want to stop and consider something. For the purpose of this test, given its title, I don’t actually care what parameters the Login() method receives — I only care that it’s called. As such, I need a way to tell the mocking framework that I expect this method to be called with some parameters — any parameters. In the world of mocking, this notion of a placeholder is often referred to as a “Matcher” (I believe this is the Mockito term as well).

In JustMock, this is again refreshingly easy. I want to be able to specify exact matches if I so choose, but also to be able to say “match any string” or “match strings that are not null or empty” or “match strings with this custom pattern.” Take a look at the semantics to make this happen:
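Here is a sketch of that arrangement, using the same illustrative service as before:

Mock.Arrange(() => service.Login(
        Arg.IsAny<string>(),
        Arg.Matches<string>(password => !string.IsNullOrEmpty(password))))
    .MustBeCalled();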

For illustration purposes I’ve inserted line breaks in a way that isn’t normally my style. Look at the Arg.IsAny and Arg.Matches line. What this arrangement says is “The mock’s login method must be called with any string for the username parameter and any string that isn’t null or empty for the password parameter.” Hats off to you, JustMock — that’s pretty darn readable, discoverable and intuitive as a reader of this code.

Loose or Strict?

In mocking there is a notion of “loose” versus “strict” mocking. The former is a scenario where some sort of default behavior is supplied by the mocking framework for any methods or properties that may be invoked. So in our example, it would be perfectly valid to call the service’s Login() method whether or not the mock had been setup in any way regarding this method. With strict mocking, the same cannot be said — invoking a method that had not been setup/arranged would result in a runtime exception. JustMock defaults to loose mocking, which is my preference.
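As a rough sketch of the difference (the service interface is still an illustrative stand-in; JustMock's Behavior enum is how you opt into strictness):

// Loose (the default): un-arranged calls are tolerated and return default values.
var looseService = Mock.Create<ILoginService>();
looseService.Login("asdf", "fdsa");   // fine, even though nothing was arranged

// Strict: anything not explicitly arranged triggers an exception at runtime.
var strictService = Mock.Create<ILoginService>(Behavior.Strict);
strictService.Login("asdf", "fdsa");  // throws, because Login was never arranged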

Static Methods with Mock as Parameter

Another thing I really like about JustMock is that you arrange and query mock objects by passing them to static methods, rather than invoking instance methods on them. As someone who tends to be extremely leery of static methods, it feels strange to say this, but the thing that I like about it is how it removes the need to context switch as to whether you’re dealing with the mock object itself or the “stub wrapper”. In Moq, for instance, mocking occurs by wrapping the actual object that is the mocking target inside of another class instance, with that outer class handling the setup concerns and information recording for verification. While this makes conceptual sense, it turns out to be rather cumbersome to switch contexts for setting up/verifying and actual usage. Do you keep an instance of the mock around locally or the wrapper stub? JustMock addresses this by having you keep an instance only of the mock object and then letting you invoke different static methods for different contexts.
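To make the contrast concrete, here is a rough side-by-side using the same illustrative types (Moq's wrapper style on top, JustMock's static-method style below):

// Moq: two references to keep straight, the Mock<T> wrapper for arranging and
// verifying, and mock.Object for handing to the class under test.
var moqService = new Moq.Mock<ILoginService>();
moqService.Setup(s => s.Login("asdf", "fdsa"));
var viewModelWithMoq = new LoginViewModel(moqService.Object);
moqService.Verify(s => s.Login("asdf", "fdsa"));

// JustMock: one reference. The mock is the object, and the static Mock methods
// supply the arrange/assert context.
var justMockService = Mock.Create<ILoginService>();
Mock.Arrange(() => justMockService.Login("asdf", "fdsa")).MustBeCalled();
var viewModelWithJustMock = new LoginViewModel(justMockService);
Mock.Assert(justMockService);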

Conclusion

I’m definitely intrigued enough to keep using this. The tool seems powerful and usage is quite straightforward, intuitive and discoverable. Look for more posts about JustMock in the future, including perhaps some comparisons and a full fledged endorsement, if applicable (i.e. I continue to enjoy it), when I’ve used it for more than a few hours.


Merry Christmas!

For all DaedTech readers that celebrate Christmas, here’s hoping yours is Merry. I will be traveling for most of the next week, but will have internet access and time off, so I will most likely have another post or two this week in spite of the holiday.


Linq Order By When You Have Property Name

Without reflection, we go blindly on our way, creating more unintended consequences, and failing to achieve anything useful.
–Margaret J. Wheatley

Ordering By a Column Name

Quick tip today in case anyone runs into this.  Frequently you have some strongly typed object and you want to order by some property on that object.  No problem — Linq’s IEnumerable.OrderBy() to the rescue.  But what about when you don’t have a strongly typed object at runtime and you only have the property’s name?

In a little project I’m working on at the moment, this came up. In this project, I’m parsing SQL queries (a subset of SQL, anyway) and translating these queries into web service requests for Autotask. All of the Autotask web service’s entities are children of a base class simply called Entity. Entities have ids in common, but little else. So the situation is that I’m going to get a query of the form “SELECT * FROM Account ORDER BY AccountName” (i.e. just a string) and I’m going to have to pull out of the API a series of strongly typed objects and figure out how to sort them by “AccountName” at runtime. The tricky part is that I don’t know at compile time what object type I’ll be getting back, much less which property on that type I’ll be using to sort. So something like entities.OrderBy(e => e.AccountName) is obviously right out.

So what we need is a way of mapping the string to a property and then matching that property to a strongly typed value on the object that can be used for ordering.
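A minimal sketch of the kind of method I mean (the name and the exact precondition handling here are illustrative):

private static IEnumerable<T> OrderByPropertyName<T>(IEnumerable<T> entities, string propertyName)
{
    // Preconditions: a property name was actually supplied and there is at least
    // one entity, since we need a live instance to interrogate for its real type.
    if (string.IsNullOrEmpty(propertyName) || !entities.Any())
        return entities;

    // Use the runtime type of an actual element rather than typeof(T), because T
    // will be compiled as the base Entity type while the items are derived types.
    var propertyInfo = entities.First().GetType().GetProperty(propertyName);
    return entities.OrderBy(entity => propertyInfo.GetValue(entity, null));
}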

This method first checks a couple of preconditions: that an actual value was supplied for the property name (obviously) and that there are entities to sort. This last one might seem a little strange, but it makes sense when you think about it. The reason it makes sense, if you’ll recall my post on type variance, is that the type of the enumerable is generic and strictly a compile-time designation. As such, this method is going to be compiled as IEnumerable<Entity> rather than IEnumerable<Account> or an enumerable of any other derived type.

Now, if you did this:
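Something like the following, using the compile-time type parameter instead of a live instance (again, just a sketch of the tempting-but-wrong version):

// Looks reasonable, but typeof(T) is the base Entity type at compile time, so the
// lookup fails for properties defined only on the derived class...
var propertyInfo = typeof(T).GetProperty(propertyName);
// ...propertyInfo is null here, and this next line throws a NullReferenceException.
return entities.OrderBy(entity => propertyInfo.GetValue(entity, null));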

…you would have a problem. Since T is going to be compiled as Entity, you’re going to be looking for properties of the derived class using the type information associated with the base class, which will fail, causing the returned propertyInfo to be null and then a null reference exception on the next line. Since we have no way of knowing at compile time what sort of entity we’re going to have, we have to check at run time. And, in order to do that, we need an actual instance of an entity. If we just have an empty enumerable, this is strictly unknowable.

My solution here is a private static method because I have no use for it (yet) in any other scope or class. But, if you were so inclined you could create an extension method pretty easily:
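Something along these lines (hypothetical class and method names, same logic as above):

public static class EnumerableExtensions
{
    public static IEnumerable<T> OrderByPropertyName<T>(this IEnumerable<T> source, string propertyName)
    {
        if (string.IsNullOrEmpty(propertyName) || !source.Any())
            return source;

        var propertyInfo = source.First().GetType().GetProperty(propertyName);
        return source.OrderBy(item => propertyInfo.GetValue(item, null));
    }
}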

If you were going to do this, I’d suggest making this method a tad more robust, however, as you might get a variety of interesting edge cases thrown at it.


A Metaphor to Help You Suck at Writing Software

“No plan survives contact with the enemy” –Helmuth von Moltke the Elder

Bureaucracy 101

Let’s set the scene for a moment. You’re a workaday developer in a workman kind of shop. A “waterfall” shop. (For back story on why I put quotes around waterfall, see this post). There is a great show of force when it comes to building software. Grand plans are constructed. Requirements gathering happens in a sliding sort of way where there is one document for vague requirements, another document for more specific requirements, a third document for even more specific requirements than that, and repeat for a few more documents. Then, there is the spec, the functional spec, the design spec, and the design document. In fact, there are probably several design documents.

There aren’t just the typical “waterfall” phases of requirements->design->code->test->toss over the wall, but sub-phases and, when the organism grows large enough, sub-sub-phases. There are project managers and business managers and many other kinds of managers. There are things called change requests and those have their own phases and documents. Requirements gathering is different from requirements elaboration. Design sub-phases include high-level, mid-level and low-level. If you carefully follow the process, most likely published somewhere as a mural-sized state machine or possibly a Gantt chart unsurpassed in its perfect hierarchical beauty, you will achieve the BUFD nirvana of having the actual writing of the code require absolutely no brain power. Everything will be so perfectly planned that a trained ape could write your software. That trained ape is you, workaday developer. Brilliant business stakeholder minds are hard at work perfecting the process of planning software in such fine grained detail that you need not trouble yourself with much thinking or problem solving.

Dude, wait a minute. Wat?!? That doesn’t sound desirable at all! You wake up in the middle of the night one night, sit bolt upright and are suddenly fundamentally unsure that this is really the best approach to building a thing with software. Concerned, you approach some kind of senior business project program manager and ask him about the meaning of developer life in your organization. He nods knowingly, understandingly and puts one arm on your shoulders, extending the other out in broad, professorial arc to help you share his vision. “You see my friend,” he says, “writing software is like building a skyscraper…” And the ‘wisdom’ starts to flow. Well, something starts to flow, at any rate.

Let’s Build a Software Skyscraper

Like a skyscraper, you can’t just start building software without planning and a lot of upfront legwork. A skyscraper can’t simply be assembled by building floors, rooms, walls, etc. independently and then slapping them all together, perhaps interchangeably. Everything is necessarily interdependent and tightly coupled. Just like your software. In the skyscraper, you simply can’t build the 20th floor before the 19th floor is built and you certainly can’t interchange those ‘parts,’ much like in your software you can’t have a GUI without a database and you can’t just go swapping persistence models once you have a GUI. In both cases every decision at every point ripples throughout the project and necessarily affects every future decision. Rooms and floors are set in stone in both location and order of construction just as your classes and modules in a software project have to be built in a certain order and can never be swapped out from then on.

But the similarities don’t end with the fact that both endeavors involve an inseparable web of complete interdependence. It extends to holistic approaches and cost as well. Since software, like a skyscraper, is so lumbering in nature and so permanent once built, the concept of prototyping it is prima facie absurd. Furthermore, in software and skyscrapers, you can’t have a stripped-down but fully functional version to start with — it’s all or nothing, baby. Because of this it’s important to make all decisions up-front and immediately even when you might later have more information that would lead to a better-informed decision. There’s no deferral of decisions that can be made — you need to lock your architecture up right from the get-go and live with the consequences forever, whatever and however horrible they might turn out to be.

And once your software is constructed, your customers better be happy with it because boy-oh-boy is it expensive, cumbersome and painful to change anything about it. Like replacing the fortieth floor on a skyscraper, refactoring your software requires months of business stoppage and a Herculean effort to get the new stuff in place. It soars over the budget set forth and slams through and past the target date, showering passerby with falling debris all the while.

To put it succinctly in list form:

  1. There is only one sequence in which to build software and very little opportunity for deviation and working in parallel.
  2. Software is not supposed to be modular or swappable — a place for everything and everything in its place.
  3. The concept of prototyping is nonsensical — you get one shot and one shot only.
  4. It is impossible to defer important decisions until more information is available. Pick things like database or markup language early and live with them forever.
  5. Changing anything after construction is exorbitantly expensive and quite possibly dangerous.

Or, to condense even further, this metaphor helps you build software that is brittle and utterly cross-coupled beyond repair. This metaphor is the perfect guide for anyone who wants to write crappy software.

Let’s Build an Agile Building

Once you take the building construction metaphor to its logical conclusion, it seems fairly silly (as a lot of metaphors will if you lean too heavily on them in their weak spots). What’s the source of the disconnect here? To clarify a bit, let’s work backward into the building metaphor starting with good software instead of using it to build bad software.

A year or so ago, I went to a talk given by “Uncle” Bob Martin on software professionalism. If I could find a link to the text of what he said, I would offer it (and please comment if you have one), but lacking that, I’ll paraphrase. Bob invited the audience to consider a proposition where they were contracting to have a house built and maintained with a particular contractor. The way this worked was you would give the contractor $100 and he would build you anything you wanted in a day. So, you could say “I want a two bedroom ranch house with a deck and a hot-tub and 1.5 bathrooms,” plop down your $100 and come back tomorrow to find the house built to your specification. If it turned out that you didn’t like something about it or your needs changed, same deal applied. Want another wing? Want to turn the half bath into a full bath? Want a patio instead of a deck? Make your checklist, call the contractor, give him $100 and the next day your wish would be your house.

From there, Bob invited audience members to weigh two different approaches to house-planning: try-it-and-see versus waterfall’s “big design up front.” In this world, would you hire expert architects to form plans and carpenters to flesh them out? Would you spend weeks or months in a “planning phase”? Or would you plop down $100 and say, “well, screw it — I’ll just try it and change it if I don’t like it?” This was a rather dramatic moment in the talk: just before Bob brought it home, the listener realized that, given a choice between agile “try it and see” and waterfall “design everything up front,” nobody sane would choose the latter. The “waterfall” approach to houses (and skyscrapers) is used because a better approach isn’t possible and not because it’s a good approach when there are alternatives.

Wither the Software-Construction Canard?

Given the push toward Agile software development in recent years and the questionable parallels of the metaphor in the first place, why does it persist? There is no shortage of people who think this metaphor is absurd, or at least misguided:

  1. Jason Haley, “It’s not like Building a House”
  2. Terence Parr, “Why writing software is not like engineering”
  3. James Shore, “That Damned Construction Analogy”
  4. A whole series of people on Stack Overflow
  5. Nathaniel T. Schutta, Why Software Development IS Like Building a House (Don’t let the title fool you – give this one a detailed read)
  6. Thomas Guest, “Why Software Development isn’t Like Construction”

If you google things like “software construction analogy” you will find literally dozens of posts like these.

So why the persistence? Well, if you read the last article, by Thomas Guest, you’ll notice a reference to Steve McConnell’s iconic book “Code Complete.” This book has an early chapter that explores a variety of metaphors for software development and offers this one up. In my first DaedTech post, I endorsed the metaphor but thought we could do better. I stand by that endorsement not because it’s a good metaphor for how software should be developed but because it’s a good metaphor for how it is developed. As in our hypothetical shop from the first section of the post, many places do use this approach to write (often bad) software. But the presence of the metaphor in McConnell’s book and for years and years before that highlights one of the main reasons for persistence: inertia. It’s been around a long time.

But I think there’s another, more subtle reason it sticks around. Hard as it was to find pro posts about the software-construction pairing, the ones I did find share an interesting trait. Take a look at this post, for instance. As “PikeWake” gets down to explaining the metaphor, the first thing that he does is talk about project managers and architects (well, the first thing is the software itself, but right after that come the movers and shakers). Somewhere below that the low-skill grunts who actually write the software get a nod as well. Think about that for a moment. In this analogy, the most important people to the software process are the ones with corner offices, direct reports and spreadsheets, and the people who actually write the software are fungible drones paid to perform repetitive action, rather than work. Is it any wonder that ‘supervisors’ and other vestiges of the pre-Agile, command and control era love this metaphor? It might not make for good software, but it sure makes for good justification of roles. It’s comfortable in a world where companies like github are canning the traditional, hierarchical model, valuing the producers over the supervisors, and succeeding.

Perhaps that’s a bit cynical, but I definitely think there’s more than a little truth there. If you stripped out all of the word documents, Gantt charts, status meetings and other typical corporate overhead and embraced a world where developers could self-organize, prioritize and adapt, what would people with a lot of tenure but not a lot of desire or skill at programming do? If there were no actual need for supervision, what would happen? These can be unsettling, game changing questions, so it’s easier to cast developers as low-skill drones that would be adrift without clever supervisors planning everything for them than to dispense with the illusion and realize that developers are highly skilled, generally very intelligent knowledge workers quite capable of optimizing processes in which they participate.

In the end, it’s simple. If you want comfort food for the mid-level management set and mediocrity, then invite someone in to sweep his arm professorially and spin a feel-good tale about how building software is like building houses and managing people is like a father nurturing his children. If you want to be effective, leave the construction metaphor in the 1980s where it belongs.


How to Disable Controls During Postback in ASP

The other day, I was working on a page in a WebForms app where a postback, triggered by a button click, kicked off a bit of processing that would run for 10-20 seconds. While this is going on, it makes sense to disable the clicked button and other controls, for that matter. Since the processing occurs on the server, the only way to achieve this effect is by disabling the buttons and other controls on the client side, using JavaScript. The following is the series of steps leading up to getting this right. If you just want to see what worked, you can skip to the end.

The first thing I did was find a bit of jQuery that would disable things on the page. I put this into the user control in which I was working:

<script>
    function disableOnPostback() {
        $(":input").attr("disabled", true);
    }
</script>

From there, I found that the way to distinguish between a server-side click handler (“OnClick” property) and a client-side one was to use OnClientClick, like so:
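Something like the following (the button ID and text are illustrative; SearchButton_Click and disableOnPostback are the handlers discussed here):

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback()" />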

Here we have some standard button boilerplate, the server-side event handler “SearchButton_Click” and the new OnClientClick that triggers JavaScript invocation and our jQuery implementation. I was pretty pumped about this and ready to have my search button disable all client-side controls until the server returned a response. I fired it up, clicked the search button, and absolutely nothing happened. Not only was nothing disabled, but there was no postback. After some googling around, someone recommended adding “return true;” after the disableOnPostback() call. Apparently any intervening client-side handler not returning true is assumed to return false, which stops the postback. So here is the new attempt:
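In sketch form (same illustrative button as above):

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback(); return true;" />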

This had no discernible effect, and after some searching, I found that the meat of the issue here is that disabling the button apparently also disables its ability to trigger a postback. We need to tell the button to fire the postback regardless, which apparently can be accomplished by setting the UseSubmitBehavior="false" property on the button.

I tried this and, finally, something different! Only problem was that it was a partial success. The disabling of controls finally worked, but the postback never happened. On a hunch, I took out the return true and arrived at my final answer:
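In sketch form, that is the jQuery call in OnClientClick with no return statement, plus UseSubmitBehavior set to false:

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback()"
    UseSubmitBehavior="false" />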

This, combined with the jQuery at the top of the page, did the trick. So if you have a button that triggers a postback with a lengthy operation and you want to disable all controls until the operation completes and returns a response, this approach should work for you. I am not yet an expert in under-the-covers WebForms particulars, so the theory is still a little hazy on my end, but hopefully this helps anyone in a similar position to me. Also, if you are an expert in this stuff, please feel free to weigh in on the theory at play here.

One final thing that I’ll mention is that I did find something called Postback Ritalin during my searches. This seems to offer a control that takes care of this for you, though I didn’t really want to introduce any third-party dependencies, so I didn’t try anything with it myself.


Discoverability Instead of Training and Manuals

Documentation and Training as Failures

Some time back, I was listening to someone explain the finer points of various code that he had written when he lamented the lack of documentation and training available for prospective users of this code. I thought to myself rather blithely and flippantly, “why – just write the code so that documenting it and training people to use it aren’t necessary.” I attributed this to being in a peevish mood or something, but reflecting on this later, I thought earnestly, “snarky Erik is actually right about this.”

Think about the way software development generally goes, especially if you’re developing code to serve as a framework or utility for teammates and other developers. You start off with clean code and good intentions and you hammer away at making some functional software. Often things go well, but here and there you hit snags and you do a bit of duct-taping and work-around-ing (working around?), vowing to return later to straighten things out. Sometimes you do just that, but other times you realize that time and budget are finite resources for the effort and you reconcile yourself to shipping something that’s not quite perfect.

But you don’t just ship something imperfect, because you’re diligent and responsible. What do you do instead? You go into those nasty areas of the code and you write inline comments, possibly containing apologies. You make sure that the XML/Java doc comments above the methods/classes are quite thorough as well and, for good measure, you probably even write up some kind of manual or Word document, perhaps with a Visio diagram. Where the code is clear, you let it speak for itself and where it’s less than clear, you document.

We could put this another, perhaps more blunt way: “we generally try to write clean code and we document when we fail to do so.” We might reasonably think of documentation as something that we do when our work and intentions fail to speak for themselves. This seems a bit iconoclastic in the face of conventional methods of communicating and processing information. I grew up as a programmer reading through the “man pages” to understand all manner of *Nix command line utilities, system calls, etc. I learned the nitty-gritty of how concepts like semaphores, IPC, and threading worked in this fashion, so it seems a bit blasphemous, even to me, to accuse the authors of these APIs of failing to be clear or, really, failing in any way.

And yet, here we are. To be clear, I don’t think that writing code for which clients need to read manuals is a failure of design or of correctness or of a project or utility on the whole. But I do think it’s a failure to write self documenting code. And I think that for decades, we’ve had a culture in which this wasn’t viewed as a failure of any kind. What are we chided to do when we get a new appliance or gadget? Well, read the manual. There’s even an iconic acronym of exasperation for people who don’t do so prior to asking questions: RTFM. In the interest of keeping the blog’s PG rating, I won’t say here what it stands for. In this culture, the engineering particulars and internal mechanisms of things have been viewed as unknowable mysteries, and the means by which communication is offered and understanding reached has been large, often formidable manuals with dozens of pages of appendices, notes, and works cited. But is that really the best way to do things in all cases? Aren’t there times when it might be a lot better to make something that screams how it should be used instead of wasting precious time?

[Image: life jacket instructions, courtesy of “AlMare” via Wikimedia Commons]

A Changing Culture

An interesting thing has happened in recent years, spurred on largely by Apple, initially, and now I’d say by the mobile computing movement in general, since Google and Microsoft have followed suit in their designs. Apple made it cool to toss the manual and assume that it is the responsibility of the maker of the thing, rather than the user, to ensure that understanding is reached. In the development world, champions of clean, self-documenting code have existed prior to whatever Apple might have been doing in the popular market, but the concept certainly got a large, public boost from Apple and its marketing cachet and those who subsequently got on board with the movement.

Look at the current state of applications being written. This fall, I had the privilege of attending That Conference, Dotnet Rocks Edition and seeing Lwin Maung speak about mobile concepts and the then soon-to-be-released Windows 8 and its app ecosystem. One of the themes of the talk was how apps informed you of how to use them in intuitive ways. You didn’t read a manual to know that the news app had additional content — it told you by leaving the next story link halfway off the side of the screen, practically begging you to paw at it and scroll to the side. The idea of windows with lots of headers at the top from which you can drill hierarchically into the application is gone, replaced instead by visual cues that are borderline impossible to screw up.

As this becomes popular in terms of user experience, I submit that it should also become popular with software development. If you find yourself writing some method with the signature DoStuff(bool, bool, int, bool, string, bool) you’ll probably (hopefully) think “man, I better document this because no one will ever figure it out.” But I ask you to take it a step further. If you have the time to document it, then why not spend that time fixing it instead of explaining yourself through documentation? Rename DoStuff to describe exactly what stuff it does, make the parameters significantly fewer, get rid of the Booleans, and make it something that’s pretty much impossible to misunderstand, like string.RemoveCharactersFromTheEnd(6). I bet you don’t need multiple appendices or even a manual to figure out what that does.
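For instance, here is a sketch of what that self-describing replacement might look like as an extension method (the implementation is illustrative):

public static class StringExtensions
{
    // The name and the single parameter say everything that a
    // DoStuff(bool, bool, int, bool, string, bool) grab bag would have needed a manual to explain.
    public static string RemoveCharactersFromTheEnd(this string text, int numberOfCharacters)
    {
        return text.Substring(0, Math.Max(0, text.Length - numberOfCharacters));
    }
}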

Please note that I’m not suggesting that we toss out all previous ways of doing things or stop documenting altogether. Documentation certainly has a time and a place and not all products or APIs are ones that lend themselves to being completely discoverable. What I am suggesting is that we change our culture as developers from “RTFM!!!!” to “could I have made that clearer?” We’ve come a long way as the discipline of programming matures and we have more and more stakeholders who are less and less technical depending on us for more and more things. Communication is increasingly important and communication on clear, broadly understandable terms at that. You’re no longer writing methods being consumed by a handful of fellow geeks that are using your code to put together a BBS about how to program in COBOL. You’re no longer writing code where each byte of memory and disk space is precious, so it’s far better to be verbose in voluminous manuals than in method or variable names. You’re (for the most part) no longer writing code where optimizing a few cycles trumps readability. You’re writing code in a time when terms like “agile” and “maintainable” reign supreme, there’s no real cost to self-describing code, and the broader populace in general expects its technology to be discoverable. It’s a great time to be a developer — embrace it.


Scoping And Accessibility Quirks in C#

As I mentioned recently, I’ve taken to using an inheritance scheme in my approach to unit testing. Because of the mechanics of this scheme, making a class under test internal this morning brought to light two relatively obscure properties of scoping and visibility in C# that you might not be aware of:

  1. Internal can be “less visible” than protected.
  2. Private isn’t always private.

Let me explain by showing the situation in which I found myself. As part of an open source project I’m working on at the moment to allow SQL-like querying of Autotask data through its API, I’ve been writing a set of tests on a class called “SqlQuery” in which I take a SQL statement and parse out the parts I’m interested in:
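The test code is shaped roughly like this (the query string and assertion are illustrative; the parts that matter for this post are the Target property and the nested class that inherits from the outer test class):

[TestClass]
public class SqlQueryTest
{
    protected SqlQuery Target { get; set; }

    [TestClass]
    public class Columns : SqlQueryTest
    {
        [TestInitialize]
        public void BeforeEachTest()
        {
            Target = new SqlQuery("SELECT * FROM Account");
        }

        [TestMethod]
        public void Returns_Star_When_Query_Is_Select_Star()
        {
            Assert.AreEqual("*", Target.Columns.First());
        }
    }
}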

Up until now the class under test, SqlQuery, has been public, but I realize that this is an abstraction that only matters in the actual lower layer assembly rather than at the GUI level, so I made it internal and added an InternalsVisibleTo to the properties of the assembly under test. With that in place, I downgraded the SqlQuery class to internal and was momentarily surprised by a compiler error of “Inconsistent accessibility: property type ‘AutotaskQueryService.SqlQuery’ is less accessible than property ‘AutotaskQueryServiceTest.SqlQueryTest.Target'”.


On its face, this seems crazy — “internal” is less accessible than “protected”? But when you think about it, this actually makes sense. “Internal” means “nobody outside of this assembly can see it” and protected means “nobody except for this class and its inheritors can see it.” So what happens if I create a third assembly and declare a class in it that inherits from SqlQueryTest? This class has no visibility to the assembly under test and its internals, but it would have visibility to Target. Hence the strange-seeming but quite correct compiler error. One way to get rid of this error is to make SqlQueryTest internal, and that actually compiled and all tests ran, but I don’t like that solution in the event that I want tests in that class and not just its nested children. I decided on another option: making Target private.

If you look at the code snippet above, are you now thinking “but that won’t compile!” After all “Columns” inherits from SqlQueryTest and uses Target and I’ve now just made Target private, so Columns will lose access to it. Well, no, as it turns out. The private scoping in a class means that only the things between the {} of the class can see it. Our nested class here happens to be one of those things. So the scoping trumps the hierarchy in this instance. This can easily be confirmed by changing Target to static and removing the inheritance relationship, which also compiles. The nested class, even when not deriving from the outer class, can access private static members of the outer class.

In the end, my solution here is simple. I make the Target private and move on. But I thought I’d take the opportunity to point out these interesting facets of C# that you probably don’t run across very often.


The Hard Switch from Walking to Driving

Have you ever listened to someone describe a process that they follow at work and thought “that’s completely insane!”? Maybe part of their build process involves manually editing sixty different files. Maybe their computer crashes every twenty minutes, so they only ever do anything for about fifteen minutes at a time. Or worse, maybe they use Rational ClearCase. A common element in situations where there’s an expression of disbelief when comparing modus operandi is that the person who calmly describes the absurdity is usually in a boiled-frog kind of situation. Often, they respond with, “yeah, I guess that isn’t normal.”

But just as often, a curious phenomenon ensues from there. The disbelieving, non-boiled person says, “well, you can easily fix that by better build/new computer/anything but Clear Case,” to which the boiled frog replies, “yeah… that’d be nice,” as if the two were fantasizing about winning the lottery and retiring to Costa Rica. In other words, the boiled frog is unable to conceive of a world where things aren’t nuts, except as a remote fantasy.

I believe there is a relatively simple reason for this apparent breaking of the spirit. Specifically, the bad situation causes them to think all alternative situations within practical reach are equally bad. Have you ever noticed the way during economic downturns people predict gloom lasting decades, and during economic boom cycles pundits write about how we’ve moved beyond–nay transcended–bad economic times? It’s the same kind of cognitive bias–assuming that what you’re witnessing must be the norm.

But the phenomenon runs deeper than simply assuming that one’s situation must be normal. It causes the people subject to a bad paradigm to assume that other paradigms share the bad one’s problems. To illustrate, imagine someone with a twelve mile commute to work. Assuming an average walking speed of three miles per hour, imagine that this person spends each day walking four hours to work and four hours home from work. When he explains his daily routine to you and you’ve had a moment to bug out your eyes and stammer for a second, you ask him why on earth he doesn’t drive or take a bus or…or something!

He ruefully replies that he already spends eight hours per day getting to and from work, so he’s not going to add learning how to operate a car or looking up a bus schedule to his already-busy life. Besides, if eight hours of winter walking are cold, just imagine how cold he’ll be if he spends those eight hours sitting still in a car. No, better just to go with what works now.

Absurd as it may seem, I’ve seen rationale like this from other developers, groups, etc. when it comes to tooling and processes. A proposed switch or improvement is rejected because of a fundamental failure to understand the problem being solved. The lesson to take away from this is to step outside of your cognitive biases as frequently as possible by remaining open to the idea of not just tweaks, but game changers. Allow yourself to understand and imagine completely different ways of doing things so that you’re not stuck walking in an age of motorized transport. And if you’re trying to sell a walking commuter on a new technology, remember that it might require a little bit of extra prodding, nudging, and explaining to break the trance caused by the natural cognitive bias. Whether breaking through your own or someone else’s, it’s worth it.


Test Readability: Best of All Worlds

When it comes to writing tests, I’ve been on sort of a mild, ongoing quest to increase readability. Generally speaking, I follow a pattern of setup, action, verification in all tests. I’ve seen this called other things: given-when-then, etc. But when describing the basic nature of unit tests (especially as compared to integration tests) to people, I explain it by saying “you set the stage, poke it, and see if what happens is what you thought would happen.” This rather inelegant description really captures the spirit of unit testing and why asserts per unit test probably ought to be capped at one, as opposed to the common sentiment among first-time test writers, often expressed by numbering the tests and having dozens of asserts intermixed with executing code:
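The kind of thing I mean looks roughly like this (every name here is an illustrative stand-in):

[TestMethod]
public void Test_All_The_Things()
{
    // 1 - construction
    var cut = new ClassUnderTest(new FakeDependency());
    Assert.IsNotNull(cut);

    // 2 - first behavior
    cut.DoSomething();
    Assert.IsTrue(cut.SomethingWasDone);

    // 3 - second behavior, and so on for dozens more asserts...
    cut.DoSomethingElse();
    Assert.AreEqual(42, cut.Result);
}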

I think that was actually the name of a test I saw once: Test_All_The_Things(). I don’t recall whether it included an excited cartoon guy. Point is, that’s sort of the natural desire of the unit testing initiate — big, monolithic tests that are really designed to be end-to-end integration kinds of things where they want to tell in one giant method whether or not everything’s okay. From there, a natural progression occurs toward readability and even requirements documentation.

In my own personal journey, I’ll pick up further along that path. For a long time, my test code was a monument to isolation. Each method in the test class would handle all of its own setup logic and there would be no common, shared state among the tests. You could pack up the class under test (CUT) and the test method, ship them to Pluto, and they would still work perfectly, assuming Pluto had the right version of the .NET runtime. For instance:
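Something like this, with every test owning its entire little world (ClassUnderTest and FakeDependency are hypothetical stand-ins):

[TestClass]
public class ClassUnderTestTest
{
    [TestMethod]
    public void Do_Something_Returns_True_When_Initialized()
    {
        var cut = new ClassUnderTest(new FakeDependency());

        Assert.IsTrue(cut.DoSomething());
    }

    [TestMethod]
    public void Do_Something_Returns_False_When_Not_Initialized()
    {
        var cut = new ClassUnderTest(null);

        Assert.IsFalse(cut.DoSomething());
    }
}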

There are opportunities for optimization though, and I took them. A long time back I read a blog post (I would link if I remember whose) that inspired me to change the structure a little. The test above looks fine, but what happens when you have 10 or 20 tests that verify behaviors of DoSomething() in different circumstances? You wind up with a region and a lot of tests that start with Do_Something. So, I optimized my layout:
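The optimized layout nests a test class per method under test, something along these lines (same hypothetical types as before):

[TestClass]
public class ClassUnderTestTest
{
    [TestClass]
    public class DoSomething
    {
        [TestMethod]
        public void Returns_True_When_Initialized()
        {
            var cut = new ClassUnderTest(new FakeDependency());

            Assert.IsTrue(cut.DoSomething());
        }

        [TestMethod]
        public void Returns_False_When_Not_Initialized()
        {
            var cut = new ClassUnderTest(null);

            Assert.IsFalse(cut.DoSomething());
        }
    }
}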

Now you get rid of regioning, which is a plus in my book, and you still have collapsible areas of the code on which you can focus. In addition, you no longer need to redundantly type the name of the code element that you’re exercising in each test method name. A final advantage is that similar tests are naturally organized together making it easier to, say, hunt down and blow away all tests if you remove a method. That’s all well and good, but it fit poorly with another practice that I liked, which was defining a single point of construction for a class under test:
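That practice looks something like this (BuildCut is the single point of construction; the rest is still illustrative):

[TestClass]
public class ClassUnderTestTest
{
    private static ClassUnderTest BuildCut()
    {
        // The only place a constructor call appears. Add a constructor parameter
        // during TDD and this is the single line that changes.
        return new ClassUnderTest(new FakeDependency());
    }

    [TestMethod]
    public void Do_Something_Returns_True_When_Initialized()
    {
        var cut = BuildCut();

        Assert.IsTrue(cut.DoSomething());
    }
}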

Now, if we decide to add a constructor parameter to our class as we’re doing TDD, it’s a simple change in one place. However, you’ll notice that I got rid of the nested test classes. The reason for that is there’s now a scoping issue — if I want all tests of this class to have access, I have to put it in the outer class, elevate its visibility, and access it by calling MyTestClass.BuildCut(). And for a while, I did that.

But more recently, I had been sold on making tests even more readable by having a simple property called Target that all of the test classes could use. I had always shied away from this because of seeing people who would do horrible, ghastly things in test class state in vain attempts to force the unit test runner to execute their tests sequentially so that some unholy Singleton somewhere would be appeased with blood sacrifice. I tossed the baby out with the bathwater — I was too hasty. Look how nicely this cleans up:
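Here is the Target-based version, sketched with the same hypothetical types:

[TestClass]
public class ClassUnderTestTest
{
    private ClassUnderTest Target { get; set; }

    [TestInitialize]
    public void BeforeEachTest()
    {
        Target = new ClassUnderTest(new FakeDependency());
    }

    [TestMethod]
    public void Do_Something_Returns_True_When_Initialized()
    {
        Assert.IsTrue(Target.DoSomething());
    }
}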

Instantiating the CUT, even when abstracted into a method, is really just noise. After doing this for a few days, I never looked back. You really could condense the first test down to a single line, provided everyone agrees on the convention that Target will return a minimally initialized instance of the CUT at the start of each test method. If you need access to constructor-injected dependencies, you can expose those as properties as well and manipulate them as needed.

But we’ve now lost all the nesting progress. Let me tell you, you can try, but things get weird when you try to define the test initialize method in the outer class. What I mean by “weird” is that I couldn’t get it to work and eventually abandoned trying in favor of my eventual solution:
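The eventual solution, in sketch form: the outer class owns the Target property, and each nested class inherits from it and does its own initialization.

[TestClass]
public class ClassUnderTestTest
{
    protected ClassUnderTest Target { get; set; }

    [TestClass]
    public class DoSomething : ClassUnderTestTest
    {
        [TestInitialize]
        public void BeforeEachTest()
        {
            Target = new ClassUnderTest(new FakeDependency());
        }

        [TestMethod]
        public void Returns_True_When_Initialized()
        {
            Assert.IsTrue(Target.DoSomething());
        }
    }
}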

So at the moment, that is my unit test writing approach in .NET. I have not yet incorporated that refinement into my Java work, so I may post later if that turns out to have substantial differences for any reason. This is by no means a one size fits all approach. I realize that there are as many different schemes for writing tests as test writers, but if you like some or all of the organization here, by all means, use the parts that you like in good health.

Cheers!
