DaedTech

Stories about Software


Incorporating MS Test unit tests without TFS

My impression (and somebody please correct me if I’m wrong) is that MS Test is really designed to operate in conjunction with Microsoft’s Team Foundation Server (TFS) and MS Build. That is, opting to use MS Test for unit testing when you’re using something else for version control and builds is sort of like purchasing one iProduct when the rest of your computer-related electronics are PC or Linux oriented: you can do what you like with it, provided you do enough tinkering, but in general, the experience is not optimal.

As such, I’m posting here the sum of the tinkering that I’ve been able to parlay into an effective build process. I am operating in an environment where the unit test framework, version control, and build technology are already in place, and mine is only to create a policy that makes them work together. So, feedback along the lines of “you should use NUnit” is appreciated, but only because I appreciate anyone taking the time to read the blog: it won’t actually be helpful or necessary in this circumstance. MS Test is neither my choice, nor do I particularly like it, but it gets the job done, and it isn’t going anywhere at this time.

So, onto the helpful part. Since I’m not using MS Build and I’m not using TFS, I’m more or less restricted to running the unit tests in two modes: through Visual Studio or from the command line using MSTest.exe. If there is a way to have a non-MS-Build tool drive the tests through Visual Studio’s IDE, I am unaware of it, and, if it did exist, I’d probably be somewhat skeptical of it (but then again, I’m a dyed-in-the-wool command line junkie, so I’m not exactly objective).

As such, I figured that the command line was the best way to go and looked up the command line options for MS Test. Of particular relevance to the process I’m laying out here are the testcontainer, category, and resultsfile switches. I also use the nologo switch, but that seems something of a given, since there’s really no reason for a headless build machine to be advertising for Microsoft.

Testcontainer allows specification of a test project DLL to use. Resultsfile allows specification of a file to which the results are dumped in XML format (so my advice is to append .xml to the filename). And the most interesting one, category, allows you to filter tests based on metadata defined in the attribute header of the test itself. In my instance, I’m using three possible categories to describe tests: proven, unit, and integration.

The default when you create a test in Visual Studio using, say, the code snippet “testc” (type “testc” and then hit tab) is the following:
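In sketch form, the generated test looks something like this (the method name is whatever you supply):

    // Note the bare [TestMethod] attribute: no owner or categories yet.
    [TestMethod]
    public void TestMethod1()
    {
        Assert.IsTrue(true);
    }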

Excusing the horrendous practice of testing whether true is true, you’ll notice that the attribute tags are empty. This is what we want, because this test has not yet been promoted to be included with the build. The first thing that I’ll do is add a tag to it for “Owner” because I believe that it’s good practice to sign unit tests, thus allowing everyone to see who owns a failing unit test and providing a contact point for investigating why the test is broken. This is done as follows:
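In sketch form (Owner simply takes a name as a string):

    // Signed, so that a failing test comes with an obvious contact point.
    [TestMethod, Owner("Somebody Else")]
    public void TestMethod1()
    {
        Assert.IsTrue(true);
    }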

I’ve signed this one as somebody else because I’m not putting my name on it. But, when you’re not kidding around or sandbagging someone for your badly written test, you probably want to include your actual name.

The next step is the important one in which you assign the test a category or multiple categories, as applicable. In my scenario, we can pick from “unit,” “integration,” and “proven.” “Unit” is assigned to tests that actually exercise only the class under test. “Integration” is assigned to tests that test the interaction between two or more classes. “Proven” means that you’re confident that if the test is broken, it’s because the SUT is broken and not just that the test is poorly written. So, I might have the following:
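In sketch form, with hypothetical method names (the attributes are the point here):

    // Proven and unit: exercises only the class under test, and trusted to fail
    // only when the SUT is actually broken.
    [TestMethod, Owner("Somebody Else"), TestCategory("Proven"), TestCategory("Unit")]
    public void Value_Is_Returned_Correctly()
    {
        // ... exercises a single class in isolation ...
    }

    // Proven and integration: exercises the interaction of two or more classes.
    [TestMethod, Owner("Somebody Else"), TestCategory("Proven"), TestCategory("Integration")]
    public void Two_Classes_Interact_Correctly()
    {
        // ... exercises collaborating classes ...
    }

    // Integration, but not proven (see below).
    [TestMethod, Owner("Somebody Else"), TestCategory("Integration")]
    public void Class_With_Global_State_Behaves()
    {
        // ... touches global state, so a failure may not mean the SUT is broken ...
    }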

Now, looking at this set of tests, you’ll notice a couple of proven tests, one integration and one unit, as well as a test that is missing the label “Proven.” With the last test, we leave off the label “Proven” because the class under test has global state and is thus going to be unpredictable and hard to test. Also, with that one, I’ve labeled it integration instead of unit because I consider anything referencing global state to be integration by definition. (As an aside, I would not personally introduce global, static state into a system, nor would I prefer to test classes in which it exists, but as anyone knows, not all of the code that we have to deal with reflects our design choices or is our own creation.)

Now, for the build process itself, I’ve created the following batch script:
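In sketch form (I’m assuming MSTest.exe is on the path; tweak the filter at the top to taste):

    @echo off

    rem Category filter applied to every test run ("and" logic, per the /category syntax).
    set FILTER="Unit&Proven"

    rem Run MS Test against every test project DLL in the current directory.
    for %%f in (*Test.dll) do (
        rem MS Test will not overwrite an existing results file, so clear it out first.
        if exist Results.xml del Results.xml
        MSTest.exe /nologo /testcontainer:%%f /category:%FILTER% /resultsfile:Results.xml
    )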

What this does is iterate through the current directory, looking for files that end in Test.dll. As such, it should either be modified or placed in the directory to which all unit test projects are deployed as part of the build. For each test project that it finds, it runs MS Test, applies the category filter defined at the top, and dumps the results in a file named Results.xml. In this case, it will run all tests categorized as both “Unit” and “Proven.” However, this can easily be modified by changing the filter parameter per the MSTest.exe specifications for the /category command line parameter (which supports and/or/not logic).

So, from here, incorporating the unit tests into the build will depend to some degree on the nature of your build technology, but it will probably be as simple as parsing the command output from the batch script, parsing the Results.xml file, or simply checking the exit code of the MS Test executable. Some tools may even know how to handle this out of the box.
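For example, since MSTest.exe exits with a non-zero code when a run fails, a build script can key off of that directly. A sketch, with a hypothetical test DLL name:

    MSTest.exe /nologo /testcontainer:MyProjectTest.dll /category:"Unit&Proven" /resultsfile:Results.xml
    rem A non-zero exit code means at least one test failed (or the run itself broke).
    if errorlevel 1 (
        echo Unit tests failed; breaking the build.
        exit /b 1
    )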

As I see it, this offers some nice perks. It is possible to allow unit tests to remain in the build even if they have not been perfected yet, and they need not be marked as “Inconclusive” or commented out. In addition, it is possible to have more nuanced build steps where, say, the unit tests are run daily by the build machine, but unproven tests only weekly. And in the event of some refactoring or changes, unit tests that are broken because of requirements changes can be “demoted” from the build until such time as they can be repaired.

I’m sure that some inventive souls can take this further and do even cooler things with it. As I refine this process, I may revisit it in subsequent posts as well.


Version Control Beyond Code

One of my favorite OSS tools is Subversion. I’ve used it a great deal professionally for managing source code, and it’s also the tool of choice for my academic collaborations between students not located on campus (which describes me, since my MS program is an online one). It seems a natural progression from CVS, and I haven’t really tried out Git yet, so I can’t comment as to whether or not I prefer that mode of development.

However, I have, over the last few years, taken to using Subversion to keep track of my documents and other personal computing items at home, and I strongly advocate this practice for anyone who doesn’t mind the setup overhead and sense of slight overkill. Before I describe the Subversion setup, I’ll describe my situation at home. I have several computers that I use for various activities. These include a personal desktop, a personal netbook, sometimes a company laptop, and a handful of playpen machines running various distros of Linux. I also have a computer that I’ve long since converted into a home server — an old P3 with 384 megs of RAM running Fedora. Not surprisingly, this functions as the Subversion server.

One of the annoyances of my pre-personal-subversion life was keeping files in sync. I saw no reason that I shouldn’t be able to start writing a document on my desktop and finish it on my laptop (and that conundrum applies to anyone with more than one PC, rather than being specific to a computer-hoarding techie like me). This was mitigated to some degree by setting up a server with common access, but it was still sort of clunky.

So, I decided to make use of subversion for keeping track of things. Here is a list of advantages that I perceive to this approach:

  • Concurrent edits are not really an issue
  • It creates a de facto backup scheme, since Subversion stores the files in its repository and requires them to exist on at least one additional machine for editing
  • Combined with the TortoiseSVN client for Windows, it allows you to see which folders/files have been edited since you last ‘saved’ (committed) changes to the repository
  • You can delete local copies of files and then get them back again by running an update — handy for when you want access to shared files without taking up local space. This beats the central storage model, particularly with a laptop, because you can work on a file away from your home network without missing a beat
  • You have a built-in history of your files and can revert to any previous version you like at any time. This is useful for backing up something like Quicken data files that change continuously. Rather than creating hundreds of duplicate files to log your progress over time, you worry about just one file and let Subversion handle the history.
  • You can easily set up permissions as to who has access to what without worrying about administering Windows DNS, workgroups, file shares, and other assorted IT minutiae.
  • Whether you were an SVN administrator originally or not, this gives you bona fide experience as one

On the other hand, there are some disadvantages, though I don’t consider them significant:

  • Requires riding the SVN admin learning curve.
  • Requires a server for the task (practically, anyway — you can always turn a PC into an SVN server with the file-based access mode, but I’m not a huge fan)
  • Can be overkill if you don’t have a lot of files and/or sharing going on

So, I would say that the disadvantages apply chiefly to those unfamiliar with SVN or without a real need for this kind of scheme. Once it’s actually in place, I’m hard pressed to think of a downside, and I think you’ll come to find it indispensable.

To set this up, you really only need a server machine and TortoiseSVN (Windows) or a Subversion client for Linux. I won’t go into the details of server setup with this post, but suffice it to say that you set up the server, install the clients, and you can be off and running. If there is some desire expressed in comments, or I get around to it, I can put up another post with a walkthrough of how to set up the server and/or the clients. Mine runs over the HTTP protocol, and I find this to be relatively robust compared to the file protocol and non-overkill compared to the secure, key-based protocol. (Since this is a local install and my wireless network is encrypted with WPA-PSK, I’m not really worried about anyone sniffing the transfers.)
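To give a flavor of day-to-day use, it boils down to a handful of client commands (the server name and file names here are hypothetical):

    # Pull down a working copy of the documents repository over HTTP.
    svn checkout http://homeserver/svn/documents
    cd documents

    # 'Save' local edits to the repository, and pick up edits committed elsewhere.
    svn commit -m "Updated the budget spreadsheet" Budget.xls
    svn update

    # Review a file's history and retrieve an older revision of it.
    svn log Budget.xls
    svn cat -r 5 Budget.xls > Budget-rev5.xls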


Adding CodeRush Templates

Today I’m going to describe one of the cool and slightly more advanced features of CodeRush in a little more detail. But first, a bit of background. One of the many things I found enjoyable about CodeRush is the templated shortcuts, à la Visual Studio code snippets but better. I found myself typing “tne-space” a lot to generate:

throw new Exception("");

with my caret placed in the quotes within the exception. However, I would dutifully go back and modify “Exception” so that I wasn’t throwing a generic exception and arousing the ire of best-practices adherents everywhere. That rained on my parade a bit, as it undercut the time savings.

I decided that I’d create specific templates for the exceptions that I commonly used, and I am going to document that process here in case anyone may find it helpful. This is a very simple template addition and probably a good foray into creating your own CodeRush templates.

The first thing to do is fire up Visual Studio and launch the CodeRush options, which, in the spirit of CodeRush, has a shortcut of its own: Ctrl-Alt-Shift-O. From here, you can select “Editor” from the main menu and then select “Templates.” This will bring up the templates sub-screen:
[Screenshot: the CodeRush “Templates” sub-screen]

From here, you can either search for “tne” or navigate to “Program Blocks -> Flow -> tne.” Then, create a duplicate of the template:

[Screenshot: creating a duplicate of the “tne” template]

Now, you will be prompted for a name. Call it “tnioe,” for throw new InvalidOperationException. (You can call it whatever you prefer to type in.) Next, in the “Expansion” frame, change “Exception” to “InvalidOperationException” and click “Apply.”

[Screenshot: the new “tnioe” template]

Now, when you exit the options window and type “tnioe-space” in the editor, you will see your new template expand to:
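    throw new InvalidOperationException("");

with the caret sitting inside the quotes, just as with the stock “tne” template.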

As a bonus, I’m going to describe something that I encountered and that was driving me nuts before I figured out how to fix it. CodeRush’s options screen remembers where you last were in its navigation tree for the next time you launch it. However, it is somehow possible to lose the view of the main tree entirely and get stuck in whatever sub-options page you were in, without being able to get back.

To fix this, go to DevExpress -> About and click the “Settings” button. This will open a folder on your drive containing settings XML files. Close the options window in Visual Studio and then open OptionsDialog.xml. Set the Option with the name attribute “SplitOpen” to “True,” and you’ll have your normal, sane options screen back.
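I’m going from memory on the exact file structure, so treat this as approximate, but the entry in question looks something like:

    <Option name="SplitOpen">True</Option>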


The Pleasure of Using CodeRush

For some months now, I’ve been using the express (read: free) version of CodeRush from DevExpress, and I’ve just recently upgraded to the paid version. After playing with it for only a few short weeks, I’ve come to find it indispensable, so I thought I’d log a post highlighting some of my favorite features. These are relevant to C# on .NET 3.5 and up in Visual Studio 2010.

  • The “Issues List” is one of the first features that you might notice, if you don’t turn it off, because it puts various squigglies under things that it flags as issues. These include dead code, overly complicated methods, expressions that could be converted to lambdas, etc. This facet of the utility pays off immediately in that it may make you aware of some coding practices that you hadn’t previously thought of as issues. For example, I had squigglies every time I had something along the lines of:

    Foo myFoo = new Foo();
    CodeRush informed me that I could (should) use var instead of the explicit Foo. This took some getting used to for me, since “var” looks suspiciously like weak typing and reminds me of Visual Basic 6.0 (which I do not count among my favorite languages) and its “dim.” However, I see the logic in it: declaring a Foo twice is needlessly verbose and redundant, so CodeRush has been helping me conform to what are, evidently, now best practices. (See the sketch after this list.)

  • Ctrl-3 to encapsulate selected text in a region. This is simple but cool. I’ve got a bunch of functions or properties that I want to put into a region and I can just hit Ctrl-3, type the region name, and it’s done. (We can discuss the merits or lack thereof regarding regioning somewhere else, but for the time being I’ll simply state that it’s the convention in which I’m using CodeRush).
  • “mv”-space. This guy creates a new private void method and highlights the method name. So, you type mv-space, type the name of the method, hit enter, type your parameters, hit enter, and you’re off. Also, if you don’t like the method as private, you can hit alt-up/alt-down and cycle the visibility of the method.
  • “tne”-space. (In general, {expr}-space is the CodeRush paradigm for executing ‘templates,’ which are similar but more sophisticated than code snippets in Visual Studio.) This creates the code:

    throw new Exception("");
    and it places your cursor inside the quotes. I think the default is to wrap the message in a String.Format call, but it seems to have learned that I’m more likely to throw exceptions with no variables than to include them in a formatted string.

  • NumberPad +/- highlights growing/shrinking amounts of text, respectively. So, if your mouse is on a string inside a method call that’s inside a method inside a region, continuously hitting + will highlight the string, then the method parameters, then the method call, then the whole method that you’re in, then the whole region, then the class. Minus will take you back down again. Very handy.
  • Tab to next reference. If you place the cursor over an identifier and start hitting tab, you can cycle through instances of the identifier in the code. Hitting escape will take you back to where you started. This is a huge improvement over Ctrl-K, R (find all references) in Visual Studio, which displays in a different window.
  • Smart Copy/Cut. One of the things I very much enjoy is being able to put my cursor at the beginning of a line, hit Ctrl-C, and then paste the whole line. There are a lot of other features of Smart Copy as well, but that one really sticks out for me as handy. Generally speaking, I try to avoid the mouse as much as possible, and this is a help.
  • Camel Case Navigation. Much like Ctrl-Arrow moves in chunks and Ctrl-Shift-Arrow highlights words in chunks, using the same while also holding down Alt navigates that way through CamelCaseWords.
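As for the var suggestion in the first bullet, the transformation in question is simply this before/after (Foo is a placeholder class):

    // Flagged by the Issues List: the type name appears redundantly on both sides.
    Foo myFoo = new Foo();

    // The suggested, implicitly typed equivalent.
    var myFoo = new Foo();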


App Development Strategy

At the moment, I own an Android phone and an iPod Touch. I do a lot of work on home automation and have begun to integrate both devices into what I do, envisioning them essentially as remote controls for operating the various automated appliances and articles in my house. Presently, this is done using the browsers on both devices, but I thought it couldn’t hurt to dip my toe into the waters of “app” development to better understand how to leverage those technologies. I don’t personally think that the notion of “apps” will continue to be as in vogue over the next decade, as we’ve done this dance before in the late ’90s with shrinkwrap software on the PC versus web applications, but I digress. If downloadable applications for the phone are funneled toward the phone browser the way desktop applications were funneled toward the desktop browser, that’s at least a few years out and will be heavily influenced by the current state of today’s apps.

Because my iPod was closer when I decided to play around with app development, I set up to write apps for it first. I was not, and am not, interested in publishing to Apple’s App Store, but since I have jailbroken the device, I just wanted to run my own code on it. I was surprised to find out that Apple has made no provisions whatsoever to allow app developers to use any sort of development environment outside of the Mac suite. That is to say, Apple’s official stance appears to be that if you are interested in developing apps for iDevices (even just for your own), you need to pay Apple $100 per year for membership, and, assuming you don’t own a Mac (I don’t — all of my computers run Windows or various flavors of Linux), you need to purchase one. For those keeping track at home, that means that a developer would need to pay at least $700 for the privilege of enriching the device experience.

Apparently, I’m not the only coder whose reaction to this was something short of sprinting to the nearest Apple Store waving my credit card. This site offers five workarounds for the limitation. Dragonfire offers a pay-to-play system that will set you back a much more reasonable $50. No doubt, there are others as well.

None of that really appealed to me, so I put my iPod back in its charging spot and pulled out my Android phone. The experience was a full 180. I googled “develop android apps” or some such thing, and the first site that came up was the one offering a free download of the Android SDK, asking me whether I wanted to use Windows, Linux, or Mac, and then providing detailed instructions as to how to set up the IDE and get started. So, I did all of the above and shortly had my first real, live app running on my Android phone.

Now, I have my own opinions about various technologies, companies, and practices, but the purpose of this blog is not to engage in the typical “fanboy” debates, proselytize, or anything of that nature. I am generally pretty agnostic in such discussions and willing to use whatever gets the job done. So, what I’m saying here isn’t a knock on Apple or its products, but rather an explanation of why I find this disparity in accommodating developers rather curious.

In the early 2000s, I was fresh out of college and struggling to find a job after the dot-com bubble burst. Anxious to keep my skills relevant, I decided to write some code on Windows XP using Visual C++ 6.0. Much to my chagrin as an unemployed kid, I learned it would cost me at least $300 that I didn’t have. My solution? I formatted half of my Windows hard drive and dual booted Linux, where I did all of my development with GCC and friends for free.

A lot of people went this route–so many, in fact, that Java took off, and Linux took a huge bite out of Windows’ domination on the server front. Nothing gets an operating platform moving faster than a lot of people creating software for it, and this ushered in a Linux golden age of sorts. Granted, Linux isn’t rivaling Windows for end-user desktops by any stretch, but it’s a lot more prevalent than it would have been had Microsoft not discouraged developers from writing software to run on its OSs.

Microsoft tacitly recognized its former stance as a mistake and introduced free Express versions of Visual Studio, allowed open source plugins to the same, and generally made developing Windows applications a pleasure rather than an expensive chore. As a result, C# has gone from being Microsoft’s cute imitation of Java to a bona fide and robust development option with substantial market share.

Fast forward to the present, and Apple seems to be imitating Microsoft’s blunder. And that’s what I find curious. To make matters more interesting, Microsoft did this when they had a monopoly on the desktop. Apple is doing it without even a majority on the smart phone. I understand that the strategy might be to boost Mac sales, but at what cost? It’s now established that alienating developers can give a toehold to otherwise irrelevant competitors. So, what happens in the long term when you alienate developers while already having very viable competitors–competitors who, I might add, are welcoming them with open arms?

I personally find it somewhat annoying that I can’t write apps for my iPod without buying hardware or software, but I don’t presume to think that this matters to anyone but me. And it doesn’t really even matter to me all that much. I’ll just write apps for Android and use my iPod’s browser in the future. And so will others. Android’s marketplace of apps will grow as quickly and robustly as the wild world allows, while Apple’s grows only as quickly as Apple’s current cachet allows. And history has shown that an operating platform is only as good as the software written for it. As the developers go, so goes the product. And Apple, in my opinion, would be wise to stop letting the developers go.
