DaedTech

Stories about Software


How to Disable Controls During Postback in ASP.NET

The other day, I was working on a page in a WebForms app where a postback, triggered by a button click, kicked off a bit of processing that would run for 10 to 20 seconds. While this is going on, it makes sense to disable the clicked button and, for that matter, other controls. Since the processing occurs on the server, the only way to achieve this effect is to disable the buttons and other controls on the client side, using JavaScript. The following is the series of steps leading up to getting this right. If you just want to see what worked, you can skip to the end.

The first thing I did was find a bit of jQuery that would disable things on the page. I put this into the user control where all of this was happening:
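A minimal sketch of the idea (the selector and exact disabling mechanism here are my assumptions; the function name disableOnPostback is the one referenced below):

<script type="text/javascript">
    // Sketch: disable every form control on the page.
    function disableOnPostback() {
        $('input, select, textarea').attr('disabled', 'disabled');
    }
</script>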


From there, I found that the way to distinguish between a server-side click handler (“OnClick” property) and a client-side one was to use OnClientClick, like so:
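A sketch of the markup (the button's ID and Text are assumptions; the handler names match those discussed below):

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback();" />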


Here we have some standard button boilerplate: the server-side event handler "SearchButton_Click" and the new OnClientClick that triggers JavaScript invocation of our jQuery implementation. I was pretty pumped about this and ready to have my search button disable all client-side controls until the server returned a response. I fired it up, clicked the search button, and absolutely nothing happened. Not only was nothing disabled, but there was no postback. After some googling around, someone recommended adding "return true;" after the disableOnPostback() call. Apparently any intervening client-side handler not returning true is assumed to return false, which stops the postback. So here is the new attempt:
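The same sketch, now returning true from the client-side handler:

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback(); return true;" />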


This had no discernible effect, and after some more searching, I found that the meat of the issue here is that disabling the button apparently also disables its ability to trigger a postback. We need to tell the button to fire the postback regardless, which apparently can be accomplished by setting the button's UseSubmitBehavior property to false:
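The sketch again, with the submit behavior property added:

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback(); return true;"
    UseSubmitBehavior="false" />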


I tried this and, finally, something different! The only problem was that it was a partial success. The disabling of controls finally worked, but the postback never happened. On a hunch, I took out the "return true;" and arrived at my final answer:
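The final version of the sketch, sans return statement:

<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click"
    OnClientClick="disableOnPostback();"
    UseSubmitBehavior="false" />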


This, combined with the jQuery at the top of the page, did the trick. So if you have a button that triggers a postback with a lengthy operation and you want to disable all controls until the operation completes and returns a response, this approach should serve. I am not yet an expert in under-the-covers WebForms particulars, so the theory is still a little hazy on my end, but hopefully this helps anyone in a similar position to me. Also, if you are an expert in this stuff, please feel free to weigh in on the theory at play here.

One final thing that I'll mention is that I did find something called Postback Ritalin during my searches. It seems to offer a control that takes care of this for you, though I didn't really want to introduce any third-party dependencies, so I didn't try it myself.



Discoverability Instead of Training and Manuals

Documentation and Training as Failures

Some time back, I was listening to someone explain the finer points of various code that he had written when he lamented the lack of documentation and training available for prospective users of this code. I thought to myself rather blithely and flippantly, “why – just write the code so that documenting it and training people to use it aren’t necessary.” I attributed this to being in a peevish mood or something, but reflecting on this later, I thought earnestly, “snarky Erik is actually right about this.”

Think about the way software development generally goes, especially if you're developing code to serve as a framework or utility for teammates and other developers. You start off with clean code and good intentions and you hammer away at making some functional software. Often things go well, but here and there you hit snags and you do a bit of duct-taping and work-around-ing (working around?), vowing to return later to straighten things out. Sometimes you do just that, but other times you realize that time and budget are finite resources for the effort and you make peace with shipping something that's not quite perfect.

But you don’t just ship something imperfect, because you’re diligent and responsible. What do you do instead? You go into those nasty areas of the code and you write inline comments, possibly containing apologies. You make sure that the XML/Java doc comments above the methods/classes are quite thorough as well and, for good measure, you probably even writeup some kind of manual or Word document, perhaps with a Visio diagram. Where the code is clear, you let it speak for itself and where it’s less than clear, you document.

We could put this another, perhaps more blunt, way: "we generally try to write clean code, and we document when we fail to do so." We might reasonably think of documentation as something that we do when our work and intentions fail to speak for themselves. This seems a bit iconoclastic in the face of conventional methods of communicating and processing information. I grew up as a programmer reading through the "man pages" to understand all manner of *nix command line utilities, system calls, etc. I learned the nitty-gritty of how concepts like semaphores, IPC, and threading worked in this fashion, so it seems a bit blasphemous, even to me, to accuse the authors of these APIs of failing to be clear or, really, failing in any way.

And yet, here we are. To be clear, I don't think that writing code whose clients need to read manuals is a failure of design or of correctness, or of a project or utility on the whole. But I do think it's a failure to write self-documenting code. And I think that for decades, we've had a culture in which this wasn't viewed as a failure of any kind. What are we chided to do when we get a new appliance or gadget? Well, read the manual. There's even an iconic acronym of exasperation for people who don't do so prior to asking questions: RTFM. In the interest of keeping the blog's PG rating, I won't say here what it stands for. In this culture, the engineering particulars and internal mechanisms of things have been viewed as unknowable mysteries, and the means by which communication is offered and understanding reached are large and often formidable manuals with dozens of pages of appendices, notes, and works cited. But is that really the best way to do things in all cases? Aren't there times when it might be a lot better to make something that screams how it should be used instead of wasting precious time?

[Image: life jacket instructions, courtesy of "AlMare" via Wikimedia Commons]

A Changing Culture

An interesting thing has happened in recent years, spurred on largely by Apple, initially, and now I’d say by the mobile computing movement in general, since Google and Microsoft have followed suit in their designs. Apple made it cool to toss the manual and assume that it is the responsibility of the maker of the thing, rather than the user, to ensure that understanding is reached. In the development world, champions of clean, self-documenting code have existed prior to whatever Apple might have been doing in the popular market, but the concept certainly got a large, public boost from Apple and its marketing cachet and those who subsequently got on board with the movement.

Look at the current state of applications being written. This fall, I had the privilege of attending That Conference, Dotnet Rocks Edition and seeing Lwin Maung speak about mobile concepts and the then soon-to-be-released Windows 8 and its app ecosystem. One of the themes of the talk was how apps informed you of how to use them in intuitive ways. You didn't read a manual to know that the news app had additional content — it told you by leaving the next story link halfway off the side of the screen, practically begging you to paw at it and scroll to the side. The idea of windows with lots of headers at the top, from which you drill hierarchically into the application, is gone, replaced instead by visual cues that are borderline impossible to screw up.

As this becomes popular in terms of user experience, I submit that it should also become popular with software development. If you find yourself writing some method with the signature DoStuff(bool, bool, int, bool, string, bool) you’ll probably (hopefully) think “man, I better document this because no one will ever figure it out.” But I ask you to take it a step further. If you have the time to document it, then why not spend that time fixing it instead of explaining yourself through documentation? Rename DoStuff to describe exactly what stuff it does, make the parameters significantly fewer, get rid of the Booleans, and make it something that’s pretty much impossible to misunderstand, like string.RemoveCharactersFromTheEnd(6). I bet you don’t need multiple appendices or even a manual to figure out what that does.
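To make that concrete, here's a minimal sketch of the discoverable version as a C# extension method (the implementation is mine and purely illustrative):

// A sketch of a signature that documents itself.
public static class StringExtensions
{
    // "Hello, world!".RemoveCharactersFromTheEnd(7) returns "Hello,"
    public static string RemoveCharactersFromTheEnd(this string input, int count)
    {
        return input.Substring(0, input.Length - count);
    }
}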

Please note that I'm not suggesting that we toss out all previous ways of doing things or stop documenting altogether. Documentation certainly has a time and a place, and not all products or APIs lend themselves to being completely discoverable. What I am suggesting is that we change our culture as developers from "RTFM!!!!" to "could I have made that clearer?" We've come a long way as the discipline of programming matures, and we have more and more stakeholders who are less and less technical depending on us for more and more things. Communication is increasingly important, and communication on clear, broadly understandable terms at that. You're no longer writing methods consumed by a handful of fellow geeks using your code to put together a BBS about how to program in COBOL. You're no longer writing code where each byte of memory and disk space is so precious that it's far better to be verbose in voluminous manuals than in method or variable names. You're (for the most part) no longer writing code where optimizing a few cycles trumps readability. You're writing code in a time when terms like "agile" and "maintainable" reign supreme, there's no real cost to self-describing code, and the broader populace in general expects its technology to be discoverable. It's a great time to be a developer — embrace it.


Scoping And Accessibility Quirks in C#

As I mentioned recently, I’ve taken to using an inheritance scheme in my approach to unit testing. Because of the mechanics of this scheme, making a class under test internal this morning brought to light two relatively obscure properties of scoping and visibility in C# that you might not be aware of:

  1. Internal can be “less visible” than protected.
  2. Private isn’t always private.

Let me explain by showing the situation in which I found myself. As part of an open source project I’m working on at the moment to allow SQL-like querying of Autotask data through its API, I’ve been writing a set of tests on a class called “SqlQuery” in which I take a SQL statement and parse out the parts I’m interested in:

[TestClass]
public class SqlQueryTest
{
    protected SqlQuery Target { get; set; }

    [TestInitialize]
    public void BeforeEachTest()
    {
        Target = new SqlQuery("SELECT id FROM Account");
    }

    [TestClass]
    public class Columns : SqlQueryTest
    {
        [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
        public void Contains_One_Element_For_One_Selected_Column()
        {
            Assert.AreEqual(1, Target.Columns.Count());
        }
...

Up until now the class under test, SqlQuery, has been public, but I realized that this is an abstraction that only matters in the actual lower-layer assembly rather than at the GUI level, so I decided to make it internal, adding an InternalsVisibleTo attribute to the properties of the assembly under test so that the test project could still see it. With that change in place, I was momentarily surprised by a compiler error: "Inconsistent accessibility: property type 'AutotaskQueryService.SqlQuery' is less accessible than property 'AutotaskQueryServiceTest.SqlQueryTest.Target'".
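For reference, the attribute in question looks something like this (the test assembly name is taken from the compiler error above; placing it in AssemblyInfo.cs is the usual convention):

// In the assembly under test (e.g. in AssemblyInfo.cs):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("AutotaskQueryServiceTest")]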

[Image: koala "wat" meme]

On its face, this seems crazy — “internal” is less accessible than “protected”? But when you think about it, this actually makes sense. “Internal” means “nobody outside of this assembly can see it” and protected means “nobody except for this class and its inheritors can see it.” So what happens if I create a third assembly and declare a class in it that inherits from SqlQueryTest? This class has no visibility to the assembly under test and its internals, but it would have visibility to Target. Hence the strange-seeming but quite correct compiler error. One way to get rid of this error is to make SqlQueryTest internal, and that actually compiled and all tests ran, but I don’t like that solution in the event that I want tests in that class and not just its nested children. I decided on another option: making Target private.
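To make the scenario concrete, here's a sketch of that hypothetical third assembly (the class name is invented):

// In a third assembly that references the test assembly
// but not the assembly under test:
public class RogueTest : SqlQueryTest
{
    public void Peek()
    {
        var query = Target; // Target's type, SqlQuery, is internal
                            // to an assembly this one can't see into.
    }
}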

If you look back at the test class snippet above, are you now thinking, "but that won't compile!"? After all, "Columns" inherits from SqlQueryTest and uses Target, and I've now just made Target private, so Columns should lose access to it. Well, no, as it turns out. Private scoping in a class means that only the things between the {} of the class can see it, and our nested class here happens to be one of those things. So the scoping trumps the hierarchy in this instance. This can easily be confirmed by changing Target to static and removing the inheritance relationship, which also compiles. The nested class, even when not deriving from the outer class, can access private static members of the outer class.
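Here's that second point distilled into a sketch (names invented for illustration):

public class Outer
{
    private static int _secret = 42;

    // Note: no inheritance relationship with Outer.
    public class Nested
    {
        public int ReadSecret()
        {
            return _secret; // Legal: Nested lives between Outer's braces.
        }
    }
}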

In the end, my solution here is simple. I make the Target private and move on. But I thought I’d take the opportunity to point out these interesting facets of C# that you probably don’t run across very often.


The Hard Switch from Walking to Driving

Have you ever listened to someone describe a process that they follow at work and thought, "that's completely insane!"? Maybe part of their build process involves manually editing sixty different files. Maybe their computer crashes every twenty minutes, so they only ever do anything for about fifteen minutes at a time. Or, worse, maybe they use Rational ClearCase. A common element in these exchanges of disbelief over modus operandi is that the person calmly describing the absurdity is usually in a boiled-frog kind of situation. Often, they respond with, "yeah, I guess that isn't normal."

But just as often, a curious phenomenon ensues from there. The disbelieving, non-boiled person says, "well, you can easily fix that with a better build/a new computer/anything but ClearCase," to which the boiled frog replies, "yeah… that'd be nice," as if the two were fantasizing about winning the lottery and retiring to Costa Rica. In other words, the boiled frog is unable to conceive of a world where things aren't nuts, except as a remote fantasy.

I believe there is a relatively simple reason for this apparent breaking of the spirit. Specifically, the bad situation causes them to think all alternative situations within practical reach are equally bad. Have you ever noticed the way during economic downturns people predict gloom lasting decades, and during economic boom cycles pundits write about how we’ve moved beyond–nay transcended–bad economic times? It’s the same kind of cognitive bias–assuming that what you’re witnessing must be the norm.

[Image: Model T tractor]

But the phenomenon runs deeper than simply assuming that one's situation must be normal. It causes the people subject to a bad paradigm to assume that other paradigms share the bad one's problems. To illustrate, imagine someone with a twelve mile commute to work. Assuming an average walking speed of three miles per hour, imagine that this person spends each day walking four hours to work and four hours home from work. When he explains his daily routine to you and you've had a moment to bug out your eyes and stammer for a second, you ask him why on earth he doesn't drive or take a bus or…or something!

He ruefully replies that he already spends eight hours per day getting to and from work, so he’s not going to add learning how to operate a car or looking up a bus schedule to his already-busy life. Besides, if eight hours of winter walking are cold, just imagine how cold he’ll be if he spends those eight hours sitting still in a car. No, better just to go with what works now.

Absurd as it may seem, I've seen rationale like this from other developers, groups, etc. when it comes to tooling and processes. A proposed switch or improvement is rejected because of a fundamental failure to understand the problem being solved. The lesson to take away from this is to step outside of your cognitive biases as frequently as possible by remaining open to the idea of not just tweaks, but game changers. Allow yourself to understand and imagine completely different ways of doing things so that you're not stuck walking in an age of motorized transport. And if you're trying to sell a walking commuter on a new technology, remember that it might require a little bit of extra prodding, nudging, and explaining to break the trance caused by the natural cognitive bias. Whether breaking through your own or someone else's, it's worth it.


Test Readability: Best of All Worlds

When it comes to writing tests, I've been on sort of a mild, ongoing quest to increase readability. Generally speaking, I follow a pattern of setup, action, verification in all tests. I've seen this called other things: given-when-then, etc. But when describing the basic nature of unit tests (especially as compared to integration tests) to people, I explain it by saying, "you set the stage, poke it, and see if what happens is what you thought would happen." This rather inelegant description really captures the spirit of unit testing and why asserts per unit test probably ought to be capped at one, as opposed to the common sentiment among first-time test writers, often expressed by numbering the tests and intermixing dozens of asserts with executing code:
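Something in this spirit — an invented illustration, not real code from anywhere:

[TestMethod]
public void Test_All_The_Things()
{
     var classUnderTest = new ClassUnderTest();

     Assert.IsTrue(classUnderTest.DoSomething());   //Test 1
     classUnderTest.Really = false;
     Assert.IsFalse(classUnderTest.DoSomething());  //Test 2
     // ...and dozens more asserts intermixed with executing code...
}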

I think that was actually the name of a test I saw once: Test_All_The_Things(). I don’t recall whether it included an excited cartoon guy. Point is, that’s sort of the natural desire of the unit testing initiate — big, monolithic tests that are really designed to be end-to-end integration kinds of things where they want to tell in one giant method whether or not everything’s okay. From there, a natural progression occurs toward readability and even requirements documentation.

In my own personal journey, I'll pick up further along that path. For a long time, my test code was a monument to isolation. Each method in the test class would handle all of its own setup logic, and there would be no common, shared state among the tests. You could pack up the class under test (CUT) and the test method, ship them to Pluto, and they would still work perfectly, assuming Pluto had the right version of the .NET runtime. For instance:

[TestClass]
public class MyTestClass
{
     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Do_Something_Returns_True()
     {
          var classUnderTest = new ClassUnderTest(); //Setup

          bool actionResult = classUnderTest.DoSomething(); //Poke

          Assert.IsTrue(actionResult); //Verify
     }
}

There are opportunities for optimization, though, and I took them. A long time back I read a blog post (I would link to it if I remembered whose it was) that inspired me to change the structure a little. The test above looks fine, but what happens when you have 10 or 20 tests that verify behaviors of DoSomething() in different circumstances? You wind up with a region and a lot of tests whose names start with Do_Something. So I optimized my layout:

[TestClass]
public class MyTestClass
{
     [TestClass]
     public class DoSomething
     {
          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_True()
          {
               var classUnderTest = new ClassUnderTest(); //Setup

               bool actionResult = classUnderTest.DoSomething(); //Poke

               Assert.IsTrue(actionResult); //Verify
          }

          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_False_When_Really_Is_False()
          {
               var classUnderTest = new ClassUnderTest() { Really = false }; //Setup

               bool actionResult = classUnderTest.DoSomething(); //Poke

               Assert.IsFalse(actionResult); //Verify
          }
     }
}

Now you get rid of regioning, which is a plus in my book, and you still have collapsible areas of the code on which you can focus. In addition, you no longer need to redundantly type the name of the code element that you're exercising in each test method name. A final advantage is that similar tests are naturally organized together, making it easier to, say, hunt down and blow away all of a method's tests if you remove that method. That's all well and good, but it fit poorly with another practice that I liked, which was defining a single point of construction for a class under test:

[TestClass]
public class MyTestClass
{
     private ClassUnderTest BuildCut(bool really = false)
     {
          return new ClassUnderTest() { Really = really };
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_True()
     {
          var classUnderTest = BuildCut(); //Setup

          bool actionResult = classUnderTest.DoSomething(); //Poke

          Assert.IsTrue(actionResult); //Verify
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_False_When_Really_Is_False()
     {
          var classUnderTest = BuildCut(false); //Setup

          bool actionResult = classUnderTest.DoSomething(); //Poke

          Assert.IsFalse(actionResult); //Verify
     }
}

Now, if we decide to add a constructor parameter to our class as we're doing TDD, it's a simple change in one place. However, you'll notice that I got rid of the nested test classes. The reason is that there's now a scoping issue — if I want all tests of this class to have access to the builder, I have to put it in the outer class, elevate its visibility, and access it by calling MyTestClass.BuildCut(). And for a while, I did that.

But more recently, I was sold on making tests even more readable by having a simple property called Target that all of the test classes could use. I had always shied away from this because of seeing people do horrible, ghastly things with test class state in vain attempts to force the unit test runner to execute their tests sequentially so that some unholy Singleton somewhere would be appeased with blood sacrifice. I had tossed the baby out with the bathwater — I was too hasty. Look how nicely this cleans up:

[TestClass]
public class MyTestClass
{
     private ClassUnderTest Target { get; set; }

     [TestInitialize]
     public void BeforeEachTest()
     {
          Target = new ClassUnderTest();
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_True()
     {
          //Setup is no longer necessary!

          bool actionResult = Target.DoSomething(); //Poke

          Assert.IsTrue(actionResult); //Verify
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_False_When_Really_Is_False()
     {
          Target.Really = false; //Setup

          bool actionResult = Target.DoSomething(); //Poke

          Assert.IsFalse(actionResult); //Verify
     }
}

Instantiating the CUT, even when abstracted into a method, is really just noise. After doing this for a few days, I never looked back. You really could condense the first test down to a single line, provided everyone agrees on the convention that Target will return a minimally initialized instance of the CUT at the start of each test method. If you need access to constructor-injected dependencies, you can expose those as properties as well and manipulate them as needed.
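For instance, here's a sketch of that last idea using Moq-style mocks (IService and the constructor parameter are invented for illustration):

[TestClass]
public class MyTestClass
{
     protected Mock<IService> Service { get; set; }

     protected ClassUnderTest Target { get; set; }

     [TestInitialize]
     public void BeforeEachTest()
     {
          Service = new Mock<IService>();
          Target = new ClassUnderTest(Service.Object);
     }
}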

But we’ve now lost all the nesting progress. Let me tell you, you can try, but things get weird when you try to define the test initialize method in the outer class. What I mean by “weird” is that I couldn’t get it to work and eventually abandoned trying in favor of my eventual solution:

[TestClass]
public class MyTestClass
{
     protected ClassUnderTest Target { get; set; }

     [TestInitialize]
     public void BeforeEachTest()
     {
          Target = new ClassUnderTest();
     }

     [TestClass]
     public class DoSomething : MyTestClass
     {
          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_True()
          {
               //Setup is no longer necessary!

               bool actionResult = Target.DoSomething(); //Poke

               Assert.IsTrue(actionResult); //Verify
          }

          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_False_When_Really_Is_False()
          {
               Target.Really = false; //Setup

               bool actionResult = Target.DoSomething(); //Poke

               Assert.IsFalse(actionResult); //Verify
          }
     }
}

So at the moment, that is my unit test writing approach in .NET. I have not yet incorporated this refinement into my Java work, so I may post later if that turns out to have substantial differences for any reason. This is by no means a one-size-fits-all approach. I realize that there are as many different schemes for writing tests as there are test writers, but if you like some or all of the organization here, by all means, use the parts that you like in good health.

Cheers!