Stories about Software


Tribal Knowledge

In my last post, I alluded briefly to the concept of “tribal knowledge” when developing software. I’ve heard this term defined in various contexts, but for the sake of discussion here, I’m going to define this as knowledge about how to accomplish a task that is not self-evident or necessarily intuitive. So, for instance, let’s say that you have a front door with a deadbolt lock. If you hand someone else the key and they are able to unlock your door, the process of unlocking the door would be general knowledge. If, however, your door is “tricky” and they have to jiggle the key twice and then turn it to the left before turning it back right, these extra steps are “tribal knowledge.” There is no reasonable way to know to do this except by someone telling you to do it.

Today, I’m going to argue that eliminating as much tribal knowledge as possible from a software development process is not just desirable, but critically important.


Oftentimes when you join a new team, you’ll quickly become aware of who the “heroes” or “go-to people” are. They might well have interviewed you, and, if not, you’ll quickly be referred to or trained by them. This creates a kind of professor/pupil feeling, conferring a certain status on our department hero. As such, in the natural realpolitik of group dynamics, being the keeper of the tribal knowledge is a good thing.

As a new team member, the opposite tends to be true and your lack of tribal knowledge is a bad thing. Perhaps you go to check in your first bit of code and it reads something like this:
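The code sample that originally accompanied this paragraph didn’t survive, so here is a hedged reconstruction of the kind of checkin in question. The MakeABar/MakeABaz names come from the code review that follows; the hidden ordering rule is an assumption for illustration.

```java
// Hypothetical reconstruction -- the real snippet is lost. The hidden
// dependency between MakeABaz() and MakeABar() is assumed for illustration.
class Widget {
    private boolean bazMade;
    private boolean barMade;

    // Nothing in this interface hints that MakeABaz() must be called first.
    public void MakeABaz() { bazMade = true; }
    public void MakeABar() { barMade = bazMade; }  // silently depends on MakeABaz()
    public boolean IsReady() { return barMade && bazMade; }
}

class NewGuyCheckin {
    public static void main(String[] args) {
        Widget widget = new Widget();
        widget.MakeABar();  // looks perfectly reasonable...
        widget.MakeABaz();  // ...but the tribal rule says Baz comes first
        System.out.println(widget.IsReady());  // prints false, with no hint why
    }
}
```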

You ask for a code review, and a department hero takes one look at your code and freaks out at you. “You can never call MakeABar before MakeABaz!!! If you do that, the application will crash, the computer will probably turn off, and you might just flood the Eastern Seaboard!”

Duly alarmed, you make a note of this and underline it, vowing never to create Bars before Bazes. You’ve been humbled, and you don’t know why. Thank goodness the Keeper of the Tribal Knowledge was there to prevent a disaster. Maybe someday you’ll be the Keeper of the Tribal Knowledge.

The Problem

Forgetting any notions of politics, seniority, or the tendency for newer or entry level people to make mistakes, the thing that should jump out at you is, “How was the new guy supposed to know that?” With the information available, the answer is that he wasn’t. Of course, it’s possible that all of those methods are heavily commented or that some special documentation exists, but nevertheless, there is nothing intuitive about this interface or the temporal coupling that apparently comes with it.

In situations like the one I’ve described here, the learning curve is steep. And, what’s more, the pride of having the tribal knowledge reinforces the steep learning curve as a good thing. After all, that tribal knowledge is what separates the wheat from the chaff, and it forces tenure and seniority to matter as much as technical acumen and problem-solving ability. It also ties knowledge of the problem domain to knowledge of the specific solution, meaning that knowing all of the weird quirks of the software is often conflated with understanding the business domain on which the software operates.

This is a problem for several reasons, by my way of thinking:

  1. A steep learning curve is not a good thing. Adding people to the project becomes difficult and the additions are more likely to break things.
  2. The fact that only a few chosen Keepers of the Tribal Knowledge understand how things work means that their absence would be devastating, should they opt to leave or be out of the office.
  3. The need to know an endless slew of tips and tricks in order to work on a code base means that the code base is unintuitive and difficult to maintain. Things will degenerate as time goes on, even with heroes and tribal knowledge.
  4. When any question or idea raised by somebody newer to the project can be countered with “you just don’t know about all the factors,” new ideas tend to get short shrift, and the cycle of special tribal knowledge is reinforced that much further.
  5. Essentially, people are being rewarded for creating code that is difficult to understand and maintain.
  6. More philosophically, this tends to create a group dynamic where knowledge hoarding is encouraged and cooperation is minimized.

What’s the Alternative?

What I’m proposing instead of the tribal knowledge system and its keepers is a software development group dynamic where you add a customer or stakeholder. Naturally, you have your user base and any assorted marketing or PM types, but the user you add is the “newbie.” You develop classes, controls, methods, routines, etc., all the while telling yourself that someone with limited knowledge of the technologies involved should be able to piece together how to use it. If someone like that wouldn’t be able to use it, you’re doing something wrong.

After all, for the most part, we don’t demand that our end users go through a convoluted sequence of steps to do something. It is our job and our life’s work to automate and simplify for our user base. So, why should your fellow programmers–also your users in a very real sense–have to deal with convoluted processes? Design your APIs and class interfaces with the same critical eye for ease of use that you do the end product for your end user.

One good way to do this is to use people new to the group as ‘testers.’ They haven’t had time to learn all of the quirks and warts of the software that you’ve lived with for months or years. So ask them to code up a simple addition to it and see where they get snagged and/or ask for help. Then, treat this like QC. When you identify something unintuitive to them, refactor until it is intuitive. Mind you, I’m not suggesting that you necessarily take design suggestions from them at that point any more than you would take them from a customer who has encountered sluggishness in the software and has ‘ideas’ for improvement. But you view their struggles as very real feedback and seek to improve.

Other helpful ways to combat the Keepers of the Tribal Knowledge Culture are as follows:

  1. Have a specific part of code reviews or even separate code reviews that focus on the usability of the public interfaces of code that is turned in.
  2. Avoid global state. Global state is probably the leading cause of weird work-arounds, temporal coupling, and general situations where you have to memorize a series of unintuitive rules to get things done.
  3. Unit tests! Forcing developers to use their own API/interface is a great way to prevent convoluted APIs and interfaces.
  4. Have a consistent design philosophy and some (unobtrusive) programming conventions.
  5. Enforce a pattern of using declarative and descriptive names for things, even if they are long or verbose. Glancing at a method, GetCustomerInvoicesFromDatabase() is a lot more informative than GtCInvDb(). Saving a few bytes with short member names hasn’t been helpful for twenty years.
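Item 3 above bears a quick illustration. Even a plain, framework-free test forces the author to consume the class exactly as a client would; the InvoiceFormatter class here is hypothetical.

```java
// Hypothetical class under test -- the point is only that a test is a client.
class InvoiceFormatter {
    private final String customerName;

    public InvoiceFormatter(String customerName) {
        this.customerName = customerName;
    }

    public String Header() {
        return "Invoice for " + customerName;
    }
}

class InvoiceFormatterTest {
    public static void main(String[] args) {
        // If setting up your own class for a test takes six calls in a magic
        // order, you'll feel the pain before a new teammate has to.
        InvoiceFormatter formatter = new InvoiceFormatter("Acme");
        System.out.println(formatter.Header().equals("Invoice for Acme"));  // true
    }
}
```

The test doubles as documentation of the intended usage, which is exactly the knowledge that would otherwise live in somebody’s head.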


Name Smells

I would imagine that most developers reading have heard of code smells. I’ve also seen various references to other concepts such as design smells (which are probably similar to code smells) or process smells. Generally, when you read these, you chuckle or nod in agreement, thinking that you’ve seen them before. For those not familiar with these concepts, the idea is there are certain characteristics of a code base that tend to indicate deeper problems. That is, the indicator may or may not be a problem in and of itself, but its presence usually correlates with code that is difficult to maintain, understand, or extend.

Today, I’m going to propose a concept called “name smells.” I fully acknowledge that these could simply be referred to as a subset of code smells, but I think they deserve their own terminology due to specificity. What I’m referring to is names of methods or classes that raise my hackles in Pavlovian response to repeated unpleasant associations. With code smells, you might see a lot of obviously copied and pasted code and think, “Uh oh.” With name smells, you don’t even need to inspect the code — you can just browse through the directory or project listing (for class names) or the object graph (for method names).

These smells are specifically for object-oriented development environments and are probably best applied in Java or C#. So, without further ado:

1. Utils in a Class Name

As someone who buys in fully to the OOP concept when working in OOP languages, this one makes me shudder. Because I know that what I’ll see if I open the class in my IDE is a static class chock full of a free-floating hodgepodge of methods with divergent responsibilities. This class will probably be long when you first encounter it. After every development phase, it will probably be even longer. Eventually, it may turn into two or more static classes, all ending in Utils. It might even make its way into different namespaces or assemblies.

The underlying problem that this smell is an indicator of is functionality not associated with any object. In OOP, this is a no-no. The existence of this class generally means that there is a substantial amount of functionality in your application that hasn’t been properly considered. The ramifications of this are that a great deal of “tribal knowledge” is required to write code. Consider something like:
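The example code referenced here was lost; the sketch below is a reconstruction consistent with the surrounding description. Only CustomerUtils and CustomerContext come from the text; every other name and behavior is an assumption.

```java
// Reconstructed sketch: only CustomerUtils and CustomerContext appear in the
// surrounding text; the other names and the numbering scheme are assumptions.
class CustomerContext {
    private int nextNumber = 1001;
    public int GetNextCustomerNumber() { return nextNumber++; }
}

final class CustomerUtils {
    private static final CustomerContext context = new CustomerContext();
    public static CustomerContext GetCustomerContext() { return context; }
}

final class ConversionUtils {
    // The second utils class you somehow have to know about.
    public static String CustomerNumberToString(int number) {
        return String.format("C-%05d", number);
    }
}

class Customer {
    private String customerNumber;
    public void SetCustomerNumber(String value) { customerNumber = value; }
    public String GetCustomerNumber() { return customerNumber; }
}

class TribalKnowledgeDemo {
    public static void main(String[] args) {
        // Two unrelated static classes and a Law of Demeter violation, just
        // to set one property on Customer:
        Customer customer = new Customer();
        customer.SetCustomerNumber(ConversionUtils.CustomerNumberToString(
                CustomerUtils.GetCustomerContext().GetNextCustomerNumber()));
        System.out.println(customer.GetCustomerNumber());
    }
}
```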

Now, let’s say I’m new to this project, and I need to create a new kind of customer. Given the way the code there looks, it’s a safe bet that somebody is going to tell me not to inherit from Customer. So, I create NewCustomer, and I need to figure out how to set the customer number property. Without inspecting the source code, how on Earth do I do this?

I need to know of the existence of CustomerUtils and that it has some static method that gets me a customer number (via a Law of Demeter violation, no less). Now, that returns an int, but I need a string. So I also have to know that some other utils class is responsible for converting that int to the string that I want. Realistically, I won’t know this because it makes no sense. I have to go to one of the team members who wrote this stuff in the first place and pick their brains for tribal knowledge.

This is an extreme example perhaps, but it gets the point across. If there are static utils classes floating around, you have to know about the static classes and what their methods are to do things. If these static utils methods were pulled out of static purgatory and put into some instance object called “CustomerNumberGenerator,” that would make more sense to me. I’d be looking through the classes in the code base, wondering how to get a customer number, and I’d see a class that practically says, “ooh, ooh–instantiate me!” In the static utils world, I have to say, “CustomerUtils. Maybe if I dig through that I’ll find something. No, nothing… oh, wait. One of these methods returns something called a CustomerContext, maybe I should look at that.”

The irony is that Utils classes are supposed to be handy, but they bloat to such a degree that everyone will avoid digging into the code bowels if at all possible.

2. Helper in a Class Name

This is similar to the previous one, but with an important distinction. In the event that Helper is a static class, see the previous section, because a Utils class by any other name smells just as bad. The distinction is that this is often an instance object, and I think of this as a different smell. It seems to indicate that the previous object was not up to its task and it needs ‘help.’ The class Tuna just isn’t that good on its own, so along comes instance class “TunaHelper” to make the original class more delicious.

I contend that the existence of TunaHelper or any other Helper is a name smell. Why do your classes need help? Being philosophical for a moment, we might say that all classes need help in an application unless the application is so tiny as to fit reasonably within a single class. That is, if you’re modeling an Engine, you could theoretically call the Car that contains it “EngineHelper.” If that’s what’s happening in the helper classes, the good news is that the smell is superficial and indicates only that the designer is bad at labeling things.

But if bad naming isn’t the issue, then you have a bona fide smell. What you probably have is poor class cohesion in your object model, meaning that things aren’t grouped in such a way that related functionality occurs together. If I have my Engine instance, and then I also have an “EngineHelper” instance, what we’re doing is essentially spreading the functionality of Engine across two different classes. If you have Foo and FooHelper in general, it’s pretty likely that anytime Foo needs to change, FooHelper also needs to change and vice versa. You have tight coupling resulting from your low degree of cohesion.
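To make that coupling concrete, here is a minimal sketch using the Engine/EngineHelper pairing from above; the rpm field and the redline rule are assumptions for illustration.

```java
// Minimal sketch of the Engine/EngineHelper split described above; the rpm
// field and the redline threshold are assumptions for illustration.
class Engine {
    double rpm;  // left exposed precisely so the "helper" can poke at it

    public void ThrottleUp() { rpm += 500; }
}

class EngineHelper {
    // This is really Engine behavior living in the wrong class: it cannot do
    // its job without reaching into Engine's internals, so the two classes
    // must always change together.
    public static boolean IsRedlining(Engine engine) {
        return engine.rpm > 6000;
    }
}
```

Folding IsRedlining() into Engine itself removes the coupling and leaves one cohesive class.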

The code could probably benefit from merging Foo and FooHelper and, if the result is too large, critically examining how Foo might be broken down into more manageable components. But, generally speaking, anytime I see a FooHelper, be it an EngineHelper, a TunaHelper, or any other Helper, my immediate instinctive reaction is to think that the Helper descriptor is probably a piece of bitter irony and to cringe at the thought of opening the Helper and the Helpee to see what sort of ad-hoc maze of code exists within.

3. Manager in a Class Name

This has some degree of similarity to the Helper, though I’d say it’s more specific and that it indicates a subtly different problem. Again, this depends to some degree on the conventions of those doing the naming, but usually when I see “manager” it has a subtly different implication than “helper.” Many of the “managers” I’ve seen “manage” their charge by knowing an inappropriate amount of detail about its inner workings.

The result is fairly similar in that we get tight coupling, but whereas the helper probably exists to provide some different, if dubious, functionality, the manager usually exists as spillover from a class that is growing like a late 1990s baseball slugger. At around line 2000, the class author says, “Man, this is too big for a single class. I’ll refactor it into two classes: Foo and FooManager.” Then some particular division of labor exists where the manager is given various aggregation and collection responsibilities of the original class while the managee is given the leftovers. But since this is really a single class in the eyes of the creator, the now separated classes know all about the internal workings of one another.

The reason I say that this is a name smell is that you generally see the name “Manager” in a class as part of an effort to deodorize a code smell like “giant method” or “giant class.” The result is that you’re just trading one code smell for another–shuffling the deck chairs on the Titanic as it sinks, without stopping to address the root problem.

4. Initialize() Method

This is a name smell in a method. If I see this as a class method–especially a private class method–I know that I’m about to see something ugly. Generally, this method comes into existence when someone looks at their class constructor, realizes that it has grown to be 50 lines long, and thinks that’s ugly. Abstraction to the rescue! We’ll factor it out to a different method, bury it at the bottom of the file with the privates, and now our constructor is nice and clean again.

Except that it’s not. At least with the cluttered constructor the badness was advertised right up front instead of being tucked away somewhere. There’s no reason for constructors to be doing all sorts of stuff. That’s a code smell. Usually, it means that on instantiation, your object is designing, building, and installing the kitchen sink. It’s also a strong indicator of a God Object.

Objects with injected dependencies or simple initialization logic do not need an abstracted “Initialize()” method. So I maintain that if you see one in a clean constructor, things are about to get ugly:
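The example that originally followed is gone; this sketch (names and details hypothetical) shows the pattern in question: a deceptively clean constructor delegating to a kitchen-sink method buried at the bottom of the file.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a clean-looking constructor whose fifty lines have
// simply been relocated to a private Initialize() at the bottom of the class.
class OrderProcessor {
    private List<String> cache;
    private boolean connected;
    private boolean ready;

    public OrderProcessor() {
        Initialize();  // looks tidy -- the mess is just hidden
    }

    public boolean IsReady() { return ready; }

    // Buried with the privates: collection building, connection setup, and
    // state flipping that arguably shouldn't happen at construction at all.
    private void Initialize() {
        cache = new ArrayList<>();
        cache.add("preloaded-order");
        connected = true;  // stands in for real connection logic
        ready = connected && !cache.isEmpty();
    }
}
```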

5. Instance in a Method or Property Name

This is often a property instead of a method, but this name smell correlates heavily with the smell of singletons in your code. I might make a post in the future on singletons, so I won’t do it here. But I also might not, since the singleton issue has been pretty much beaten to death, with most recognizing them as a code smell. Those who don’t recognize them as such have probably heard all of the arguments and have not found them persuasive.

6. “Type” Pretty Much Anywhere (that isn’t reflection)

If you’re an OOP purist, this is usually an indicator of missed opportunities for polymorphism. When you see the word “type” in enums, methods, classes, etc., it usually means that the author is simulating polymorphism with its procedural equivalent.

That is, the author has given a class some property called “type,” probably assigned an enum to back it, and subsequently demanded that clients of his code switch over its “type” to know how to behave:
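The snippet that illustrated this didn’t survive, so here is a representative sketch; Shape and its type code are hypothetical.

```java
// Hypothetical sketch of procedural "type" code: every client switches over
// ShapeType instead of Circle and Square subclasses computing their own area.
enum ShapeType { CIRCLE, SQUARE }

class Shape {
    ShapeType type;
    double dimension;  // radius or side length, depending on type

    Shape(ShapeType type, double dimension) {
        this.type = type;
        this.dimension = dimension;
    }
}

class AreaCalculator {
    // This switch tends to get duplicated in every client that handles shapes.
    public static double Area(Shape shape) {
        switch (shape.type) {
            case CIRCLE: return Math.PI * shape.dimension * shape.dimension;
            case SQUARE: return shape.dimension * shape.dimension;
            default: throw new IllegalArgumentException("unhandled type");
        }
    }
}
```

A polymorphic Area() on Circle and Square subclasses would delete the enum, the switch, and the duplication in one stroke.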

So, if you dig into a class and see that it has a “type” property or a method that mentions its “type,” and we’re not talking reflection, you’re probably going to see redundant case statements strewn about the application, representing the smell of duplication.

Final Thoughts

Everything here is, of course, subjective. The notion of “smells” in the first place describes something that is often an indicator of underlying problems. That means that you may well have a “Manager” class or an “Initialize” method that is perfectly reasonable and is not an indicator of any problems. In addition, this is based on my own experience of looking at a variety of code bases. Over the course of time, I have noticed a high correlation between these words and code that is very procedural and/or difficult to maintain, modify, or understand. Your mileage may vary.


New Ubuntu and Weird Old Sound Cards

In an earlier post, I described how one can go about installing a Belkin USB dongle and a very recent version of Ubuntu on a dinosaur PC. Tonight, I’m going to describe a similar thing, except that instead of a new piece of hardware, it’s a dinosaur that came with the PC itself. I must admit, this is more likely to be useful to me the next time I encounter this on an old PC than it will be to anyone else, but, hey, you never know.

First, a bit of back story here. As I’ve alluded in previous posts, one of my main interests these days is developing a prototype affordable home automation system with the prototype being my house. So far, I have a server driving lights throughout the house. This can be accessed by any PC on the local network and by my Android phone and iPod. The thing I’m working on now is a scheme for playing music in any room from anywhere in the house. Clearly the main goal of this is to be able to scare my girlfriend by spontaneously playing music when I’m in the basement and she’s in the bedroom, but I think it’s also nice to be able to pull out your phone and queue up some music in the kitchen for while you’re making dinner.

Anyway, one of the cogs in this plan of mine is reappropriating old computers to serve as nodes for playback (the goal being affordability, as I’m not going to buy some kind of $3000 receiver and wire speakers all over the house). I should also mention that I’m using Gmote server for the time being, until I write an Android wrapper app for my web interface. So, for right now, the task is getting these computers onto the network and ready to act as servers for “play song” instructions.

The computers I have for this task are sort of strewn around my basement. They’re machines that are so old that they were simply given to me by various people because I expressed a willingness to thoroughly wipe the hard drive and I’m the only person most people know that’s interested in computers that shipped with Windows 98 and weigh in with dazzling amounts of RAM in the 64-256 megabyte range. These are the recipients of the aforementioned Ubuntu and Belkin dongles.

So, I’ve got these puppies up and humming along with the OS and the wireless networking, and I was feeling pretty good about the prospect of playing music. I set up Gmote, and everything was ready, so I brought my girlfriend in for my triumphant demonstration of playing music through my bedroom’s flatscreen TV, controlled purely by my phone. I plugged in the audio, queued up Gmote, and everything worked perfectly–except that there was no sound. My phone found the old computer, my old computer mounted the home automation server’s music directory (itself mounted on an external drive), Gmote server kicked in… heck, there was even some psychedelic old school graphic that accompanied the song that was playing on the VGA output to the flat screen. But, naturally, no sound.

So, I got out my screwdriver and poked around the internals of the old computer. I reasoned that the sound card must be fried, so I pried open another computer, extracted its card, put everything back together, and voila! Sound was now functional (half a day later, thus taking a bit of the wind out of my grand unveiling’s sails). So, I pitched the sound card and moved on to getting the next PC functional. This PC had the same sound card, and I got the same result.

I smelled a rat, reasoning that it was unlikely that two largely unused sound cards were fried. After a bit of investigation, I discovered that the problem was that the card in question was an ESS ES1869, which is actually a plug-and-play ISA device and not a PCI device. I had reasoned that the previous card was fried when I didn’t see it in the BIOS’s PCI list. But there it was in the BIOS’s ISA list. Because, naturally, a sound card inside of the computer, like a printer or USB external hard drive, is a plug-and-play device.

But anyway, with that figured out, I was all set… kind of. It took me an hour or two of googling and experimenting to figure it out, but I got it. I had to experiment because this card was pretty dated even five years (or roughly 438 Ubuntu versions) ago, and so I wasn’t dealing with the same utilities or configuration files.

So anyway, with the grand lead up now complete, here is the nitty gritty.

When you boot into Ubuntu, it, like me or any other sane entity, has no idea what this thing is. You’ll see nothing in lspci about it, of course, and if you sudo apt-get install the toolset that gives you lspnp, you’ll see it as an unknown device. Don’t let that get you down, though; it was, at some time, known to someone. The first thing to do is use sudo and your favorite text editor to modify /etc/modules. You’re going to add “snd-es18xx” to that file and save it.

Next, add the following text to the configuration file “/etc/modprobe.d/alsa-base.conf”:
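The configuration text from the original post didn’t survive. The lines below are a representative sketch of snd-es18xx module options, not the author’s exact values; the port, IRQ, and DMA settings must match whatever your BIOS’s ISA listing reports for the card.

```
# Representative snd-es18xx options only -- substitute the resources your
# BIOS actually assigns to the card
options snd-es18xx isapnp=0 port=0x220 mpu_port=0x330 irq=5 dma1=1 dma2=0
```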

And that’s that. Now, if you reboot you should see a working audio driver and all that goes with it. You can see that it’s working by playing a sound, or by opening up volume control and seeing that you no longer are looking at “Dummy Output” but a realio-trulio sound output.

I confess that I don’t know all of the exact details of what that configuration setup means, but I know enough mundane computer architecture to know that you’re more or less instructing this device on how to handle interrupts like the PCI device it’s pretending to be and Ubuntu thinks it ought to be.

I’d also say that this is by no means a high-performance proposition. I’d probably be better served to get a regular sound card, but there’s just something strangely satisfying about getting a twelve-year-old computer with a stripped down OS to chug along as well as someone’s new, high powered rig that’s been loaded down with unnecessaries. I suppose that’s just the hopeless techie in me.


Divide And Conquer

What Programmers Want

In my career, I’ve participated in projects that have run the gamut of degrees of collaboration. That is to say, I’ve written plenty of software on which I served as architect, designer, implementor, tester, and maintainer and I’ve also worked on projects where I was a cog in a much larger effort. I have been a lead, and I have been a junior developer. But, throughout all of these experiences, I have been an observer, thinking through what worked and what didn’t, what made people happy and what made them frustrated, and what could be done to improve matters.

I have found that I tend to be at my happiest when working by myself or as a lead, and I tend to be least happy when I’m working as a cog. When I first came to this realization, I chalked it up to me not being a “team player” and endeavored to fix this about myself, seeking out opportunities to put myself in the situation and find enjoyment in it. However, I came to realize that I had been subtly incorrect in my self-assessment. It wasn’t that I had an issue with not calling the shots on a project, but rather that I had an issue when there was no decision, however small, that was left up to my discretion–that is to say, if I was working in an environment where a manager, technical lead, or someone else wanted to sign off on anything I did, whether it was as large as submitting a rewrite proposal or as small as what I named local variables inside of my methods or what format I used for code comments.

This was subsequently borne out by the experience of working in a collaborative environment where I was not in charge of major decisions, but I was in charge of and responsible for my particular module. I didn’t get to decide what my inputs or outputs would be, but I did get to decide internally how things would work and be designed. I was happy with this arrangement.

Speaking philosophically, I believe that this is important for anyone with a creative spirit and a sense of pride in the work that they do. It doesn’t matter whether the person doing the work is a seasoned professional or an intern. Being able to make decisions, if only small ones, creates a sense of ownership and pride and promotes creative expression. Being denied those decisions creates a sense of apathy about one’s work. In spite of their best intentions, people in the latter situation are going to be inclined to think “what do I care if this works — it wasn’t my idea.” I have experienced firsthand being asked to do something in ‘my’ code and thinking “this is going to fail, but hey, you’re in charge.” This sentiment only begins to occur when you’ve learned by experience that taking the initiative to fix or improve things will result in getting chewed out or being told not to do anything without asking. Someone in this situation will be motivated only by a desire not to get scolded by the micromanagers of the world.

Man Of The People: How to Satisfy Your Programmers

Over time, thinking on this subject has led me to some conclusions about optimal strategies for structuring teams or, at least, dividing up work. I firmly believe in giving each team member a sphere of responsibility or an area of ownership and letting them retain creative control over it until such time as it is proved detrimental. Within that person’s sphere of control, decisions are up to him. It’s certainly reasonable to make critical assessments of what he’s doing or request that he demonstrate the suitability of his approach, but I believe the best format here would be for reviewers or peers to present a perceived better way of doing things and allow the decision to rest with him.

To combat situations where failures may occur, it’s important to create a process where failures are exposed early on. I think the key here is a good division of labor on the development project which, not coincidentally, coincides with good software design practice. Break the effort into modules and appoint each person in charge of his or her own module. One of the first tasks is to define and prioritize the interfaces between the modules, the latter being done based on what is a prerequisite for what. So, if the group is building an expense reporting system, the person in charge of the data access layer should initially provide a small but functional interface so that the person writing the service layer can stub out mock objects and use them for development.
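The expense-reporting arrangement above might be sketched like this (all names hypothetical): the data-access owner publishes a small interface first, and the service-layer owner codes against a stub until the real implementation arrives.

```java
import java.util.List;

// The small, early-agreed seam between the two modules (names hypothetical).
interface ExpenseRepository {
    List<Double> GetExpensesFor(String employeeId);
}

// Stand-in the service-layer author uses until the real data access layer lands.
class StubExpenseRepository implements ExpenseRepository {
    public List<Double> GetExpensesFor(String employeeId) {
        return List.of(12.50, 99.99);  // canned data, good enough to build against
    }
}

class ExpenseReportService {
    private final ExpenseRepository repository;

    ExpenseReportService(ExpenseRepository repository) {
        this.repository = repository;
    }

    public double Total(String employeeId) {
        return repository.GetExpensesFor(employeeId).stream()
                         .mapToDouble(Double::doubleValue).sum();
    }
}
```

When the real repository appears, ExpenseReportService doesn’t change; integration amounts to swapping the constructor argument.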

With interfaces defined up front, the project can adopt a practice of frequent integration and thus failures can be detected early. If it is known from the beginning that failure to live up to one’s commitments to others is the only vehicle by which creative control might be stripped via intervention, people will be motivated early by a sense of pride. If they aren’t, then they probably don’t take much pride in their work and would likely not be bothered by the cession of creative control that will follow. Either that, or they aren’t yet ready and will have to wait until the next project to try again. But, in any event, failure to live up to deadlines happens early and will not adversely affect the project. The point is that people are trusted to make good decisions in their spheres of control until they demonstrate that they can’t be trusted. They are free to be creative and prove the validity of their ideas.

If the project is sufficiently partitioned and decoupled with a good architecture, there won’t be those late-in-the-game integration moments where it is realized that module A is completely unworkable with module B. And, as such, there is no need for the role of “cynical veteran that rejects everything proposed by others as unworkable or potentially disastrous.” I think everyone reading can probably recall working with this individual, as there is no shortage of them in the commercial programming world.

I believe there are excellent long term benefits to be had with this strategy for departments or groups. Implemented successfully, it lends itself well to good software design process, though I wouldn’t say it causes it, since a good architecture and integration process is actually a requirement for this to work. Over the long haul, it will allow people hired for their credentials to provide the organization with the benefits of their creativity and ingenuity. And, part and parcel with this is that it is likely to create productive and happy team members who feel a sense of responsibility for the work that they’ve created.

When All Doesn’t Go According to Plan

Of course, there is a potential down side to all of this — it allows for another role with which we’re all probably familiar: “anti-social guy who thinks he’s valuable because he’s a genius but in reality writes such incomprehensible code that no one will so much as glance at it for fear of getting a headache.” Or, perhaps less cynically, you might have the master of the oddball solution. Given creative control of their own projects, these archetypes might create a sustainability mess that they can (or maybe can’t) maintain on their own, but if someone else needs to maintain it, it’s all over.

The solution I propose to this is not to say, “Well, we’ll give people creative control unless we really don’t like the looks of what they’re doing.” That idea is a slippery slope toward micromanaging. After all, one person’s mess may be another’s readable perfection. I think the solution is (1) to make things as objective as possible; and (2) to allow retention of a choice no matter what. I believe this can be accomplished with a scheme wherein some (respected) third party is brought in to examine the code of different people on the team. The amount of time it takes for this person to understand the gist of what the code is doing is recorded. If this takes longer than a certain benchmark, then the author of the difficult-to-understand code is offered a choice – refactor toward easier understanding or spend extra time exhaustively documenting the design.

In this manner, we’re enforcing readability and maintainability while still offering everyone creative control. If you want to write weird code, that’s fine as long as it doesn’t affect anyone else and you go the extra mile to make it understandable. It’s your choice.

Being Flexible

We can, at this point, dream up all manner of different scenarios to try to poke holes in what’s here so far. However, I argue that we’ve established a good heuristic for handling them: objective arbitration and choice. Whatever we do, we find a way to allow people to retain creative control over their work and have it justified through objective standards. I hypothesize that the dividends in productivity and team buy-in will counteract, in spades, any difficulties that arise from the occasional lack of homogeneity and the learning curve during maintenance. And I think these difficulties will be minor anyway, since the nature of the process mandates a decoupled interface with clearly defined specs coming first.


Book Review: Effective C#

I just finished reading Effective C#: 50 Specific Ways to Improve Your C# by Bill Wagner and thought I’d cover it a bit here. On the whole, I thought this was an excellent book. It’s quite helpful and interesting, and it provides a nice counterpoint to many technical books in that you get immediate results and feedback from reading a given section. In other words, many technical books tend to be a journey that improves your situation holistically once you’ve finished, whereas this book is a collection of helpful items that are largely self-contained. You can flip to item 20, read about it, and experience a positive result.

So, here is my take, in itemized fashion:

The Good/Don’t Miss

  • Always provide ToString(): An excellent explanation of how the framework handles the ubiquitous conversion of objects to strings.
  • Understand the Relationships Among the Many Concepts of Equality: Wagner does a great job of breaking down the (confusing) way C# handles various notions of equality.
  • Prefer Query Syntax To Loops: Get on board with declarative syntax and small, scalable methods.
  • Understand the Attraction of Small Functions: Excellent ammunition for arguments with people who say that it’s good to have giant, C-style methods for the sake of efficiency — don’t miss this section!
  • Express Callbacks with Delegates: A nice explanation of the concept of delegates and why they’re useful.
  • Avoid Returning References to Internal Class Objects: Some of the choices made by the C# language authors make this tough, but Wagner provides some elegant ways to preserve your object’s encapsulation of its internals.
  • Avoid ICloneable: Glad that he points this out unequivocally. Not every language concept turns out to be advisable.
  • The entire Dynamic Types section: Might be a little fast-paced as an introduction, but if you haven’t seen this new feature of C# as of V4.0, this is worth reading and making sure to understand, even if it takes a few reads.
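The equality item in particular is worth internalizing. As a minimal sketch (the Point type and its values are my own illustration, not an example from the book), the distinctions break down roughly like this:

```csharp
using System;

// Hypothetical value-like type (not from the book) used to illustrate
// C#'s overlapping notions of equality.
class Point
{
    public readonly int X;
    public readonly int Y;

    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    // Override Equals to get value semantics: same coordinates, equal points.
    public override bool Equals(object obj)
    {
        var other = obj as Point;
        return other != null && X == other.X && Y == other.Y;
    }

    // Objects that compare equal must report equal hash codes.
    public override int GetHashCode()
    {
        return X ^ (Y << 16);
    }
}

class EqualityDemo
{
    static void Main()
    {
        var a = new Point(1, 2);
        var b = new Point(1, 2);

        // With no operator overload, == on reference types is identity.
        Console.WriteLine(a == b);                // False
        // Equals has been overridden to compare values.
        Console.WriteLine(a.Equals(b));           // True
        // ReferenceEquals is always identity, no matter what you override.
        Console.WriteLine(ReferenceEquals(a, b)); // False
    }
}
```

Wagner’s treatment goes further (operator ==, IEquatable<T>, value types), but even this much shows why “are these equal?” has no single answer in C#.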

The Questionable (in my opinion)

There was little to find objectionable in this book, but the main quibble I did have was with “Item 25: Implement the Event Pattern for Notifications.” This item features a how-to of using events, which I think should be used more sparingly than many developers tend to, but I didn’t find that questionable in and of itself. What bothered me was that the example of an event source was some kind of global logging singleton.

To me, the use of a singleton is undesirable on its own, let alone one that provides hooks for events. A singleton that fires events is a memory leak waiting to happen, because the event source holds references to its listeners, keeping them out of garbage collection for the event source’s lifetime unless they are explicitly unhooked. The whole concept of coding up some singleton (global variable) that fires events makes me extremely leery, as you’re providing two layers of hidden dependency: (1) singletons couple your classes with hidden dependencies just by existing and being used; and (2) events are non-obvious sources of dependency, with or without global variables.
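To make the leak concern concrete, here is a minimal sketch (the Logger and Listener names are mine, not Wagner’s) of how a long-lived event source roots its subscribers, and why explicit unhooking matters:

```csharp
using System;

// Hypothetical global logging singleton (illustrative, not Wagner's code).
// Its event's delegate list holds a reference to every subscriber.
class Logger
{
    public static readonly Logger Instance = new Logger();
    private Logger() { }

    public event Action<string> MessageLogged = delegate { };

    public void Log(string message)
    {
        MessageLogged(message);
    }
}

class Listener : IDisposable
{
    public int Received;

    public Listener()
    {
        // Subscribing stores a delegate that points back at this instance,
        // so the long-lived singleton now keeps this object alive.
        Logger.Instance.MessageLogged += OnMessage;
    }

    private void OnMessage(string message)
    {
        Received++;
    }

    public void Dispose()
    {
        // Explicitly unhooking is the only way this instance becomes
        // collectible before the singleton itself dies.
        Logger.Instance.MessageLogged -= OnMessage;
    }
}

class LeakDemo
{
    static void Main()
    {
        var listener = new Listener();
        Logger.Instance.Log("first");   // listener hears this
        listener.Dispose();
        Logger.Instance.Log("second");  // listener no longer hears this
        Console.WriteLine(listener.Received); // 1
    }
}
```

Forget the Dispose call and the listener lives as long as the singleton does, which for a global means the life of the process.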

I understand that, as an example, this is easy to wrap your head around, but people tend to copy such examples and work them into what they do, and I sure wouldn’t want to open up some code I was tasked with maintaining to find that thing.

The Rather Obvious

There’s nothing wrong with pointing out the obvious (someone needs to for the sake of beginners), so don’t take this as a knock on the contents. I’m just mentioning these here in case you’re already familiar with good OOP design practice, as you might want to skim/skip these sections.

  • Use Properties instead of Accessible Data Members
  • Minimize Duplicate Initialization Logic
  • Limit Visibility of your Types

Who Should Read This

Really, I’d say that anyone who codes in C# should give this a read. Whether you’re new to the language or an old pro, it’s almost certain that you’ll find something new and helpful in here. It’s up to date with the bleeding edge of language idioms and it addresses some things that have been around a while. It also provides a lot of history within the language and context for things (rather than just instructing you to “do this” or “avoid that”), so it is quite approachable for a range of experience levels.