DaedTech

Stories about Software

Stop Online Piracy Act (SOPA): History Repetition for the Doomed

What is SOPA?

For anyone not familiar with the Stop Online Piracy Act (SOPA), a brief primer can be found on Wikipedia. It is a bill being considered by the United States Congress that would allow websites accused (not found guilty of, but merely accused) of engaging in piracy, promoting piracy, or simply creating an environment where piracy could theoretically take place to be blacklisted in DNS. For those not familiar with DNS, it is the ubiquitous service that takes a pretty URL like http://www.daedtech.com and resolves it to the numeric IP address of the server that actually hosts the site. Under SOPA, if someone accused DaedTech of supporting piracy, typing the URL into your browser would stop working, searching for it on Google would stop working, and pretty much everything about the site would stop working. DaedTech would be guilty until proven innocent.
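
As a quick illustration of the lookup in question, here is a minimal sketch in C#; the output, naturally, depends on the actual DNS records at the time you run it:

    using System;
    using System.Net;

    class DnsLookupDemo
    {
        static void Main()
        {
            // Ask DNS for the numeric address(es) behind the friendly name.
            foreach (IPAddress address in Dns.GetHostAddresses("www.daedtech.com"))
            {
                Console.WriteLine(address);
            }
        }
    }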

The bill is driven by intentions that are reasonable (but obsolete, as I’ll argue later) — to protect intellectual property. It’s aimed at stopping someone from bootlegging movies or music and setting up a website to share the plunder with legions of would-be pirates. But the approach being taken is a bit like cutting off your leg to address a splinter in your pinky toe. I say this because “guilty until proven innocent” tends to result in paradigms more like the Salem Witch Trials and McCarthyism than in reasoned, judicious use of preventative force. In the current system, people are on their honor not to be criminals, which is how most of society works. With a “guilty until proven innocent” system, the system is on its honor not to abuse its power, which is how scenarios like the one in 1984 get started. I submit that the only thing less trustworthy than John Q. Public on his honor is The Prince on his honor. But, I digress.

The end result is that the internet as a boundless repository for all manner of ideas, big and weird, all forms of collaboration, and all kinds of expression, from the disgusting to the divine, would be fundamentally altered, at least in the short term. It would cease to be a sometimes scary, under-policed realm and would instead become a “walled garden”. And the walls would be manned by corporations like the RIAA, the MPAA, large software companies, and, generally speaking, armies and armies of lawyers. Never mind that none of these organizations save the software companies are particularly expert in the workings of the internet or technology – that isn’t the point. The point is to make sure that the internet is a place for free expression… so long as that expression doesn’t interfere with certain revenue streams. This is entirely rational on the part of the would-be police (I sympathize with any entity protecting its own bottom line), but to call it a gutshot to the spirit of the internet would be an understatement.

How did it Come to This?

I’ll get to that in a moment, but first, let’s consider a term that most people have probably heard once or twice: Luddite. In most contexts, people use this to describe those afraid of and/or suspicious of new technology. So, it may describe a friend of yours who refuses to buy a cell phone and bemoans that people constantly use them. Or, it may describe a relative who has been writing letters all his life and isn’t about to stop now for some kind of “email fad”. But the term Luddite arises from a very real historical scenario, with a subtly different and richer message.

In the late 1700s and early 1800s, the French Revolution and the ensuing Napoleonic Wars set the stage for rough economic times by virtue of Europe-wide destruction and devastation. Add to this an Industrial Revolution in full swing, replacing individual artisans with machines, and you had a perfect storm for unhappy laborers. A group of increasingly displaced, skilled textile workers weren’t going to take their obsolescence lying down – they organized into a paramilitary group and executed a sustained, well-organized temper tantrum in which they destroyed the machines, like looms and mills, that were replacing and obviating their manual labor.

This went on for a few years and had the kind of ending that only a Bond villain could love. The British government smashed the movement, executed key participants, and progress marched on, as it inevitably does. This is indeed a story with no real hero. Instead of adapting or undertaking a campaign of ideas, a group of people attempted futilely to “stop the motor of the world”, and, in response, another group of people broke their spines. The theme of two potentially sympathetic interests acting in ways that render their plights completely unsympathetic via collateral damage is worth noting.

Ludditism Through the Years

The Luddites gave a name to an age-old phenomenon (albeit one that is seen more often as the pace of technological advancement increases): man fighting in vain against his own obsolescence. Here’s an interesting look at the way this has played out in just one particular industry:

  1. Late 1800s: Composers worry that the phonograph, which allows songs to be recorded in short bursts, will render the live musical performance, and the long, elaborate symphony with it, obsolete
  2. Early 1900s: Record producers (makers of phonograph “cylinders”) worry that the advent of radio will destroy their livelihood
  3. Mid 1900s: The radio industry worries that the advent of television will destroy its business
  4. 1980s: Record labels worry that the cassette tape, which allows home recording, will devastate their business
  5. 1990s: Record labels worry that the CD burner will devastate their business
  6. 2000s: Record labels worry that the internet and streaming will devastate their business

Now, for the most part, these concerns and others like them were “Luddite” in the common, vernacular, anti-technology sense, rather than in the scorched-earth sense of the actual Luddite movement. The cycle is predictable and repeatable – a new thing comes out, and the current top dog worries. Sometimes the top dog adjusts and adapts (radio), and sometimes he dies out (phonograph, cassette tape). But whatever happens to the top dog, progress marches inexorably onward.

Adapting By Anachronism

In the list above, you’ll notice a constant over the last 30 years – the RIAA is afraid of and upset about everything that happens. Previously, each cycle involved some new medium replacing, or gaining a foothold beside, an existing one. That pattern continues through the entire list, but with one outlier: the same entity is now afraid of everything that happens. Why is this…?

I would argue that it’s because record labels themselves are anachronistic throwbacks that exist only because of previous power and influence. They are exclusively in the business of keeping themselves in business via influence peddling and manipulation, providing no relevant product or service. Let’s consider the history of music. For basically all of recorded history until very recently, musicians were exclusively performance artists, like jugglers or an acting troupe. They sold services, not consumable goods. Wandering minstrels played for whoever would listen, court composers for royalty, and random people with good voices for friends and family. Musicians occupied a place in society that ranged from plebeian to whatever equivalent of “professional” or “middle class” existed at that point in history.

This all changed when the invention of recorded music turned music into a commodity good rather than a service. At first, the change was subtle, but as music changed from something people had to make for themselves or their neighbors into a specialized good, a dramatic new paradigm emerged. Companies realized that they could leverage economies of scale to bring the same music into every living room in America, or even the world, and they subsequently realized the star power that came with doing such a thing. Court composers like Mozart had nothing on Elvis Presley. The upper crust had to go to see the famous Mozart perform, if they could get a ticket, but anyone with a dollar could enjoy The King (a title very apropos here). Record labels went from selling vinyl discs with music on them to selling dreams, and musicians went from traveling charlatans, court employees, or volunteer music lovers to kings (Elvis Presley) and gods (the Beatles being “more popular than Jesus”) overnight, in a historical context.

With this paradigm shift, the makers of music had unprecedented power and influence via money and a widespread message, and the facilitators of this, the record labels, had control over that message and influence. It was the golden age of music as a commodity, where its participants were kings and kingmakers, and the audience served as their subjects. All of this was made possible by economies of scale. Not just anyone had the ability to record music and distribute it nationally or internationally. That required expensive equipment, connections, and marketing prowess. The kings were beholden to the kingmakers to anoint them and grant them stardom. The subjects were beholden to the kingmakers to bring them their kings, what with the everyman musician being a casualty of this form of specialized labor (meaning the days when someone in every family could play a mean piano were over, since music could be purchased for a buck an album).

But then a funny thing started to happen. The everyman musician started to make a comeback, not only for the love of the music, but also for the lottery-dream of being anointed as a king. Generations of children grew up listening to Elvis, The Beatles, Led Zeppelin, Michael Jackson, etc., and wanting the lifestyle of untold riches and adulation. Home music-making equipment proliferated and improved, rendering the studio a luxury rather than a necessity. The internet exploded onto the scene, and suddenly distribution channels were irrelevant. Sites like YouTube and Twitter emerged in “web 2.0”, and now even marketing was available to the masses. Every service provided by the kingmakers when they were making kings was now available to the unwashed, plebeian masses. Music is very much coming full circle, back to the point where heavily marketed, auto-tuned pop stars who play no instrument are unnecessary and the average person, with a computer and an instrument or his or her own voice, is perfectly capable of producing music. And the everyman musician is back with a vengeance, since that person is now also capable of sharing that music with the world. Untold riches, record executives, destroyed hotel rooms, and temper tantrums aren’t necessary – just an internet connection and an idea.

The only thing that the kingmakers have left at this point is the accumulated money, power, and influence of bygone glory days. They are obsolete. And, like anyone concerned about his own obsolescence, they have entered the camps of the Luddites. At first, it was vernacular ludditism – whining vociferously about new technology dooming them, spreading FUD about the technology, etc. But now, to tie it all back to SOPA, they are taking the step of becoming actual, scorched-earth Luddites. SOPA isn’t actually about stopping piracy. It’s about destroying the thing that has removed their authority to make kings – the thing that has revealed them as unnecessary, out-of-touch emperors wearing no clothes. SOPA isn’t about fairness – it’s about breaking into the metaphorical mill and taking a sledgehammer to the metaphorical loom. It’s about destroying the thing that is replacing them: the internet.

The music industry is perhaps the most obvious example, but this applies elsewhere as well: the internet’s distributed, collaborative nature has created a cabal of outfits that wish the internet would go away. Book publishers (again, distribution economies of scale), older gaming companies, the movie industry, etc. are all in relatively similar boats, where instant, massive collaboration is replacing their reason for existing.

Vindictive and Scary, but Doomed to Failure

There is one key difference between today’s Luddites and the original ones. The original ones were opposed by their government and eventually smashed by it. Today’s Luddites are using lobbyist influence in an attempt to purchase the government and use it to execute the destruction of the looms. Instead of being oppositional forces, the government and the Luddites are threatening to team up to unleash destruction. But in the end, it doesn’t matter. Government or not, Luddites or not, progress marches onward inexorably.

We techies, and most people as the consuming public, will gnash our teeth and (rightly so) do everything we can to prevent the Luddites’ destruction, and we may still fail to stop a specific equipment-smashing. But what they can’t smash is the fact that the technology has been discovered and used, and can be re-created, modified, adapted, and perfected. At some point, their sledgehammers will dull, their resolve will weaken, and they’ll be relegated to the dustbin of history.

GitHub and Easy Moq

GitHub

Not too long ago, I signed up for GitHub. I had always used Subversion for source control when given the choice, but the distributed nature of Git appealed to me, particularly since I often do work on a variety of different machines. GitHub itself appealed to me because it seemed like the perfect venue for reusable code that I’ve created over the years and tend to bring with me from project to project. Therein lies a unique idea – sites like SourceForge want to store code by the basic unit of “project”, whereas GitHub wants code for the sake of code. This code need not be finished or polished or distributed on any kind of release schedule. That is perfect for a lot of my portable code.

One thing that I noticed about GitHub upon arrival was the lack of .NET work there, compared with other technologies. There is some, but not a lot. Presumably this is because sites like Code Project already offer this kind of thing to the .NET community. After reading about Phil Haack leaving Microsoft to be GitHub’s Windows Ambassador, I decided that this was a good indicator that GitHub was serious about growing that portion of its user base, and that I’d start porting some of my code there.

Another thing that I’d read about Git before trying it out was that it has a steep learning curve. To back up my general impression that this is a widely held opinion, I did a quick Google search and ran across this post.

Given the steep learning curve many (including myself) experience with Git, I think I can safely assume that many other people have this sort of problem with it.

So, there’s at least one person who has experienced a steep learning curve, and at least one other person with the same general sense as me that many people find the learning curve steep.

I must say, I haven’t found this to be the case. Perhaps it’s my comfort with Linux command line interaction, or perhaps it’s my years of experience using and administering SVN, but whatever it is, I read the GitHub primer and was up and running. And I don’t mean that I was up and running with commits to my local repository – I was up and running committing, pushing to GitHub, and pulling that source to a second computer where I was able to build and run it. Granted, I haven’t forked/branched/merged yet, but after a day of using it, I was at the point where the source control system retreated into the mental background where it belongs, serving as a tool I take for granted rather than a mental obstacle. (By contrast, even with a couple of years of experience with Rational ClearCase, I still have to set aside extra development time to get it to do what I want – but I digress, since I don’t believe that’s a simple case of needing to mount a learning curve.) So, after a day, I’m developing according to my schedule, running a commit when I feel like taking a break and a push when I want to update GitHub’s version. I’m sure I’ll be hitting Google or Stack Overflow here and there, but so far, so good.
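
For the curious, the whole cycle I’m describing boils down to a handful of commands. A minimal sketch – the remote name is Git’s conventional “origin”, and the repository URL is illustrative, not my actual project:

    git commit -am "Describe the change"   # commit to the local repository
    git push origin master                 # push those commits up to GitHub

    # ...and then, on the second computer:
    git clone git@github.com:someuser/somerepo.git   # first time only
    git pull origin master                           # thereafter, pick up new commits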

Easy Moq

So, what am I doing with my first GitHub project? It’s something that I’m calling Easy Moq. I use Moq as my main mocking framework, as I’ve mentioned before, and I’ve had enough time with it to find that I have a certain set of standard setup patterns. Over the course of time, I’ve refined these to the point where they become oft-repeated boilerplate, and I’ve abstracted this boilerplate into various methods and other utilities that I use. Up until now, this has varied a bit as I go, and I email it to myself or upload the classes to my web server or something like that.

But, I’m tired of this hodgepodge approach, and this seems like the perfect use case for trying out GitHub. So, I’m looking over my various projects and my use of Moq and creating my own personal Easy Moq library. If others find it useful, so much the better – please feel free to keep up with it, fork it, add to it, etc.

At the time of writing, I have three goals in mind (these may change as I TDD my way through and start eating my own dog food on real projects). In no particular order:

  • Create Mock<T> inheritors that default to having specific test double behavior – dummies, stubs, spies, etc.
  • Be able to create and use my doubles with semantics like new Dummy<T>() or new Spy<T>() rather than new Mock<T>().Object
  • Be able to generate an actual class under test with all dependencies mocked as Dummy, Stub, Spy, etc.

For the first goal, I’ll refer to Niraj’s take on the taxonomy of test doubles. I haven’t yet decided exactly how I’ll define the particular doubles, but the ideas in that post are good background for my aims. Up to this point, if I want to create one of these things with Moq, I instantiate every kind of double in the same fashion, and only the setup differs. I’d like to make the declaration semantics more expressive — declaring a DummyMock says more about your intent with the test than creating a Mock<T> and doing nothing else to set it up. For more complicated doubles, this also has the pleasant effect of eliminating manual boilerplate like “SetupAllProperties()”.
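
To make the first goal concrete, here is a minimal sketch of the sort of inheritor I have in mind. Dummy<T> and Stub<T> are my working names, not finished Easy Moq (or Moq) API:

    using Moq;

    // A dummy just gets passed around and is never actually used, so it
    // needs no setup at all.
    public class Dummy<T> : Mock<T> where T : class
    {
    }

    // A stub exists to return canned values, so stubbing all of its
    // properties is a sensible default.
    public class Stub<T> : Mock<T> where T : class
    {
        public Stub()
        {
            SetupAllProperties();
        }
    }

With something like this in place, a test can declare new Stub<ICustomerRepository>() (interface name hypothetical), and the declaration itself communicates the double’s role.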

The second goal is probably the lowest priority, but I’d like to be able to think of the doubles as entities unto themselves rather than as wrappers that contain the entity I actually want. It may seem like a small thing, but I’m a bit of a stickler (fanatic, at times) for clear, declarative semantics.

For the third goal, it’d be nice to add a dependency to a class and not have to go back and slaughter test classes with “Find and Replace” to change all calls to new Foo(bar) into new Foo(bar, baz), along with Baz’s setup overhead, particularly when those old tests, by definition, do not care about Baz. I realize that there is already a tool designed to do this (called AutoMoq), but it appears to be for use specifically with Unity. And besides, this is more about organizing and standardizing my own existing code, as well as getting to know GitHub.
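
As a rough sketch of how the third goal might work, reflection can construct the class under test and supply a default mock for each constructor parameter. This assumes every dependency is an interface or other type Moq can mock, and TestSubjectBuilder is a hypothetical name rather than an existing API:

    using System;
    using System.Linq;
    using Moq;

    public static class TestSubjectBuilder
    {
        // Instantiates T via its most elaborate constructor, handing it a
        // default mock for every parameter.
        public static T Build<T>() where T : class
        {
            var constructor = typeof(T).GetConstructors()
                .OrderByDescending(c => c.GetParameters().Length)
                .First();

            var arguments = constructor.GetParameters()
                .Select(p => ((Mock)Activator.CreateInstance(
                    typeof(Mock<>).MakeGenericType(p.ParameterType))).Object)
                .ToArray();

            return (T)constructor.Invoke(arguments);
        }
    }

Under that scheme, changing new Foo(bar) to new Foo(bar, baz) costs the old tests nothing, because they just call TestSubjectBuilder.Build<Foo>() and never mention Baz.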

So, that’s my first pet project contribution to GitHub. If anyone gets any mileage out of it, cheers. :)

DaedTech on Social Media

DaedTech is now on…

Twitter

@daedtech

I’ve never really had much interest in Twitter as a personal tool, but in the context of keeping and maintaining a blog, it seems to make sense. So, I’ll use it in conjunction with this blog as a “micro-blog” tool, as well as a vehicle for announcing site posts.

Google Plus

Here is the link to the Google Plus page: DaedTech.

Facebook

And, finally, the link to Facebook: DaedTech.

Making The Bad Impossible

Don’t Generate Compiler Warnings

I was doing some work this morning when I received a general email to a group of developers on a large project that sort of pleaded with, and sort of demanded of, members of the group that they not check in code that generates compiler warnings. And I agree with that. Although, with the way that particular project is structured, this is easier said than done in some cases. There are dozens and dozens of assemblies, not all of which are recompiled with a given build, so someone may generate a warning, not notice it, and then not see it in subsequent runs if that assembly isn’t recompiled. Making a habit of rebuilding instead of building is impractical here.

Something bothered me about this request and I couldn’t put my finger on it. I mean, the request is reasonable and correct. But, the email seemed sort of inappropriate in the way that talking about one’s salary or personal politics seems sort of inappropriate. It made me uncomfortable. The “why” of this kicked in after a few minutes. The email made me uncomfortable because it should have no need to exist. No email like that should ever be sent.

Don’t Do X

Some time back, I blogged about tribal knowledge, and why I view it as a bad thing. Tribal knowledge is a series of things that accrue over time in a project or group that are not manifest and can only be learned and memorized by experienced team members:

You ask for a code review, and a department hero takes one look at your code and freaks out at you. “You can never call MakeABar before MakeABaz!!! If you do that, the application will crash, the computer will probably turn off, and you might just flood the Eastern Seaboard!”

Duly alarmed, you make a note of this and underline it, vowing never to create Bars before Bazes. You’ve been humbled, and you don’t know why. Thank goodness the Keeper of The Tribal Knowledge was there to prevent a disaster. Maybe someday you’ll be the Keeper of The Tribal Knowledge.

We’ve all experienced things like this when it comes to code. Sometimes, they’re particulars of the application in question, such as in the tribal knowledge examples. Sometimes, they’re matters of organizational or other aesthetics, such as alphabetizing variable declarations, or maybe some other scheme. Sometimes, they’re coding conventions or standards in general. But, they’re always things that a person or group decide upon and then enforce by asking, telling, demanding, cajoling, threatening, begging, shaming, etc.

As a project, or any collaborative effort, proceeds, the number of these items tends to increase, and the attendant maintenance effort increases along with it. One rule is simple and easy to remember. When the number gets up to five, people sometimes start to forget. As it balloons from there, team members become increasingly incapable of keeping a mental checklist, and the checklist finds its way into a spreadsheet or, perhaps, onto a wall.

When this happens, the time spent conforming to it starts to add up. There is the development time, and then the post-development checklist validation time. Now, theoretically, a few times through the checklist and developers start tending to write code that conforms to it, so the lost time is, perhaps, somewhat mitigated, but the check itself is still more or less required. If someone gets very good at compliance, but makes one typo or careless error, a public shaming will occur and re-up the amount of time being spent on The List.

And, The List probably grows and changes over time, so no one is ever really comfortable with it. The List isn’t just tribal knowledge – it’s “meta-tribal knowledge”.

Stop the Madness

Is The List really necessary? People who enjoy dealing with the DMV or other bureaucracies may think so, and most developers may resign themselves to this sort of thing as an occupational hazard, but why are we doing this? Why do we have to remember to do a whole bunch of things and not do a whole bunch of other things? We’re programmers – can’t we automate this?!?

I believe that the answer to that question is “absolutely, and you should.” To circle back to my introductory example, rather than send out emails about checking for compiler warnings, why not set the build to treat warnings as errors? Then nobody has to remember anything – generating a compiler warning will impede immediate progress and thus be immediately rectified. No emails, no post-its, and no entry in The List.
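
In a .NET shop like the one in my example, that is a one-line change per project file. A minimal sketch, assuming MSBuild-based .csproj files:

    <!-- Inside each .csproj: promote all compiler warnings to build errors. -->
    <PropertyGroup>
      <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    </PropertyGroup>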

Alphabetized variables? Write a script on the build machine that parses your checked in files. Casing conventions? Write another parsing script or use something like DevExpress to automate it. Dependency management? Organize code into assemblies/libraries that make it impossible for developers to include the wrong namespaces or use the wrong classes.
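
To illustrate the kind of parsing script I mean, here is a toy checker that fails the build when private fields don’t match an assumed “_camelCase” convention. The convention and the regex are purely illustrative, and a real tool would lean on a proper parser or an existing static analysis engine:

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class NamingConventionChecker
    {
        static int Main(string[] args)
        {
            // Crude match for private field declarations ending in ';' or '='.
            var fieldPattern = new Regex(@"private\s+[\w<>\[\],\s]+?\s(?<name>\w+)\s*[;=]");
            int violations = 0;

            foreach (string file in Directory.GetFiles(args[0], "*.cs", SearchOption.AllDirectories))
            {
                foreach (Match match in fieldPattern.Matches(File.ReadAllText(file)))
                {
                    string name = match.Groups["name"].Value;
                    if (!name.StartsWith("_"))
                    {
                        Console.WriteLine("{0}: field '{1}' breaks the naming convention", file, name);
                        violations++;
                    }
                }
            }

            // A nonzero exit code fails the build, making the feedback immediate.
            return violations == 0 ? 0 : 1;
        }
    }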

And that’s the meat of the issue. Don’t tell people not to do bad things – make doing them impossible via immediate feedback. If I’m expected not to generate compiler warnings, make it so that I can’t run my tests or my application until I clean up my mess. That sort of scheme retains the learning and feedback paradigm, but without draining productivity or requiring others to be involved. Constructor injection is based on this principle, when you really think about it. Under that injection scheme, I don’t need to wonder what I have to do before instantiating a Foo – the compiler tells me, by not building, if I don’t give Foo what it wants. I don’t have to go ask anyone. I don’t have to find out at a code review. I can just see it for myself and learn on my own.
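
For illustration, here is a minimal sketch of that idea, with Foo and Bar as stand-in names rather than anything from a real project:

    using System;

    public class Bar { }

    public class Foo
    {
        private readonly Bar _bar;

        // The constructor makes the dependency impossible to forget: any
        // attempt to write "new Foo()" simply won't compile.
        public Foo(Bar bar)
        {
            if (bar == null)
            {
                throw new ArgumentNullException("bar");
            }
            _bar = bar;
        }
    }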

Obviously, it’s not possible to make all bad things impossible. If you could do that, you’d probably have better things to do than code, like running the world. But I would advocate that we go after the low-hanging fruit. This is really just an incarnation of the fail-early concept, which doesn’t just describe code behavior at runtime. It can also describe the feedback loop for development standards and preferences. Static analysis tools, auto-checkers, rules engines, etc. are our friends. They automate code checking so that it need not be done by asking, telling, demanding, cajoling, threatening, begging, shaming, etc.

What’s In A Name? Probably the Quality of Your Code.

Name Smells Revisited

A long time ago, I blogged about a concept I dubbed “name smells”. Among the things that I described as making me wary of a piece of code just by its name were the words “Helper” and “Manager” in a class name. My issue with this was, and is, that such names tend to be indicative of a class badly coupled to whatever it’s “managing” or “helping”.

Today, I was adding some code to a large, long-standing code base, and I ran the Microsoft code metrics on the large assembly to which I was adding it. I then went looking for my class to see its metrics, for reporting purposes in an implementation document I was constructing. Out of curiosity, I decided to sort the classes by Maintainability Index to see how mine stacked up. I was pleased to see that it had a decent enough index of 88 and was relatively near the top.

From there, I became curious to see which classes were near the bottom, so I scrolled to the other end of the list. As I got into the 60s, 50s, and 40s, I noticed an unmistakable trend. The worse the maintainability index, the more frequently I saw classes with names like “FooHelper”, “BarManager”, “BazHandler”, and “ReBarController” (not to be confused with Model-View-Controller controllers – these ‘controllers’ were not implementing a pattern of any kind). When I scrolled back to the 70-plus range, there were exactly zero classes with names like these. My gut take had been somewhat empirically reinforced, if not outright validated.

Blame the Name?

So, what gives? Does Microsoft look for keywords like those and dock them 20 maintainability points or something? I opened a few of them up for a look and beheld gigantic methods, lots of state flags, and all of the things you might expect in a class with a poor maintainability score. So, Microsoft does not, apparently, share my intrinsic bias against classes with names like these. It must be something else.

Now, obviously I think there’s a correlation between hard-to-maintain classes and these names, or I wouldn’t have blogged about it. But still, you’d think that in a pretty large assembly, there would have been at least one that bucked the trend. The exception that proves the rule…? But no, not a single one.

I spent a bit of time pondering this, glancing back at my old post. When I wrote that, I was thinking that generally what you see is some class evolving to gargantuan proportions, at which time the developer plays Solomon and cleaves the class awkwardly in two, with one half becoming the “Manager” and the other the “Managee”. (I admittedly don’t know exactly how these things get released into the wild, since I don’t do it myself.) But I don’t think that’s always the case. I think that maybe, sometimes, they start out innocently.

If they start out as the result of Solomon the Developer, they’re hosed from the beginning. But if they start out simply, they must rot along the way. I figure that some of the ones I was looking at must have started out innocently, as they had the air of a small, functional house with lots of haphazard additions built by someone’s pitied brother-in-law, the contractor. So, what is it that dooms them all, regardless of origin or conception?

Yes! Blame the Name!

What they all have in common is their names. Generally, it’s common practice to give classes noun names and methods verb-noun names (with optional modifiers). So, you have a “Customer” class with a “string GetFullName()” method. Service-oriented classes tend to have noun names that are or contain deverbal nouns, such as CustomerValidityEvaluator or EventAggregator… or CustomerHelper. We’ll get back to this in a moment.

Humans are habitual organizers, and programmers tend to be even more so. We like to categorize, arrange, place in buckets, and otherwise assign hierarchy. But we’re also naturally lazy. If we have a framework in place for organization, we’re happy enough to behave incrementally, and the room stays neat and organized.

If, on the other hand, no framework exists, the Broken Windows Theory goes into effect, and the mess simply grows.

Now, whether people pay lip service to Big Up Front Design (BUFD) or not, most people actually practice some kind of emergent design or another in their code. You don’t necessarily know exactly what every class is going to do when you start, so some exploratory coding is probably done on the side, even in the most Waterfall-committed shop. At some point, each class starts with a (relatively) blank slate, and there is some uncertainty as to what the class is going to do, exactly. TDD lets you drive this with correct functionality as a client. BUFD dictates everything the class should do up front, but you still deviate, because BUFD is about as practical as an ice cream grill. No process at all renders the slate totally blank.

As the coding phase wears on, more functionality gets added to this class, and it might stay neat and organized, or it might start to look increasingly like the messy room. And I submit that a big determining factor in which happens is the name of the class. A class with a name like “CustomerNameTruncator” is likely to repel random functionality like a Rain-X’ed windshield repels water, because people are hesitant to do willfully wrong things in code. If you’re weighing your options as to where to place a method that reads a customer table in the database, you’re probably going to avoid CustomerNameTruncator, because if anyone sees your name on that, they’ll think you’re daft.

Well, you don’t want that, so you get clever. You ‘refactor’ the “CustomerNameTruncator” to be called “CustomerHelper” and throw your database method in there. Problem solved! Nobody can tell you that it doesn’t belong there because your method is certainly helpful. And now, we have a design pattern. Anything that might be helpful to customers goes in here. The class can truncate the customer’s name, and it can read the customer table, and, even send emails to customers informing them about new products that they might enjoy. I mean, why not? That’s helpful, and with the database method in there, it already knows their email addresses. Heck, it might be even more helpful if it read the new offerings from the offerings table, too. This class just keeps getting more and more helpful.

And, as it gets more and more helpful, its maintainability index plummets, and it rots. It does everything under the sun, and it gets coupled to more and more unrelated, unfocused things. The person that changes its name (or names it that from the get-go) has broken the first window in the building, and every other developer that stops by thinks it might be sinfully satisfying to toss a rock through another window.

That is the problem with this naming scheme. It’s so vague that whatever functionality you add to it pretty much always belongs there, at least according to its name. Good OOP does not allow this practice in your design, but the name begs to differ. It pours you a stiff drink and invites you to throw your dirty clothes on the floor and worry about it later.

As an exercise, if you see a class named in this fashion, open it up and try to describe its behavior in a simple sentence. I don’t even need to open “CustomerNameTruncator” to do that. But, how do I describe “CustomerHelper”? I bet it’s a run-on sentence with a lot of “ands”, and probably a number of “ifs” and “buts” as well, for good measure. When you’re describing a class’s function and you’re saying “and” a lot, there’s a good chance that class has too many responsibilities.

Well, The Windows are Already Broken, so Whatcha Gonna Do?

The fact that you’ve got Helpers and Managers liberally sprinkled throughout your code base is not an unsolvable problem. In the formulation of the Broken Windows Theory, the important realization was that it was time to fix all of the windows and keep them fixed. The same thing applies here.

The very first step is to change the names of those classes. Don’t put it off because you don’t have time for a refactoring, and don’t be intimidated. Just change the name. If the class has a bunch of responsibilities, pick one. Turn “CustomerHelper” back into “CustomerNameTruncator” and let the database and email methods look ridiculous. Your path to refactoring will be obvious.

Also, realize that it always looked ridiculous to the eye of a critical OO designer. Naming it “Helper” won’t fool anyone who actually looks at the class, or any objective calculator of maintainability. The fact that you can argue that the class’s methods don’t disagree with its name doesn’t alter another fact — the class is a hulking, 4000-line behemoth with no clear, single purpose. You don’t get points for coming up with a name so vague as to be meaningless, any more than someone would for putting all code in a giant method called “DoWhatMustBeDone()”.

And then, take the experience with you and pass it on. It isn’t just “Helpers” and “Managers” that are the problem – it’s any class or method with a name vague enough that it could do anything imaginable and still be considered logically consistent. Keep an eye out for that in your code and in the code of others, and steer empty rooms toward being well organized as they fill up, rather than messy.