DaedTech

Stories about Software


Is Programming Art?

Don’t look now, but I’m pretty sure this makes three weeks in a row of doing at least one reader question per week.  Today, I’m going to tackle a question I received: is programming art?  Here is the actual text of the question.  It’s a well-written consideration that cites two parallels: painting and writing.

This article asks if computer [scientists] apply scientific methods, but perhaps we should reconsider the premise: is computer science even a science at all? I contend that a software engineer has more in common with an artist than a physicist. If a painter applied scientific principles and determined that the most pleasing color is purple and the most pleasing subject matter is a tulip, then all artists would paint nothing but purple tulips, which would not be pleasing at all.

Let’s compare the developer to another type of artist, the writer. Whether you’re writing the great American novel or assembly instructions for a bookshelf, or anything in between, you must consider questions like what tone of voice should I use, how formal should it be, how long should each chapter be, etc. There is never a single, scientific answer to those questions any more than there is to questions like how long should a method be, how descriptive does a local variable name need to be, etc.

To answer these questions, any writer or developer must consider the target audience. The great American novel will be written in a much different tone than a book to teach children how to read. The audience for your code is not the user (that’s the audience for the UI). The audience is the CPU, but much more so, it’s the next developer who needs to edit your code, or even a future you.

Bear in mind that this was written against the backdrop of this post that I wrote back in the fall, in which I answered a reader question about whether what programmers do is scientific (as in the “computer science” that we learn).  Thus the sentiment here seems to be, “I think maybe what we do is better compared to art than experimental science.”

Fair enough.

What Is Art?

But before going further, I’d like to level set a bit with the actual definition of art.  If I go the classic, find a dictionary definition route, I get an adorably Lieutenant Data-like answer.

1. the quality, production, expression, or realm, according to aesthetic principles, of what is beautiful, appealing, or of more than ordinary significance.
2. the class of objects subject to aesthetic criteria; works of art collectively, as paintings, sculptures, or drawings: a museum of art; an art collection.

If I go look for any number of essays on the topic, two main themes emerge.  The first is some rallying around the “it’s hard to define, but maybe it’s like the obscenity case in that I know it when I see it.”  I think this line of reasoning is intended as inoculation against the philistine in the art museum saying, “throwing paint at a canvas ain’t art — I coulda done that!”  That one isn’t particularly interesting for our purposes here, which brings me to the second theme: aesthetics.




Building Superior Code … Is It Achievable?

Editorial Note: this post was originally written for the SmartBear blog.  You can read the original here, on their site.  As I’ve mentioned before, they have some good authors that I respect writing over there, so give it a look!

I’ve made stops in a lot of software development shops in my career, both as an employee and as a consultant. This has afforded me the opportunity to learn that some questions and concerns are universal in the industry. One such question, asked by Fortune 500 CTOs and tiny startups alike, is “How do I make sure we have good code?” If that seems like it’d be hard to answer, rest assured that it definitely is. As much collective practice as the world has writing software, we’ve not managed to agree on the answer to a deceptively difficult question: what is good code?

David Starr wrote about this topic some time back in an excellent article entitled “Defining Code Quality.” It’s hard to opine about how a software group can have and maintain superior code quality when, as David points out, it’s difficult even to reach consensus on what it means to have superior code quality. So let’s talk about that first.




Do Programmers Practice Computer Science?

I’ve gotten some questions about the idea of what we do being scientific or not, and that raises some interesting discussion points.  This is a “You Asked for It” post, and in this one, I’m just going to dive right into the reader question.  Don’t bury the lead, as it were.

Having attended many workshops on Agile from prominent players in the field, as well as working in teams attempting to use Agile, I can’t help but think that there is nothing scientific about it at all. Most lectures and books pander to pop psychology, and the tenets of Agile themselves are not backed up by any serious studies as far as I’m aware.

In my opinion we have a Software Engineering Methodology Racket. It’s all anecdotes and common sense, with almost no attention paid to studying outcomes and collecting evidence.

So my question is: Do you think there is a lack of evidence-based software engineering and academic rigor in the industry? Do too many of us simply jump on the latest fad without really questioning the claims made by its creators?

I love this.  It’s refreshingly skeptical, and it captures a sentiment that I share and understand.  Also notice that this isn’t a person saying, “I’ve heard about this stuff from a distance, and it’s wrong,” but rather, “I’ve lived this stuff and I’m wondering if the emperor lacks clothes.”  I try to avoid criticizing things unless I’ve lived them, myself, and I tend to try something if my impulse is to criticize.


I’ll offer two short answers to the questions first.  Yes and yes.  There is certainly a lack of evidence-based methodology around what we do, and I attribute this largely to the fact that it’s really, really hard to gather and interpret the evidence.  And, in the absence of science, yes, we collectively tend to turn to tribal ritual.  It’s a God of the Gaps scenario; we haven’t yet figured out why it rains, so we dance and assume that it pleases a rain God.


What Is a Best Practice in Software Development?

(Editorial Note: Hello, Code Project folks, and thanks for stopping by! I’m really excited about the buzz around this post, but an unintended consequence of the sudden popularity is that so many people have signed up that I’ve run out of Pluralsight trial cards. Please still feel free to sign up for the mailing list, but understand that it may be a week or so before I can get you a signup code for the 30-day trial. I will definitely send it out to you, though, when I get more. If you’d like to sign up for a 10-day trial, here is a link to the signup that’s also under my courses on the right.)

A while ago, I released a course on Pluralsight entitled “Making the Business Case for Best Practices.”  (If you want to check it out, but don’t have a Pluralsight account, sign up for my mailing list in the sidebar to the right and I’ll send you a free 30-day subscription.)  There was an element of tongue-in-cheek to the title, which might not necessarily have been the best idea in a medium where my profitability is tied to maximizing the attractiveness of the title.  But, life is more fun if you’re smiling.

Anyway, the reason it was a bit tongue in cheek is that I find the term “best practice” to be spurious in many contexts.  At best, it’s vague, subjective, and highly context-dependent.  The aim of the course was, essentially, to say, “hey, if you think that your team should be adopting practice X, you’d better figure out how to make a dollars-and-cents case for it to management — ‘best’ practices are the ones that are profitable.”  So, I thought I’d offer up a transcript from the introductory module of the course, in which I go into more detail about this term.  The first module, in fact, is called “What is a ‘Best Practice’ Anyway?”

Best Practice: The Ideal, Real and Cynical

The first definition that I’ll offer for “best practice” is what one might describe as the “official” version, but I’ll refer to it as the “ideal version.”  Wikipedia defines it as a “method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark.”  In other words, a “best practice” is a practice that has been somehow empirically proven to be the best.  As an example, if there were three possible ways to prepare chicken (serve it raw, serve it rare, and serve it fully cooked), fully cooked would emerge as a best practice as measured by the number of incidents of death and illness.  The reason that I call this definition “ideal” is that it implies that there is clearly a single best way to do something, and real life is rarely that neat.  Take the chicken example.  Cooked is better than undercooked, but there is no shortage of ways to fully cook a chicken – you can grill it, broil it, bake it, fry it, etc.  Is one of these somehow empirically “best” or does it become a matter of preference and opinion?




Avoiding the Perfect Design

One of the peculiar ironies that I’ve discovered by watching the way a lot of different software shops work is that the most intense moments of exuberance about software seem to occur in places where software development happens at glacial speeds. If you walk into an agile shop or a startup or some kind of dink-and-dunk place that bangs out little CRUD apps, you’ll hear things like, “hey, a user said she thought it’d be cool if she could search her order history by purchase type, so let’s throw that in and see how it goes.” If it goes insanely well, there may be celebrations and congratulations and even bonuses changing hands which, to be sure, makes people happy. But their happiness is Mercury next to the blazing Sun of an ivory tower architect describing what a system SHALL do.

“There will be an enterprise service bus. That almost goes without saying. The presentation tier and the business tier will be entirely independent of one another, and literally any sort of pluggable module that you dream up as a client can communicate with any sort of rules engine embedded within the business tier. Neither one will EVER know about the other’s existence. The presentation layer collaborators are like Schrodinger and the decision engines are like the cat!

And the clients. Oh yes, there will be clients. The main presentation tier client is a mobile staging environment that will be consumed by Android, iOS, Windows Phone, BlackBerry, and even some modified Motorola walkie-talkies. On top of the mobile staging environment will be a service adapter that makes it so that clients don’t need to worry about whether they’re using SOAP or REST or whatever comes next. All of those implementations will hide behind the interface. And that’s just the mobile space. There are more layers and subtleties in the browser and desktop spaces, since both of those types of clients may be SPAs, other thick clients, thin clients, or just leaf nodes.

Wait, wait, wait, I’m not finished. I haven’t even told you about the persistence factories yet and my method for getting around the CAP theorem. The performance will be sublime. We’re talking picoseconds. We’re going to be using dynamically generated linear programming algorithms to load balance the vertical requests among the tiers, and we’re going to take a page out of the quantum computing book to introduce a new kind of three state boolean… oh, sorry, you had a question?”

“Uh, yeah. Why? I mean, who is going to use this thing and what do they want with it?”

“Everyone. For everything. Forever.”

You back out slowly as the gleam in his eye turns slightly worrisome and he starts talking about the five year plan that involves this thing, let’s call it HAL, achieving sentience, bringing humankind to the cusp of the singularity, and uploading the consciousnesses of all network and enterprise architects.


Like I said, the Sun to your Mercury. Has your puny startup ever passed the Turing Test? Well, his system has… as spec’ed out in a document management system with 8,430 pages of design documents and a Visio diagram that’s rumored to have similar effects to the Ark of the Covenant. And that, my friends, is why I think that a failing ATDD scenario should be the absolute first thing anyone who says, “I want to get into programming” learns to do.

Now to justify that whiplash-inducing segue. I wrote a book about unit testing in which I counseled complete newcomers to automated testing to forgo TDD and settle for understanding the mechanics of automated tests and test runners before making the leap. I stand by that advice, but I do so because I think that there is a subtle flaw to the way that most people currently get started down the programming path.

I was watching a Pluralsight course about NUnit to brush up on its latest and greatest assertion semantics, and the examples were really well done. In particular, there was a series of assertions oriented around a rudimentary concept of a role-playing game with enumerations of weapons, randomization of events, and hit points. This theme exercised concepts like ranges, variance, collection cardinality, etc., and it did so in a way that lent itself to an easy mental model. The approach was very much what mine would have been as well (I wouldn’t have come at this with TDD because there’d have been a lot of ‘downtime’ writing the production code as opposed to just showing the assert methods).
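To give a flavor of the kinds of assertions in play, here is a minimal sketch along those lines in C# with NUnit 3. The Hero and Weapon types, the specific ranges, and the test names are my own stand-ins for illustration, not the course’s actual examples.

```csharp
// Hypothetical RPG model and NUnit 3 tests, invented to illustrate
// range, equality, and collection-cardinality assertions.
using System;
using System.Collections.Generic;
using NUnit.Framework;

public enum Weapon { Sword, Bow, Staff }

public class Hero
{
    private static readonly Random Roll = new Random();

    public int HitPoints { get; private set; } = 100;

    public List<Weapon> Inventory { get; } =
        new List<Weapon> { Weapon.Sword, Weapon.Bow, Weapon.Staff };

    // Damage is randomized, so tests assert on a range rather than an exact value.
    public int Attack() => Roll.Next(1, 21); // returns 1 through 20

    public void TakeDamage(int amount) => HitPoints -= amount;
}

[TestFixture]
public class HeroTests
{
    [Test]
    public void Attack_damage_falls_within_the_expected_range()
    {
        var hero = new Hero();
        Assert.That(hero.Attack(), Is.InRange(1, 20));
    }

    [Test]
    public void Inventory_has_expected_cardinality_and_no_duplicates()
    {
        var hero = new Hero();
        Assert.That(hero.Inventory, Has.Exactly(3).Items);
        Assert.That(hero.Inventory, Is.Unique);
        Assert.That(hero.Inventory, Has.Member(Weapon.Sword));
    }

    [Test]
    public void Taking_damage_reduces_hit_points()
    {
        var hero = new Hero();
        hero.TakeDamage(30);
        Assert.That(hero.HitPoints, Is.EqualTo(70));
    }
}
```

The randomized Attack() is why the range assertion matters here; an exact-equality assert on that value would fail intermittently.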

Nevertheless, it’s been a while since I’ve watched someone write tests against a pre-baked system when they weren’t characterization tests in a legacy rescue, and the experience was sort of jarring. I couldn’t help but think, “I wouldn’t want to write these tests if I were in his position — why bother when the code is already done?” Weird as it sounds from a big advocate of testing, writing tests after you’ve completed your implementation feels like a serious case of “going through the motions” in the same way that developers fill out random “SDLC” artifacts for no other purpose than to get PMPs to leave them alone.

And that’s where the connection to the singularity architect comes in. One of the really nice, but subtle perks of the TDD (especially ATDD) approach is that it forces you to define exit criteria before you start doing things. For instance, “I know I’ll be done with this development effort when my user can search her order history by purchase type.” Awesome — you’re well on your way because you’ve (presumably) agreed with stakeholders ahead of time when you can stop coding and declare victory. The next thing is to prove it, and you can approach this in the same way that you might approach fixing a leaking pipe under your sink. Turn the water on, observe the leak, turn the water off, fix the leak, turn the water back on, observe that there is no leak. Done.

In the case of the search, you write a client call to your web service that supplies a “purchase type” parameter and you say that you’re done when you get a known result set back, instead of the current error message: “I do not understand this ‘purchase type’ nonsense you’ve sent — 400 for you!” Then you scurry off to code, and you just keep going until that test that you’ve written turns green and all of the other ones stay green. There. Done, and you can prove it. Ship it.
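As a sketch of what that looks like in practice, here is a hedged, hypothetical version of such an acceptance test, again in C# with NUnit. The base URL, the purchaseType parameter, and the response shape are all invented for illustration and don’t correspond to any real service from this post.

```csharp
// Hypothetical ATDD-style acceptance test: written before the feature exists,
// it fails today (the service returns 400) and defines "done" as passing.
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class OrderHistorySearchAcceptanceTests
{
    // Assumed local test deployment of the service under development.
    private static readonly HttpClient Client = new HttpClient();
    private const string BaseUrl = "http://localhost:8080/api/orders";

    [Test]
    public async Task Searching_order_history_by_purchase_type_returns_results()
    {
        HttpResponseMessage response =
            await Client.GetAsync($"{BaseUrl}?purchaseType=subscription");

        // Done means: a known result set instead of "400 for you!"
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));

        string body = await response.Content.ReadAsStringAsync();
        Assert.That(body, Does.Contain("subscription"));
    }
}
```

The point is the workflow rather than the particulars: the test exists and fails before any production code is written, and “done” is the moment it goes green while everything else stays green.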

Our poor architect never knows when he’s done (and we know he’ll never be done). This Sisyphean struggle started with hobby programming or a CS degree or something. It started with unbounded goals and the kinds of open-ended tasks that allow hobbyists to grow and students to excel. Alright, you’ve got the A, but try to play with it. See if you can make it faster. Try adding features to it. Extra credit! Sky’s the limit! At some point, a cultural norm emerges that says it’s more about the journey than the destination. And then you don’t rise through the ranks by automating for the sake of solving people’s problems but rather by building ever-more impressive juggernauts, leveraging the latest frameworks, instrumented with the most analytics, and optimized to run in O(Planck Time).

I really would like to see initiates to the industry learn to set achievable (but slightly uncomfortable) goals with a notion of value and then reach them. Set a beneficial goal, reach it, rinse, repeat. The goal could be “I want to learn Ruby and I’ll consider a utility that sorts picture files to be a good first step.” You’re adding to your skill set as a developer and you have an exit criterion. It could be something for a personal project, a pro-bono client, or for pay. But tie it back to an outcome and assess whether that outcome is worthwhile. This approach will prevent you from shaving microseconds off of an app that runs overnight on a headless server and it will prevent you from introducing random complexity and dependency to an app because you wanted to learn SnazzyButPointless.js. True, the approach will stop you from ever delighting in design documents that promise the birth of true artificial intelligence, but it will also prevent you from experiencing the dejection when you realize it ain’t ever gonna happen.