DaedTech

Stories about Software

Using NDepend to Make You a Better Programmer

This is another post that I originally wrote for the NDepend blog. If you haven’t yet, go check out the NDepend blog and sign up for the RSS feed. It’s relatively new, but we’ll have a lot of good content there for you.

If you’re a software developer, particularly of the newly minted variety, the concept of static analysis might not seem approachable.  It sounds academic.  It sounds architect-y.  It sounds complicated.  I’ve seen this reaction from a lot of people in my career and I think that’s too bad.

If you delve into its complex depths, static analysis can be any and all of these things, but with the developers I mentor and coach, I like to introduce it as a game that makes you better at what you do.  You can use static analysis to give yourself feedback about your code that is both fast and anonymous, allowing you to improve via trial and error, rather than by soliciting feedback from people much more tenured than you and sometimes wincing as they lay into you a little.  And, perhaps best of all, you can calibrate the quality of your code with the broader development world, rather than just pleasing the guy who has hung around your company long enough to default his way into the “tech lead” role.

NDepend Rules

Take a look at some of the feedback that NDepend offers about your code.  “That method is too big” isn’t particularly intimidating, is it?  I mean, you might wonder at what you could do to compact a method, but it’s not some kind of esoteric rule written in gibberish.  You run NDepend on your code and you can see that there is some number of methods that the broader development community considers to be “too big.”
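
Under the hood, a rule like that is nothing more than a short query over your code.  Here is a rough sketch of the shape such a rule takes in NDepend's CQLinq (the 30-line threshold is purely illustrative):

    // A sketch: flag any method longer than 30 lines of code.
    warnif count > 0
    from m in Application.Methods
    where m.NbLinesOfCode > 30
    select new { m, m.NbLinesOfCode }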

From there, you can start looking at ways to write smaller methods and to refactor some of your current ones to sneak in under the warning number.  This is the essence of gamification — you change the way you write code to get rid of the warnings.  You get better.  And it’s gratifying.

As you do this, another interesting thing starts to happen.  You start noticing that other developers continue to write large methods, and when you run NDepend on their code, it lights up the console with errors while your code does not.  And so, you can have conversations with them that start with, “you know, this static analysis tool I’ve been using wants us to have smaller methods, and I’ve been working a lot on that, if you ever want a hand.”

You gain a reputation as being knowledgeable.  Before you know it, you can cite widely accepted static analysis rules and the design goals they imply.  You know these rules, and, via gamification, you have experience molding code to comply with them.  Even in cases where you might wind up overruled by the local team lead or architect, it’s no longer a simple matter of that person saying, “because I said so,” and just ending the conversation.  They have to engage with you and present cogent counter-arguments to your points.  You’re participating in important discussions in ways that you never have before.

If it sounds like I’m speaking from experience, I am.  Throughout my career, I’ve been relentless about figuring out ways to improve my craft, always trying to be a better programmer.  Early on, I was unsatisfied with a lot of arguments among developers around me that I knew boiled down to nothing more than personal preference, so I went out in search of empirical methods and broader knowledge, and that search brought me to static analysis.  I read about the data and science behind particular choices in approaching software, and I schooled myself to adopt the approaches that had brought the best results.

Somewhere along that journey, I discovered NDepend, and its effect on my approach to writing code was profound.  My methods shrank and became less complicated.  My architectural and design skills improved as I made it a point to avoid dependency cycles and needless coupling.  I boosted unit test coverage and learned well-established language practices.  It was not long before people routinely asked me for design advice and code reviews.  And from there, it wasn’t long before I occupied actual lead and architect roles.

So, if you want to improve your craft and nudge your career along, don’t pass on static analysis, and don’t pass on NDepend.  NDepend is not just a tool for architects; it’s a tool for creating architects from the ranks of developers.  You’ll up your game, improve your craft, and even have some fun doing it.

Who Accepts Your Team’s Academy Awards?

I was listening to the Smart Passive Income podcast the other night. Yeah, I wasn’t kidding. I’m really trying to figure out how to do this stuff. Anyway, it was an episode with “User Stories” in the title, so I was intrigued. What I actually thought to myself was, “I’m a lot more inclined to hear stories about passive income than about Scrum, but this could be interesting!” And, it actually was interesting. I mean that earnestly. The episode was about Pat commissioning an iOS app for his podcast, so anyone making a living in our industry would be somewhat intrigued.

The episode started, and I listened. Admittedly, beyond Pat, I don’t exactly know who the players are, but I can tell you what I inferred as I was jogging (I frequently listen to podcasts when I jog). The interview started, and Pat was talking to someone who seemed to have a project-manager-y role. Pat asked about the app, and the guest talked about communication, interactions, and the concepts of “user story” and “product backlog.” He didn’t actually label this process Scrum until much, much later in the interview, and I get that – he’s talking to a huge audience of potential clients, so it’s a lot more compelling to describe Scrum as if it were something he thought of than it is to say, “oh yeah, we do Scrum – go google it!”

I don’t begrudge him that in the slightest. It’s a savvy approach. But it did strike me as interesting that this conversation about an app started with and centered around communication and planning. The technical decisions, data, and general nuts and bolts were all saved for later, delegated to a programmer underling, and framed as details that were definitely less relevant. In the development of this app, the important thing was the project manager, who he talked to, and when he talked to them. The development of the app was a distant second.

My reaction to this, as I jogged, was sad familiarity. I didn’t think, “how dare that project manager steal the show!” I thought, “oh, naturally, that’s a project manager stealing the show – that’s more or less their job. Developer code, not know talk human. Project manager harness, make use developer, real brains operation!”

Chess TDD 44: Starting the Climb toward En Passant

En passant is going to be a fairly complicated thing to calculate, given the way I’ve implemented this thing so far.  And, true to form, I only got a very thin slice going in this episode.  Still, it was measurable progress and it’s good that I was able to slice thinly.

What I accomplish in this clip:

  • Fixed a mistake in one of my tests that a viewer pointed out.
  • Got the first en passant test passing.

Here are some lessons to take away:

  • As always, peer review is king.
  • Finding a way to carve thin slices off of large problems is an art form and so important.  Without this, it’s easy to be overwhelmed by difficult problems.
  • When you’re writing a test and you see an unexpected behavior from production code, stop and clarify your understanding.  You don’t want to procrastinate with that.
  • Edge cases account for a lot of complexity in design, which makes it doubly important to have a comprehensive regression test suite.
  • Revisiting a design is really hard without automated tests to cover what you’re doing.  This tends to cause designs to calcify in untested codebases, even to the point of avoiding the addition of new functionality that users want.

Signs Craftsmanship May Be For You

One of the things I’ve spent a good bit of time doing over the last year or so is called “Craftsmanship Coaching.” This involves going into teams and helping them adopt practices that will allow them to produce software more reliably and efficiently. Examples include writing automated unit and acceptance tests, setting up continuous integration and deployment, writing cleaner, more modular code, etc. At its core though, this is really the time-honored practice of gap analysis. You go in, you see where things could be better, and you help make them better.

Using the word “craftsmanship” to describe the writing of software is powerful from a marketing perspective. Beyond just a set of practices revolving around XP and writing “good code,” it conjures up an image of people who care about the practice of writing software to the point of regarding it as an art form with its own sort of aesthetic. While run-of-the-mill 9–5ers will crank out code and say things like, “if it ain’t broke, don’t fix it,” software craftspeople will presumably agonize over the smallest details, perfecting their code for the love of the game.

The drawback with using a term like “software craftsmanship” is the intense subjectivity and confusion about what exactly it entails. One person’s “well-crafted code” might be another’s spaghetti, not to mention that subjective terms tend to get diluted by people wanting, merited or not, to be in the club. To understand what I mean, consider the practice of scheduling a daily status meeting, calling it “daily Scrum,” and declaring a shop to be “agile.”

How then are software developers who are not associated with the software craftsmanship movement to know whether they should want in or not? How are they even to know what it is? And if they don’t easily know, how are overhead decision makers like managers to have any clue at all? Well, let’s momentarily forget about the idea of software craftsmanship and return to the theme of gap analysis. In the rest of this post, I’ll describe signs that you could stand to benefit from some of the practices that I help clients with. If you notice your team experiencing these things, the good news is that you can definitely simplify your life if you pursue improvements.

Similar Features Take Longer and Longer to Implement

Remember a simpler time when adding a page to your site took a few hours, or maybe a day, max? Now, it’s a week or two. Of course, that makes sense because now you have to remember to implement all of the security stuff, and there’s the validation library for all of the input controls. And that’s just off the top. Let’s not forget the logging utility that requires careful edits to each method, and then there’s the checklist your team put together some time back that you have to go through before officially promoting the page. Everyone has to think about localization, checking the color scheme in every browser, and so on and so forth. So it’s inevitable that things will slow down, right?

Well, no, it’s not inevitable at all. Complexity will accrue in a project as time drifts by, but it can be neutralized with carefully considered design approaches. The examples that I mentioned, such as security and logging, can be implemented in such a way within your application that they do not add significant overhead at all to your development effort. Whatever the particulars, there are ways to structure your application so that you don’t experience significant slowdown.
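
To make that concrete, consider logging, one of the examples above. Here is a minimal C# sketch, with hypothetical names, of handling it once, in a decorator, rather than hand-editing it into every method:

    using System;

    public interface IOrderService
    {
        void PlaceOrder(int orderId);
    }

    public class OrderService : IOrderService
    {
        public void PlaceOrder(int orderId)
        {
            // The real work goes here; this class knows nothing about logging.
        }
    }

    // The logging policy lives in one decorator instead of being edited into
    // every method of every class.
    public class LoggingOrderService : IOrderService
    {
        private readonly IOrderService _inner;
        private readonly Action<string> _log;

        public LoggingOrderService(IOrderService inner, Action<string> log)
        {
            _inner = inner;
            _log = log;
        }

        public void PlaceOrder(int orderId)
        {
            _log($"Placing order {orderId}");
            _inner.PlaceOrder(orderId);
        }
    }

Wire that decorator up once, where the application is composed, and adding the next page or service costs about what it did when the project was young.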

Simple Functionality Requests Are Anything But Simple

  • “Hey, can you change the font on the submit button?”
  • “Not without rewriting the whole presentation layer!”
  • “I don’t understand. That doesn’t seem like it should be hard to do.”
  • “Well, look, it is, okay? Software is complicated.”

Have you ever participated in or been privy to a conversation like this? There’s something wrong here. Simple-seeming things being really hard is a smell. Cosmetic changes, turning off logging, adding a new field to a web page, and other things that strike non-technical users as simple changes should be simple, generally speaking.

While clearly not a universal rule, if a vast gulf routinely appears between what common sense says should be simple and how hard it turns out to be, there is an opportunity for improvement.

Until Next Time

I originally wrote this post for the Infragistics blog, and you can find the original here. There is also a second part to this post.

Promote Yourself to Manager so that You Can Keep Writing Code

A while back, I announced some changes to DaedTech with the idea of moving toward a passive income model. In the time between then and now, I’ve spent a good bit of time learning about techniques for earning passive income, and I’ve learned that I’m really, really bad at it. For example, I’m often asked for recommendations, and I respond by supplying them, as most decent humans would. This is wrong. What I should do is have a page on my site with all of my recommended and favorite tools, and the page should link to them via affiliate links. I provide the same recommendations and earn a bit of money. Win-win.

Well, I’ve been halfheartedly working on this page for a bit. Believe it or not, the most difficult part of this is seeking out and obtaining the affiliate links. So, my page of recommendations remains a work in progress. And I was making progress tonight, securing affiliate links, when inspiration struck for a blog post about one particular affiliate. Most of the affiliates that I’ve identified are productivity tools, editors, and other techie goodies, but this one is different. This one represents an entirely different way of thinking for techies.

As a free agent, content creator, and product creator, I have a lot of metaphorical juggling balls in the air, and I’ve had to become hyper-productive and downright ruthless when it comes to eliminating unnecessary activities. I don’t watch TV, I don’t go out much, I don’t take any days off from working, even on vacation, and I don’t really even follow the news anymore. Pretty much every conceivable bit of waste has been excised from my life, and I do a lot of work on an hourly or value basis. This has resulted in a whole new world of ROI calculations appearing before me — it’s worth paying premiums to save myself time so that I can spend that time earning more money than I spend.

Your Code Is Data

This is a post that I originally wrote for the NDepend blog. If you haven’t already, go check it out! We’re building out some good content over there around static analysis, with lots more to follow.

A lot of programmers have some idea of what static analysis is, at least superficially.  If I mention the term, what pops into your head?  Automatic enforcement of coding standards?  StyleCop or FxCop?  Cyclomatic complexity and Visual Studio’s “maintainability index?”  Maybe you’re deeply familiar with all of the subtleties and nuances of the technique.

Whatever your level of familiarity, I’d like to throw what might be a bit of a curve ball at you.  Static analysis is the idea of analyzing source code and byte code for various properties and reporting on those properties, but it’s also, philosophically, the idea of treating code as data.  This is deeply weird to us as application developers, since we’re very much used to thinking of source code as instructions, procedures, and algorithms.  But it’s also deeply powerful.

When you think of source code this way, typical static analysis use cases make sense.  FxCop asks questions along the lines of, “How many private fields are not prepended with underscores?” or, perhaps, “SELECT COUNT(class_field) FROM classes WHERE class_field NOT LIKE ‘_*’.”  More design-focused source code analysis tools ask questions like, “What is the cyclomatic complexity of my methods?” or, perhaps, “SELECT cyclomatic_complexity FROM Methods.”

But if code is data, and static analysis tools are sets of queries against that data, doesn’t it seem strange that we can’t put together and execute ad-hoc queries the way that you would with a relational (or other) database?  I mean, imagine if you built out some persistence store using SQL Server, and the only queries you were allowed to run were SELECT * from the various tables and a handful of others.  Anything beyond that, and you would have to inspect the data manually and make notes by hand.  That would seem arbitrarily and even criminally restrictive.  So why doesn’t it seem that way with our source code?  Why are we content not having the ability to execute arbitrary queries?

I say “we” but the reality is that I can’t include myself in that question, since I have that ability and I would consider having it taken away from me to be crippling.  My background is that of a software architect, but beyond that, I’m also a software craftsmanship coach, teacher, and frequent analyzer of codebases in a professional capacity, auditing a wide variety of them for various properties, characteristics, and trends.  If I couldn’t perform ad-hoc, situation-dependent queries against the source code, I would be far less effective in these roles.

My tools of choice for doing this are NDepend and its cousin JArchitect (for Java code bases).  Out of the box, they’re standard static analysis and architecture tools, but they also offer an incredibly powerful concept called CQLinq that is, for all intents and purposes, SQL for the ‘schema’ of source code.  In reality, CQLinq is actually a LINQ provider for writing declarative code queries, but anyone who knows SQL (or functional programming or lambda expressions) will feel quite at home creating queries.
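
For instance, the cyclomatic complexity question from a moment ago becomes just a few lines of CQLinq.  Here is a rough sketch (the threshold is illustrative, and the property names follow NDepend’s code model):

    // Methods whose cyclomatic complexity exceeds a chosen threshold.
    from m in Application.Methods
    where m.CyclomaticComplexity > 10
    select new { m, m.CyclomaticComplexity }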

Let’s say, for instance, that you’re the architect for a C# code base and you notice a disturbing trend wherein the developers have taken to communicating between classes using global variables.  What course of action would you take to nip this in the bud?  I bet it would be something annoying for both you and them.  Perhaps you’d set a policy for a while where you audited literally every commit and read through to make sure they weren’t doing it.  Maybe you’d be too pressed for time and you’d appoint designated globals cops.  Or, perhaps you’d just send out a lot of angry, threatening emails?

Do you know what I would do?  I’d just write a single CQLinq query and add it to a step in my automated team build that executed static analysis code rules against all commits.  If the count of global variable invocations in the code base was greater after the commit than before it, the build would fail.  No need for anger, emails or time wasted checking over people’s shoulders, metaphorically or literally.

Want to see how easy a query like this would be to write?  Why don’t I show you…
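
Roughly speaking, it would be a sketch along these lines, treating mutable, publicly visible static fields as the C# stand-in for “global variables” (the exact predicate would be tuned to the code base in question):

    // A sketch: fail the build if any mutable, publicly visible static field exists.
    warnif count > 0
    from f in Application.Fields
    where f.IsStatic
       && f.IsPubliclyVisible
       && !f.IsImmutable   // assigned outside of constructors, so it really is mutable state
    select f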

That’s it. I write that query, set the build to run NDepend’s static analysis, and fail if there are warnings. No more sending out emails, pleading, nagging, threatening, wheedling, coaxing, or bottleneck code reviewing. And, most important of all, no more doing all of that and having problems anyway. One simple little piece of code, and you can totally automate preventing badness. And best of all, the developers get quick feedback and learn on their own.

As I’ve said, code is data at its core.  This is especially true if you’re an architect, responsible for the long term health of the code base.  You need to be able to assess characteristics and properties of that code, make decisions about it, and set precedent.  To accomplish this, you need powerful tooling for querying your code, and NDepend, with its CQLinq, provides exactly that.

The Secret to Fighting Buzzword Fatigue

A little while back, I made a post in which I mused about the work-retire dynamic as an unusual example of large batches in life. In the lead-in, I made passing reference to a post where I talked more specifically about buzzword fatigue. This is that post (with this explanatory paragraph prepended, of course).

It feels amazing, in an odd way, to give something a good name. You have to know what I mean. Have you ever sat around a whiteboard with a few people, tossing out names for some kind of module or concept or whatever, scrunching your nose and shaking your head slightly at each suggestion? “No, that’s almost right, but I don’t think that’s it.” And then finally, someone tosses out, “let’s call it the clobbering factory!” and all of your eyes go wide as someone else yells, “yes!!”

Names are important. There’s a certain finality to naming something, even when you wish it weren’t the case. Have you ever failed in the quest for the perfect name, only to say something like, “aw, screw it, let’s just call it ‘circle’ since it’s a circle on the whiteboard, and we’ll rename it later?” If you have, you can’t tell me that the thing’s official name isn’t still “circle,” even 3 years and 23 production releases later. You probably even once tried to rename it, grousing at people who refused to start calling it “The Phoenix Module” in spite of your many, many reminder emails. It stayed “circle” and you gave up.

There’s an element of importance to naming that goes beyond simple aesthetics, however, when you’re naming a concept. Products, bits of code and other tangible goodies have it easy because you can always point at what you’re talking about and keep meaning from drifting. With concepts… not so much. Next to their tangible cousins, they’re like unmoored boats in a river and they will drift.

And I think that the amount to which they drift is controlled by two main factors:

  1. Uniqueness
  2. Mappability to known concepts in context

Chess TDD 43: Pawns Good to Go

This episode was a lot of fun because all of the cards just kind of fell into place and I got the pawn done (with the exception of en passant). I had thought finishing up the pawn was going to take a number of episodes, but then there was a flurry of win. I’ll take it!

What I accomplish in this clip:

  • Finished up the implementation of black pawn movement.
  • Pretty well set with acceptance tests for pawn.

Here are some lessons to take away:

  • Be on the lookout in your code for overly complicated boolean conditions; always look to simplify (see the short sketch after this list).
  • If you can avoid creating more levels of inheritance and that sort of indirection, you should.  That sort of thing can be a helpful tool, but you pay a price in complexity.
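
As a quick sketch of what the first bullet means in practice (the names here are hypothetical, not from the Chess TDD code base):

    public static class MoveRules
    {
        // Before: double negatives and redundant comparisons obscure the intent.
        public static bool CanMoveBefore(bool isBlack, bool squareOccupied, bool isPinned)
        {
            return !(isBlack == false) && !(squareOccupied == true) && !isPinned;
        }

        // After: the same rule, stated directly.
        public static bool CanMoveAfter(bool isBlack, bool squareOccupied, bool isPinned)
        {
            return isBlack && !squareOccupied && !isPinned;
        }
    }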

Let’s Put Some Dignity Back into Job Seeking

Alphabet Soup

I’ve seen a lot of resumes of late, so I can’t be sure where I saw this, exactly. I suppose it doesn’t really matter. This one resume really stood out to me, though, because it was perhaps the most self-aware talisman of the ceaseless employment quest that I’d ever seen. Specifically, one part of it was the self-aware part, and that came right at the end, under the simple heading “technologies.”

If you opened the PDF file of the resume, scanned down past heading info, work experience, and education, there was this bolded heading of “technologies,” followed immediately by a colon and then a comma-delimited list of stuff. It had programming languages, frameworks, design patterns, concepts, and acronyms. Oh, there were acronyms as far as the eye could see, I tell ya – the streets were paved with ‘em. (Well, they filled out the rest of the page, anyway).

It practically screamed, “this seems stupid, but someone told me to do this, so here-ya-go.” I’ve seen this before (and even done a version of it myself), but it was always organized somehow into categories or something to make it seem like manicured, useful information. This resume abandoned even that thin pretense.

Obviously, I didn’t look through this section in any great detail. I think neither I nor the resume’s owner would have considered it important to evaluate why he’d hastily typed “UML” in between some of those other things. It didn’t matter to either of us what was in that section, and, truth be told, I’d be surprised if he even knew everything that was in there.

I contemplated this idly for a bit, and then it occurred to me how similar this felt to the obligatory job description where a company lists 25 technologies under “requirements” and then another 15 under “nice to have.” UML is probably nice for everyone to have. Both job seeker and company probably list it and neither one probably knows it, making all parties better off even with a bit of mutual fibbing.

Applicants list things they don’t know because companies claim needs that they don’t have, and, in the end, the only one who profits from this artificially large surface area is the recruitment industry as a whole. The more turnover and churn, the more placements and paydays. The way the whole thing works is actually pretty reminiscent of a low quality dating website. Everyone on it lists every one of their virtues in excruciating detail, omits every one of their weaknesses, and exudes ludicrous pickiness in what they seek. Matches are only made when lies are told, and disappointment is inevitable. When people inevitably get tired of failure and settle for a mate, it’s random rather than directed.

Objection

Gah.  How depressing.  Let’s not do that anymore.  Let’s look for mutual fit instead of blind prospect maximizing on both sides.  We don’t want hundreds of potential employers or candidates.  We want a single one that’s well suited.

Chess TDD 42: Finishing up White Pawn Movement

In this episode, fresh off the victory of getting pawn movement right for the white pawns, I start on the black ones by essentially reversing their movement.

What I accomplish in this clip:

  • Got the first acceptance test passing for black pawn movement.

Here are some lessons to take away:

  • Having one context per test class is a nice way to keep tests readable, focused, and organized.
  • You’ll probably never stop making dumb mistakes, so it’s good to learn to have a sense of humor about it.
  • Tests are very handy for confirming your understanding of the code base.  Feel free to tweak a value in a test just to see what will happen, and then put it back.
  • Instead of hopping quickly into the debugger, see if you can use process of elimination to narrow down where the problem is.
  • If you find yourself in a class typing the same conditional logic in every method, you have something that could probably be two classes (see the sketch below).
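
To sketch what that last point looks like (hypothetical names, not the actual Chess TDD classes), if every method branches on the same flag, the flag is usually a second class trying to get out:

    // Before: the same conditional shows up in every method.
    public class Pawn
    {
        public bool IsBlack { get; set; }

        public int Direction() => IsBlack ? -1 : 1;
        public int StartingRow() => IsBlack ? 7 : 2;
    }

    // After: two classes, and the repeated conditional disappears.
    public abstract class PawnBase
    {
        public abstract int Direction();
        public abstract int StartingRow();
    }

    public class WhitePawn : PawnBase
    {
        public override int Direction() => 1;
        public override int StartingRow() => 2;
    }

    public class BlackPawn : PawnBase
    {
        public override int Direction() => -1;
        public override int StartingRow() => 7;
    }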
