How to Get an Edge As a Consultant
Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you're there, have a look at some of the documentation on code metrics and queries.
I've made no secret of my consulting practice, and I even refer to it frequently, including aspects of IT management consulting. In short, one of my key offerings is helping strategic decision-makers (CIOs/CTOs, dev managers, etc.) make tough or non-obvious calls about their applications and codebases. Can we migrate this easily to a new technology, or should we start over? Are we heading in the right direction with the new code that we're writing? We'd like to start getting our codebase under test, but we're not sure how (un)testable the code is. Can you advise?
This is a niche position that sits fairly high on the organizational trust ladder, so it's good work to be had. Because of that, I recently got a question along the lines of, "how do you get that sort of work and then succeed with it?" In thinking about the answer, I realized it would make a good blog post, specifically for the NDepend blog. I think of this work as true consulting, and NDepend is invaluable to me as I do it.
Before I tell you how this works for me in detail, let me paint a picture of what I think of as a market differentiator for my specific services. I'll do this by offering a tale of two different consulting pitfalls that people seem to fall into when tasked with these sorts of high-trust, advisory consulting engagements.
The Hand-Wavers (Business-Focus Only)
The first problem is probably the most common one that I see in my travels. What happens here is that a company with an important strategic decision looming brings in outside expertise for help. So far, so good. Seeking an outside, expert opinion is an entirely rational thing to do and probably something that you do when, say, confronted with the decision about whether to replace your water heater that’s acting up or to simply repair it.
When the consultants arrive, however, they fall into the trap of what amounts to easy answers and an unwitting conflict of interest. To get more concrete about it, imagine that a CTO calls you in and says to you, “we’re having a lot of trouble with this old app, and we’re thinking we might need to rewrite it with newer technologies and that we might need outside help with that.” You agree to do an assessment of their old app, and you feel good about this, since you’re representing a firm that has used both the new and old technologies in question.
What do you think is the most likely outcome of your initial engagement? Do you come in and say, "wow, whoever wrote this 10 year old code sure knew what they were doing — I bet this could last forever!" Or do you come in, take a perfunctory look at the code, allow your disdain for the old tech to color your opinion, and recommend that they move immediately to the latest and greatest approach? Oh, and you just happen to be available to lead that effort, by the way.
Countless consultants and consultancies fall into this trap, and it's not bad faith — it's just a failure to recognize a conflict of interest, combined with the confidence that tends to follow in the wake of consultants. "We're here, we're the experts, you've called us in because you're not, so let's throw out your cute attempts at solving the problem and get serious." Sadly, for a lot of organizations, this results in wave after wave of consultants coming through over the course of years, telling them that they (and all previous consultants) are doing it wrong.
This approach tends to fall short because it’s not particularly technical and it’s not at all data-centric. It’s more of a matter of reading what the client expects and parlaying it into future business.
The Technicians (Tech Focus Only)
The second pitfall is less common, but it happens. In fact, I've had front row seats for it in the past. I personally witnessed highly technical people explaining to mid-level managers that their teams' code had cyclomatic complexity that was too high. The response from these non-technical managers was predictable. "Uh, okay. So, we should make that — what was it, complexity — we should make that lower, right? How much lower? And how do we do that? Can you do it for us?"
Whereas the hand-wavers key only off the leanings and tells of the business folks, the technicians ignore them completely. “The problem with your code is that it’s too complex, and we can help you with that by installing a tool that warns you and then going through and performing extract method and extract class refactorings alongside the strategy and command design patterns along with some –”
“And this helps us save money and meet future deadlines how…?” a confused manager might gently interrupt.
Explaining the Technical to the Non-Technical
There are two things that I tend to do differently than these consulting firms. First, I’m not looking at assessment and advisory gigs as extended sales pitches for my other services, so that helps me avoid the conflicts of interest that can arise.
The second difference-maker is where NDepend prominently enters the discussion. I perform highly technical analysis and use NDepend to translate the results into discussions about business outcomes. NDepend is a highly technical tool that’s aimed at a technically proficient audience, but it produces output that can be used seamlessly in business conversations.
For a quick example of what I mean, imagine how this goes over in a meeting with VPs and managers. “The intended project architecture diagrams that you’ve been shown look like an orderly, layered approach that would make for easy changes in the future. The trouble is, here is what the actual dependency graph looks like.”
That is powerful. Pictures are worth a thousand words, and NDepend offers a lot of them. Dependency graphs, metric heat charts, and the famous "Zone of Pain, Zone of Uselessness" diagram are all great ways to show management things that can be hard to say without an image.
But it goes beyond that as well. NDepend’s CQLinq, out of the box metrics, and customizability allow you virtually unlimited ability to ask questions of codebases. The code becomes data and, equipped with NDepend, you become like a skilled DBA in terms of calling up and examining that data. The trick here, however, is not in simply telling them about the raw data, the way our technician did with cyclomatic complexity. The trick lies in taking that raw data and using it to tell a compelling business story.
High cyclomatic complexity is bad not because it is somehow intrinsically bad. Rather, high cyclomatic complexity means that there are paths through the code that no one has reasoned about or tested, and this means a higher likelihood of unexpected runtime behavior. And unexpected runtime behavior translates to a business explanation of "you're going to release software and then be confronted with bugs that cause your developers to say, 'that shouldn't even be possible' and then to spend extra time chasing down a fix." Management doesn't care a whit for cyclomatic complexity, but they definitely care that they're likely to have defects upon release that stay open longer than anyone expects.
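To make that concrete, here is a sketch of the kind of CQLinq query that surfaces this raw data before you translate it into the business story. The threshold of 20 is an illustrative choice on my part, not an NDepend default or a recommendation:

```
// Sketch of a CQLinq query: flag methods whose cyclomatic complexity
// suggests untested, un-reasoned-about execution paths.
// The threshold of 20 is illustrative, not a prescribed cutoff.
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity, m.NbLinesOfCode }
```

The query itself never appears in the meeting with management. Its result set does, reframed as "here are the specific places in the codebase most likely to produce those 'that shouldn't even be possible' defects."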
When it comes to establishing a competitive advantage as a consultant, it certainly helps to have spent a good bit of time in both developers’ and managers’ shoes. But whether you’ve done that or not, the real key to success is being able to help the two establish a common framework for discussing software goals and challenges. And NDepend is an absolute must for facilitating that framework and starting those conversations.