Thinking about how to prevent big system project failure has somehow always reminded me of the Will Rogers quote: “Don’t gamble; take all your savings and buy some good stock and hold it till it goes up, then sell it. If it don’t go up, don’t buy it.”
In other words, with big projects, by the time you realize it’s failed, it’s pretty much too late. Let’s think a bit about the reasons why, and what we can do to change that.
First off, I’ve never seen a big project fail specifically because of technology. Ever. And few IT veterans will disagree with me. Instead, failures nearly always go back to poor communication, murky goals, inadequate management, or mismatched expectations. People issues, in other words.
So much for that admittedly standard observation. But as the old saying goes, “everyone complains about the weather, but no one does anything about it.” What, then, can we actually do to mitigate project failure that occurs because of these commonplace gaps?
Of course, that’s actually a long-running theme of this blog and of several other key blogs that cover similar topics (see my Blogroll to the right of this post). Various “hot stove lessons” have taught most of us the value (indeed, necessity) of fundamental approaches and tools such as basic project management, stakeholder involvement and communication, executive sponsorship, and the like. Those approaches provide some degree of early warning and an opportunity to regroup; they often prevent relatively minor glitches from escalating into real problems.
But it’s obvious that projects still can fail, even when they use those techniques. People, after all, are fallible, and simply embracing an approach or methodology doesn’t mean that all the right day-to-day decisions are guaranteed or that every problem is anticipated. Once again, there are no silver bullets.
One of the problems, as I’ve pointed out before, is that it can actually be surprisingly difficult to tell, even from the inside, how well a project is going. Project management documents may be appearing reliably, milestones being met, and so on. Everything looks smooth. Yet the project may be at increasing risk of failure, because you can’t address problems you haven’t identified. This is particularly so because the umbrella concept of “failure” includes those situations where the system simply won’t be adopted and used by the target group, due to various cultural or communication factors that have little or nothing to do with technology or with those interim project milestones.
Moreover, every project has dark moments, times when things aren’t going well. People get good at shrugging those off, sometimes too good. Since people involved in a project generally want to succeed, they unintentionally start ignoring warning signs, writing those signs off as normal, insignificant, or misleading.
I’ve been involved in any number of huge systems projects, sometimes even “death march” in nature. In many of them, I’ve seen the following kinds of dangerous “big project psychologies” and behaviors set in:
- Wishful thinking – we’ll be able to launch on time, because we really want to
- Self-congratulation – we’ve been working awfully hard, so we must be making good progress
- Testosterone – nobody’s going to see us fail. We ROCK.
- Doom-and-gloom fatalism – we’ll just keep coming in every day and do our jobs, and what happens, happens. (See Dilbert, virtually any strip).
- Denial – the project just seems to be going badly right now; things are really OK.
- Gridlock – the project is stuck in a kind of limbo where no one wants to make certain key decisions, perhaps because then they’ll be blamed for the failure
- Moving the goal posts – e.g., we never really intended to include reports in the system. And one week of testing will be fine; we don’t need those two weeks we planned on.
An adroit CIO, not to mention any good project leader, will of course be aware of all of these syndromes, and know when to probe, when to regroup, when to shuffle the deck. But sometimes it’s the leaders themselves who succumb to those behaviors. And for people on the project periphery, such as other C-level executives? It’s hard to know whom to listen to on the team, and it’s definitely dangerous to depend on overheard hallway conversations: Mary in the PMO may be a perennial optimist, Joe over in the network group a chronic Eeyore who thinks nothing will ever work, and so on. There are few, if any, reliable harbingers of looming disaster.
Wouldn’t it be great if there were some kind of codified, external measurement/evaluation tool that could methodically identify the kinds of disconnects that even well-led projects can fall prey to? One that could pinpoint where the true risk areas are as the project evolves, and help people take targeted action ahead of time to address those problem spots?
That’s why I got so excited in a recent conversation with well-known IT failure expert Michael Krigsman, CEO of Asuret, a company that sells “technology-backed services”. He gave me a look at their forthcoming product, an impressively slick, well-engineered tool that in my view promises to provide exactly that kind of benefit: identifying where and why a project might fail in terms of some of those people/best practices aspects, before it actually does.
In a nutshell, Asuret facilitates a cross-sectional analysis of project participants and stakeholders as the project proceeds. By aggregating the answers to its carefully crafted questions and constructing a number of easy-to-read summary charts, the tool then displays astonishingly insightful visual breakdowns that let you pinpoint major disconnects, such as between stakeholder groups and IT, or between actual project-specific practices and industry best practices.
Let’s look at an example of what it shows you. By mapping aggregated analysis results onto charted dimensions of importance and vulnerability, and slicing these charts by department, you can see at a glance in the chart below that there’s a disconnect: e.g., that executives think that the business case for the project has high vulnerability, while the IT participants view it as having low vulnerability. Early warning sign! And certainly better (more methodical, more aggregated) than relying solely on what you’ve heard Joe grumbling about in the lunchroom.
In the example, the disconnect looms large: look at the darker circle (representing the participants’ responses to questions regarding the project’s business case) and its different location on the two grids shown below (Figure 1).
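Asuret’s actual methodology and scoring model are its own; purely as a rough illustration of the aggregation idea described above, here’s a hypothetical sketch in Python. All of the names, rating scales, and thresholds below are invented for the example: survey responses are averaged per department, then department pairs whose vulnerability ratings diverge sharply on a topic are flagged as potential disconnects.

```python
# Hypothetical sketch of the aggregation idea: responses rated on
# importance and vulnerability (1-5), averaged per department, then
# compared across departments to surface disconnects. All names and
# thresholds here are invented for illustration.
from collections import defaultdict

# (department, topic, importance 1-5, vulnerability 1-5)
responses = [
    ("Executives", "business case", 5, 5),
    ("Executives", "business case", 4, 4),
    ("IT",         "business case", 5, 1),
    ("IT",         "business case", 4, 2),
]

def average_scores(responses):
    """Average importance/vulnerability per (department, topic)."""
    sums = defaultdict(lambda: [0, 0, 0])  # importance, vulnerability, count
    for dept, topic, imp, vul in responses:
        entry = sums[(dept, topic)]
        entry[0] += imp
        entry[1] += vul
        entry[2] += 1
    return {key: (imp / n, vul / n) for key, (imp, vul, n) in sums.items()}

def find_disconnects(averages, topic, threshold=2.0):
    """Flag department pairs whose average vulnerability ratings diverge."""
    depts = [(d, vul) for (d, t), (_, vul) in averages.items() if t == topic]
    return [
        (d1, d2, round(abs(v1 - v2), 2))
        for i, (d1, v1) in enumerate(depts)
        for d2, v2 in depts[i + 1:]
        if abs(v1 - v2) >= threshold
    ]

averages = average_scores(responses)
print(find_disconnects(averages, "business case"))
# Executives rate the business case's vulnerability 4.5 on average,
# IT rates it 1.5 -- a gap of 3.0, flagged as a disconnect.
```

The real tool plots these aggregates on importance-versus-vulnerability grids per department rather than printing a list, but the underlying comparison is the same: the signal is the *distance* between groups, not any single group’s answer.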
This all sounds simple in this brief description, perhaps, but taken as a whole, Asuret’s methodical implementation and targeted, useful results are nothing short of groundbreaking. Perhaps other companies provide a similar product, but I don’t know of any. And frankly, I can’t imagine a better-designed or more perfectly suited product than Asuret to address the issues raised in this post. I’m really looking forward to hearing more as they deploy and hone their product, because I can think of any number of large projects I’ve been on where this approach would have been revealing and useful.
It’s maybe not the ever-hoped-for holy grail, but it promises to be a small piece of it: an extension of our ability to see things before they happen. If Will Rogers had been an IT guy, I think he would have been excited too.