We continuously monitor research on project failure rates to test our observations and to see if anything much is changing. Sadly, not much is. First, here is a short chronological summary of the principal studies:
- In 1995, the Standish Group (Boston, MA) produced The Chaos Report, based on a survey of 8,300 software implementations described by 365 respondents across a wide variety of industries and company sizes. Their findings:
- 31% were successful (on time, on budget, delivered 100% of expected value)
- 51% were “challenged” (their words) because of schedule or budget overruns, or reductions in the ultimately delivered features and benefits.
- 16% were abandoned or cancelled.
- In 1997, KPMG in Canada conducted a review of 176 projects. Their findings:
- Over 60% failed to meet business/sponsor expectations
- A full 75% missed their scheduled completion dates by 30% or more, and
- Over 50% “substantially” (their words) exceeded their budgets
- In 2000, Gartner Group (Stamford, Connecticut) surveyed 1,375 respondents through interviews, with the following results:
- 40% of all IT Projects failed to meet business requirements
- In 2000, the Aberdeen Group found:
- 90% of projects came in late; and
- 30% of projects were cancelled before delivery.
- In 2004, the British Computer Society and the Royal Academy of Engineering published a report called The Challenges of Complex IT Projects. It took a slightly different approach, developing and reviewing a number of case studies, and found, among other things:
- The levels of professionalism in software engineering are generally lower than those in other branches of engineering;
- “The importance of project management is not well understood and usually under-rated and senior managers are often ill qualified to handle issues relating to complex IT projects” and
- “Risk management is critical to success in complex projects but is seldom applied effectively in the case of IT and software.”
- In a 2004 review of large projects in the State of California:
- 117 large projects were reviewed;
- 90 of them (90/117, or 77%) had a budget change greater than 10% approved;
- 65 of them (56%) were rated high or medium risk;
- Of the 52 low-risk projects, at least 36 (69% as of the study date) had a 10% or greater change in at least one of schedule, budget or scope.
- In 2006, the Auditor General of the Federal Government of Canada published a chapter in the Annual Report reviewing seven large IT projects. The following summary points emerged:
- Only two of the seven projects met all the criteria for well-managed projects
- Project governance and project management varied widely from project to project, ranging from good to seriously flawed.
- Five of the seven projects were allowed to proceed with a business case that was “incomplete or out-of-date or contained information that could not be supported” – which leads us right back to one of our favourite topics – Launch Conditions.
- In 2008, Stratmor (a consulting and investment advisory firm in the US Mortgage industry) surveyed a sample of executives in US mortgage businesses and found:
- 11% of projects were “successful” (on-time, on-budget, with full expected value)
- 78% of projects were “challenged” (i.e. late, over budget, reduced in benefits, or some combination of the three)
- 11% of projects were cancelled or abandoned
- This particular study is interesting for two reasons: 1) in an abundance of honesty, they actually had to redefine success in order to have ANY successful projects (which was reasonable in the context), and 2) it is indicative of the kinds of results one sees in industry-specific studies.
All of these studies are available on the Internet and widely cited. Some of them have been attacked, particularly the Standish Group Chaos Report, which kicked off the party. However, all of the major studies paint a common picture, and interviews with executives reinforce it. The exact statistics depend mostly on the exact questions asked, on somewhat subjective responses (to questions like “were users satisfied?”), and on definitions of “challenged”. Information available from PMI supports these findings, and the various project risk management communities identify more sources with similar results. There are lots of other studies, some focusing on industry sectors, some on particular kinds of failures, and many articles and publications poring over the bones of dead projects. Writ large, the picture that emerges, from study after study, forms a coherent whole.
What do we draw from all these crummy statistics?
Well, first off, projects remain hard. Larger projects are harder, but as a group, projects remain a challenging endeavour. The body of evidence is deep: if you want to succeed at a project, you have to do more than launch it. You have to devote yourself to planning its success. You must incorporate tactics and approaches that draw on the lessons of successful projects and avoid the pitfalls of unsuccessful ones.
Secondly, the range of failures and causes indicated in these studies is wide.
Thirdly, as interesting as these “failure” statistics are, the more you look, the more you realize that we’re looking at the wrong thing. What these studies do not tell us, in an actionable way, are two key things:
- What separates successful projects from unsuccessful ones?
- Which are the most important actions, investments, methods, tools, approaches, processes that we can focus on that will help make our projects more successful?
We’ll start to turn our minds to these questions soon.