Software Project Failures

I recently joined an informal IT executive round-table discussion in Kansas City where the topic of software development project failure (or success) rates came up. The opening remark referenced the now-famous 1994 Standish Group CHAOS report, which officially put everyone on notice that the failure rate was terrible and that it was costing companies countless millions in resources. According to the report, the success rate in 1994 was 16%. It "improved" to 28% in their 2001 study and made its way up to 31% in their most recent study. At that trajectory, the chance of success on a software project should reach 50/50 within the next 5-10 years. Better, but still far from optimal. Our discussion revolved around the reasons for this slowly improving trend. All the classic reasons were brought up, followed by the typical fixes, most of which everyone was already working on to some degree. The primary position I outlined during the discussion is detailed below.

I recently read an article called Unraveling the Mystery of Software Development Success that made some interesting points, summed up in the following graph.

The trend clearly shows that we've focused on the low-hanging fruit, namely improvements in tools and methodologies, but the really hard part (and the ultimate goal) of effectively translating requirements into working software is going to take sustained effort to get right over the long run. The tools are better. There are many choices, with new products or new versions of existing products coming out almost monthly. And this is part of the problem. I work with teams that take huge productivity hits every time they adopt a new tool or the latest version of one. Sometimes they can justify the adoption, but as often as not it's driven by a fear of being "left behind" while everyone else is using a new tool except us (a topic for another blog post). I also help teams with process improvement, where approaches drawing on lessons from lean manufacturing tend to be the most effective. Teams that adopt or "install" the latest methodology suffer the same productivity hits as tool adopters do, and they lose more credibility with each failed attempt. Instead, a team's methodology should evolve from its culture and its resources and stay focused on the end goal, which is working software, letting that goal reveal the process holes so they can be systematically filled.

Both the convergence and the gap between the two curves above can best be summed up by the notion of "Problem or Program." Do you focus on the business problem in isolation and try to define it well enough to communicate it to the development team to turn into working code, or do you try to compensate for inefficiencies in problem definition and management by being highly productive with tools and process? In other words: if we misunderstand and incorrectly implement a requirement, let's do it quickly and then correct the mistake just as quickly.

Rather than managing problem and program separately, the longer-term solution (search for code generation, model-driven development, and intentional or declarative programming) will focus on bridging the gap between the two by more tightly integrating requirements definition with the output of working software.
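To make the declarative-programming idea concrete, here is a minimal sketch in Python of "requirements as executable specification": business rules are declared as data, and the working validation code is derived from the declarations rather than hand-translated by developers. The rules and field names here are invented purely for illustration, not taken from any real system.

```python
# Hypothetical order-validation requirements, declared as data.
# Each rule is (field, predicate, error message) -- the "requirement"
# and the "implementation" are the same artifact.
RULES = [
    ("quantity", lambda v: isinstance(v, int) and v > 0, "quantity must be a positive integer"),
    ("email", lambda v: isinstance(v, str) and "@" in v, "email must contain '@'"),
]

def validate(record):
    """Return the list of rule violations for a record (empty list = valid)."""
    return [msg for field, ok, msg in RULES
            if not ok(record.get(field))]

print(validate({"quantity": 3, "email": "a@b.com"}))  # []
print(validate({"quantity": 0, "email": "nope"}))
```

The point of the sketch is that when a requirement changes, the change happens in one declarative place, and the "gap" between stating the rule and implementing it shrinks to nearly zero.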

People (especially business people) fear complexity and the unknown. Programmers don't fear it per se; they attempt to ignore it and keep their focus on the code, where they're most comfortable. The most competitive organizations tend to embrace the difficult job of reaching for the high-hanging (yet juiciest) fruit located somewhere in the unknown between Problem and Program.