Project Management

Software Assessments, Benchmarks, and Best Practices

Author: Capers Jones

ISBN: 0201485427


Do You Measure Up?
by Alan Zeichick

Nothing is more guaranteed to strike fear into the heart of a middle manager than the phrase “best practices.” When the company’s top brass says, “We need to measure your department’s performance against best-practices metrics,” or “We are implementing new policies to ensure that our software-development teams are using the industry’s best practices,” that’s often perceived (rightly or wrongly) as a message that the middle manager does not know how to do his job. Even worse, outside consultants—who don’t know the business and don’t know the unique challenges the manager faces—will soon be stomping all over his project and changing things for the sake of change.

Asking 26-year-old outside consultants to create sweeping best-practices policies doesn’t always work. But sometimes best practices really are the best practices, because they’re based on empirical evidence. That’s the case with Capers Jones’ latest work, “Software Assessments, Benchmarks, and Best Practices,” in which he provides his own list of best practices (and worst practices), statistically correlated through his company’s extensive research into why some software-development projects succeed and why others fail: they’re cancelled before completion, run drastically over budget, finish very late or don’t deliver on the agreed-upon requirements.

“Software Assessments” may be Jones’ most important book yet. He starts by arguing that if you’re not objectively measuring your software process—both quantitatively and qualitatively—you can’t answer questions such as: Is our productivity better or worse than our competition’s? What can we do to improve lagging areas? How big an investment should we make in improving productivity, and what will the ROI be? Where should we invest the money? Should we outsource development, and if so, what should be outsourced, and how can we measure the success of that program?

Jones’ company, Software Productivity Research (SPR), is one of the leading consulting firms in the area of software process assessment and benchmarking. SPR, along with other organizations such as the Software Engineering Institute, Howard Rubin Associates and the ISO, provides solid methodologies and services to help companies gather those exact metrics.

But “Software Assessments” isn’t a plug for SPR’s services; Jones provides this background about how the assessments are done in order to present the results. That’s a lot of material to slog through, though, including countless pages about why measuring productivity based on lines of code is bad (in some cases, Jones calls the use of LOC measurements “professional malpractice”), why the correct basis for measurement should be function points, plus the history of function-point analysis dating back to Ancient Carthage. Perhaps I exaggerate…but not much.
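To see why Jones is so hard on LOC metrics, consider a quick illustration of the arithmetic (my numbers, not his): two implementations of the same application, one in assembly and one in a higher-level language, deliver identical functionality (say, 100 function points) but very different line counts. A minimal Python sketch:

    # Hypothetical figures, not from the book: two projects delivering
    # the same 100 function points of functionality.
    projects = {
        "assembly":   {"loc": 30_000, "cost": 150_000},  # more code, more effort
        "high_level": {"loc": 5_000,  "cost": 60_000},   # less code, less effort
    }
    FUNCTION_POINTS = 100  # identical delivered functionality

    for name, p in projects.items():
        cost_per_loc = p["cost"] / p["loc"]
        cost_per_fp = p["cost"] / FUNCTION_POINTS
        print(f"{name:>10}: ${cost_per_loc:.2f}/LOC, ${cost_per_fp:,.2f}/function point")

    # Cost per LOC makes the assembly project look cheaper per unit of
    # "output" ($5.00 vs. $12.00), even though it cost 2.5 times as much
    # to deliver the same functionality. Cost per function point
    # ($1,500 vs. $600) gets the comparison right.

The paradox is that the more expressive the language, the worse a team looks on a LOC basis, which is exactly backward.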

My eyes lit up while reading Chapter 4; that’s where “Software Assessments” begins to fly. Here, Jones presents 36 factors that SPR uses to perform its benchmark studies; the other firms in the field have a similar methodology, although details vary. Those 36 factors are broken down into six separate groups: classification of the software project, project-specific factors, technology, social factors, ergonomics and international concerns. 

Jones walks through these areas, pulling out the impact that these factors generally have on project success. For example, the reuse of high-quality test material, source code, documents, architectures and so on can have a 350 percent positive impact on a project’s success, and providing at least 10 days of training per developer or manager annually can have an 8 percent positive impact. But the reuse of poor-quality modules can have a 300 percent negative impact—and management inexperience has a 90 percent negative impact. Ouch!

The remainder of the book is organized around Jones’ six software project types: systems software (used to control physical devices, such as operating systems and embedded systems); commercial software (for sale externally); information systems (built for in-house use); outsourced software; military software; and end-user software (written by the intended user). One major section of the book is devoted to each of these different project types; there’s a great deal of repetition, as each of those chapters is designed to stand alone.

Take the chapter on management information systems development processes. Jones observes that compared with other software-development project types, MIS producers are among the worst when it comes to the use of metrics, quality control, test planning and test-case development, and milestone and cost tracking. As you might expect, this would have a disproportionate impact on larger projects, and indeed Jones reports that while 66 percent of projects of 1,000 function points (such as an individual application) were completed early or on time, only 26 percent of projects of 10,000 function points (such as a payroll system) met that goal—and that fully 39 percent of projects of that size were cancelled prior to completion.

None of us should be surprised at those results. But why does this happen—and what can you (or that 26-year-old consultant) do about it? Jones provides 33 pages of suggested best practices for MIS projects: success factors that he found applied nearly universally in organizations where MIS development projects succeed, and which he rarely found in organizations where failure was the norm.
For example, one best practice is spending five to 10 days per year training project managers in how to create project plans and cost estimates. Another is to use formal milestone tracking for projects larger than 1,000 function points. Jones gets pretty granular: variables and branch instructions should have meaningful names, while a bad practice is to use “cute” names, such as the names of movie stars or “Star Trek” characters. Jones’ best practices cover not only coding and project management, but even hiring and compensation.
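To make that naming guideline concrete, here is a small before-and-after sketch of my own in Python (the book’s examples may differ):

    # Bad practice: "cute" names reveal nothing about intent.
    def kirk(spock, bones):
        return spock - bones

    # Best practice: meaningful names make the logic self-documenting.
    def net_pay(gross_pay, total_deductions):
        if total_deductions > gross_pay:
            raise ValueError("deductions exceed gross pay")
        return gross_pay - total_deductions

The second version needs no comment explaining what it computes; the first needs one at every call site.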

The other chapters are equally detailed and tailored to their domains. The section on outsourced software, for example, includes metrics and best practices for ensuring that the client and the contract development firm don’t sue each other.

Some management books are fluffy or trendy. Not “Software Assessments.” If your organization is serious about improving its software development processes, this book may be the best $50 you’ve ever spent. 

Reprinted with permission from SD Times. Originally appeared in Issue 11, August 1, 2000.

