
Analyzing the Tools of Software Engineering

Capers Jones

April 6, 1999

 

 

Abstract
Data collected by Software Productivity Research during software assessment and benchmark studies show major differences in the patterns of software tool usage between "leading" and "lagging" enterprises. Leading enterprises are defined as those in the top quartile of the companies evaluated by Software Productivity Research in terms of software productivity, schedule adherence, and quality results. Lagging enterprises are those in the lower quartile.

 

The most significant differences noted between laggards and leaders are in the areas of project management tools, quality assurance tools, and testing tools. Leaders tend to exceed laggards by a ratio of about 15 to 1 in the volumes of tools associated with project management and quality control. The function point metric is proving to be a useful analytical tool for evaluating the capacities of software tool suites.

Capers Jones, Chief Scientist
Artemis Management Systems, Inc.
Software Productivity Research, Inc. (an Artemis company)
http://www.spr.com

Copyright © 1997-1999 by Capers Jones, SPR, Inc. All Rights Reserved.

 

INTRODUCTION

There are hundreds or even thousands of commercial tools available for software development, software project management, maintenance, testing, quality control and other key activities associated with software projects. There are also hundreds of proprietary, internal tools which companies build for their own use but not for sale to others. For a fairly complete listing of the software engineering and project management tools marketed in the United States, refer to the annual tool catalogs edited by Alan Howard of ACR, Inc (Howard 1999).

 

Many commercial software tool vendors make advertising claims about the power of their tools in terms of increasing software development productivity, improving quality, or shortening schedules. Many of these claims are not supported by empirical data, and most appear to be exaggerated to a greater or lesser degree. Indeed, the exaggerations by tool vendors did much to discredit the value of Computer Aided Software Engineering (CASE), which tended to promise more than it performed.

 

Considering the importance of software to business and industry, it is surprising that the topic of software tool usage has been under-reported in the software literature. Indeed, since about 1990 much of the software engineering literature has been devoted to the subject of "software process improvement," and tools have been regarded as a minor background issue. Previous versions of this analysis were published by the author in two prior books, Assessment and Control of Software Risks (Jones 1994) and Patterns of Software Systems Failure and Success (Jones 1995).

 

The author's company, Software Productivity Research, performs both qualitative assessments and quantitative benchmark studies for clients on a daily basis. Part of our analysis involves collecting data on the numbers and kinds of tools utilized for software development, project management, and other related activities. The master catalogs of tools in our CHECKPOINT and KnowledgePlan data collection instruments include more than 500 kinds of tools.

 

In addition, we also record data on software productivity, quality, schedules, and other quantitative aspects of software performance as well as qualitative data on the methods and processes utilized. As of 1999 the total number of software projects in our knowledge base is rapidly pushing past 8000, and the total number of client organizations from which we have collected data is approaching 600 companies and some government agencies. We gather new data at a rate of perhaps 100 projects each month, so the volume of data we collect continues to expand.

 

In analyzing this data, we perform multiple regression studies on the factors that influence the outcomes of software projects. Although the software development process is indeed a key issue, tools also exert a major impact. This report discusses some of the differences in the patterns of tool usage noted between "lagging" organizations and "leading" organizations. In terms of tool usage, the most significant differences between laggards and leaders are in the domains of project management tools and quality control tools.

 

PERFORMANCE OF LAGGING, AVERAGE, AND LEADING PROJECTS

 

Before discussing the impact of tools, it is useful to provide some background data on the results which we associate with lagging, average, and leading software projects. In our measurement studies we use the function point metric for data normalization, and this report assumes version 4 of the function point counting rules published by the International Function Point Users Group (IFPUG 1995). Function points have substantially replaced the older "lines of code" (LOC) metric for all quantitative benchmark studies, since the LOC metric is not useful for large-scale studies involving multiple programming languages.

 

In our quantitative benchmark studies, as might be expected, the majority of projects are "average" in terms of productivity, quality, and schedule results. This report concentrates on the extreme ends of the data we collect: the outlying projects that are either much better or much worse than average. There are more insights to be gained by analyzing the far ends of the spectrum than by examining the projects that cluster around the center.

 

Let us consider what it means for a software project to be considered "average" or "leading" or "lagging" in quantitative terms. Although many attributes can be included, in this short report only six key factors will be discussed:

 

     

  1. The length of software development schedules 
  2. Productivity rates expressed in function points per staff month 
  3. Defect potentials expressed in defects per function point 
  4. Defect removal efficiency levels 
  5. Delivered defect levels 
  6. Rank on the capability maturity model (CMM) of the Software Engineering Institute

     

 

In general, the leading companies are better in all of these factors than either the average or lagging groups. That is, their schedules are shorter, their quality levels are better, and they place higher on the SEI CMM.

 

Average Software Projects
Because schedules vary with project size, the development schedules of average software projects can be approximated by raising the function point total of the project to the 0.4 power. This calculation yields the approximate number of calendar months between the start of requirements and delivery to clients. Thus for a project of 1000 function points, raising that size to the 0.4 power yields a development schedule of roughly 15.8 calendar months.
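
As a quick illustration, this rule of thumb is easy to reproduce. The sketch below is an illustrative Python calculation, not an SPR estimating tool; the exponents for the leading (0.35) and lagging (0.45) quartiles are taken from the later sections of this report.

```python
# Power-law schedule approximation described in this report: calendar
# months from start of requirements to delivery = FP ** exponent.
# The 0.35 and 0.45 exponents come from the quartile discussions below;
# this is an illustrative sketch, not an SPR tool.

SCHEDULE_EXPONENTS = {
    "leading": 0.35,   # upper quartile
    "average": 0.40,
    "lagging": 0.45,   # lower quartile
}

def schedule_months(function_points: float, quartile: str = "average") -> float:
    """Approximate calendar months from requirements to deployment."""
    return function_points ** SCHEDULE_EXPONENTS[quartile]

for quartile in ("leading", "average", "lagging"):
    print(f"{quartile}: {schedule_months(1000, quartile):.1f} months")
# leading: 11.2 months, average: 15.8 months, lagging: 22.4 months
```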

 

The defect potential, or number of possible bugs that might be found, for average projects is about 5 bugs per function point. This is the sum of bugs or defects found in five deliverable artifacts: requirements, design, source code, user documents, and "bad fixes" or secondary defects introduced while fixing other defects. The cumulative defect removal efficiency before delivery to clients is about 85%, so the number of bugs still latent at the time of delivery is about 0.75 bugs per function point.
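
The delivered-defect figure follows directly from the defect potential and the removal efficiency, as the minimal calculation below shows (using the average-project values just cited):

```python
# Delivered defects per function point =
#     defect potential * (1 - cumulative removal efficiency).
# Values are the average-project figures from this report; the leading
# and lagging analogues appear in the sections that follow.
defect_potential = 5.0       # bugs per function point
removal_efficiency = 0.85    # cumulative pre-delivery removal

delivered = defect_potential * (1 - removal_efficiency)
print(f"{delivered:.2f} bugs per function point at delivery")  # 0.75
```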

 

Software development productivity rates vary with the size and nature of the application, but are typically in the range of 6 to 12 function points per staff month for projects in the average zone.

 

Although the capability maturity model (CMM) published by the Software Engineering Institute (SEI) is based on qualitative rather than quantitative results, the data shown here for average projects is representative of projects that are at Level 1 of the CMM, but not far from Level 2.

 

Leading Software Projects
Software projects in the upper quartile of our data base have shorter schedules, higher quality levels, and higher productivity rates simultaneously. This is not surprising, because the cost, effort, and time to find and fix software defects is usually the largest cost driver and the most significant barrier to rapid development schedules.

 

To approximate the development schedule for projects in the upper quartile, raise the function point total of the application to the 0.35 power to generate the number of calendar months from requirements to deployment. For a sample project of 1000 function points in size, this calculation yields a result of about 11.2 calendar months from start of requirements until deployment.

 

The defect potential or number of possible bugs that might be found for leading projects is well below average, and runs to less than about 3 bugs per function point. The cumulative defect removal efficiency before delivery to clients is about 95%, so the number of bugs still latent at the time of delivery is about 0.15 bugs per function point.

 

The reduced levels of defect potentials stem from better methods of defect prevention, while the elevated rates of defect removal efficiency are always due to the utilization of formal design reviews and code inspections. Testing alone is insufficient to achieve defect removal rates higher than about 90%, so all of the top-ranked quality organizations utilize inspections as well.

 

Here too the productivity rates vary with the size and nature of the application, but are typically in the range of 15 to 50 function points per staff month for projects in the upper quartile. (The maximum rate can exceed 150 function points per staff month.)

 

In terms of the capability maturity model (CMM) published by the Software Engineering Institute (SEI), the data for the upper quartile shown here is representative of projects that are well into Level 3 of the CMM, or higher.

 

Lagging Software Projects
Software projects in the lower quartile of our data base are troublesome, and there is also a known bias in our data. Many projects that would fall into the lower quartile are cancelled before completion, and hence are not studied in any depth. Therefore the projects discussed here are those which were completed, but which were well below average in results.

 

The effect of this situation is to make the lagging projects, as bad as they are, look somewhat better than would be the case if all of the cancelled projects were included in the same set. Unfortunately in our consulting work we are seldom asked to analyze projects that have been terminated due to excessive cost and schedule overruns. We are often aware of these projects, but our clients do not ask to have the projects included in the assessment and benchmark studies that they commission us to perform.

 

To approximate the development schedule for projects in the lower quartile, raise the function point total of the application to the 0.45 power to generate the number of calendar months from requirements to deployment. For a sample project of 1000 function points in size, this calculation yields a result of about 22.4 calendar months.

 

The defect potential or number of possible bugs that might be found for lagging projects is well above average, and runs to more than about 7 bugs per function point. The cumulative defect removal efficiency before delivery to clients is only about 75%, so the number of bugs still latent at the time of delivery is an alarming 1.75 bugs per function point. Needless to say, lagging projects have severe quality problems, unhappy users, and horrendous maintenance expenses.

 

As will be discussed later, the lagging projects usually have no quality assurance tools or software quality assurance teams, and may also be careless and perfunctory in testing as well.

 

For laggards too the productivity rates vary with the size and nature of the application, but are typically in the range of 1 to 5 function points per staff month, although some projects in the lower quartile achieve only a fraction of a function point per staff month. (The minimum rate we’ve measured is 0.13 function points per staff month.)

 

In terms of the capability maturity model (CMM) published by the Software Engineering Institute (SEI), the data for the lower quartile is representative of projects that are well back at the rear of Level 1 of the CMM.

 

NOTE: For additional information on U.S. national averages and ranges for software schedules, productivity, and quality levels refer to the author’s book Applied Software Measurement (Jones 1996).

 

A TAXONOMY OF SOFTWARE TOOL CLASSES

 

This report is concerned with fairly specialized tools which support software projects in specific ways. There are of course scores of general-purpose tools used by millions of knowledge workers, such as word processors, spreadsheets, data bases, and the like. These general-purpose tools are important, but are not covered in depth in this report because they are not aimed at the unique needs of software projects.

 

Because tool usage is under-reported in the software literature, there is no general taxonomy for discussing the full range of tools which can be applied to software projects or are deployed within software organizations. In this report, the author has developed the following taxonomy for discussing software-related tools:

 

Project Management Tools
These are tools aimed at the software management community. These tools are often concerned with predicting the costs, schedules, and quality levels prior to development of software projects. The set of management tools also includes tools for measurement and tracking, budgeting, and other managerial activities that are performed while software projects are underway.

 

Note that there are a number of tools available for personnel functions such as appraisals. However, these are generic tools and not aimed specifically at project management or control of software projects themselves and hence are not dealt with in this report.

 

Software Engineering Tools
The set of software engineering tools comprises those used by programmers or software engineering personnel. There are many tools in this family, and they cover a variety of activities commencing with requirements analysis and proceeding through design, coding, change control, and testing.

 

Examples of the tools in the software engineering family include design tools, compilers, assemblers, and the gamut of features now available under the term "programming support environment."

 

Numerically there are more vendors and more kinds of tools within the software engineering family than in any of the other families of tools discussed in this report. The software engineering tool family has several hundred vendors and several thousand products in the United States alone, and similar numbers in Western Europe. Significant numbers of tools and tool vendors are also found in the Pacific Rim and South America.

 

Software Maintenance Engineering Tools
The tools in this family are aimed at stretching out the lives of aging legacy software applications. These tools are concerned with topics such as reverse engineering, code restructuring, defect tracking, reengineering, and other activities that center on existing applications.

 

Although the family of maintenance tools is growing, it is an interesting phenomenon that maintenance tools have never been as plentiful nor as well marketed as software development tools.

 

The impact of two massive maintenance problems, the year 2000 and the Euro-currency conversion, is triggering a burst of new maintenance tools, and for perhaps the first time in software's history the topic of maintenance is achieving a level of importance equal to that of new development.

 

Software Quality Assurance Tools
The tools in the software quality assurance (SQA) set are aimed at defect prediction, prevention, defect tracking, and the other "traditional" activities of SQA teams within major corporations.

 

It is an unfortunate aspect of the software industry that the family of quality-related tools was small during the formative years of the software occupation, the 1960s and 1970s. In recent years the number of quality-related tools has been increasing fairly rapidly, although Software Quality Assurance (SQA) tools are still found primarily in large and sophisticated corporations. Incidentally, as a class, software quality groups are often understaffed and underfunded.

 

Software Testing Tools
The family of testing tools has been expanding rapidly, and the vendors in this family have been on a wave of mergers and acquisitions. New tools are being marketed at an increasing pace to meet some of the specialized testing needs of the year 2000 and Euro-currency conversion tasks that are now sweeping the software industry.

 

The test tool community is logically related to the software quality assurance community, but the two groups are not identical in their job functions nor in the tools they utilize, although there is of course considerable overlap between the two tool sets.

 

A wave of mergers and acquisitions has been sweeping through the test and quality tool domain. As a result, test and quality assurance tools are now starting to be marketed by larger corporations than was formerly the case, which may increase sales volumes. For many years, test and quality assurance tools were developed and marketed by companies that tended to be small and undercapitalized.

 

Software Documentation Tools
Every software project requires some kind of documentation support, in terms of user guides, reference manuals, HELP text, and other printed matter. The more sophisticated software projects have a substantial volume of graphics and illustrations too, and may also use hypertext links to ease transitions from topic to topic.

 

The topic of documentation tools is undergoing profound changes under the impact of the world wide web and the Internet. Work-flow management and newer technologies such as HTML, web authoring tools, and hypertext links are beginning to expand the world of documentation from "words on paper" to a rich multi-media experience, in which on-line information may finally achieve the long-delayed prominence that has been talked about for almost 50 years.

 

TOOL USAGE ON AVERAGE, LAGGING, AND LEADING PROJECTS

 

This section of the report discusses the ranges and variations of tools noted on lagging, average, and leading projects. Two primary kinds of information are reported in this section:

 

     

  1. Variances in the numbers of tools used in lagging and leading projects 
  2. Variances in the function point totals of the lagging and leading tool suites

     

 

The counts of the numbers of tools are based on assessment and benchmark results and our interviews with project personnel. Although projects vary, of course, deriving the counts of tools is reasonably easy to do.

 

The sizes of the tools expressed in function points are more difficult to arrive at, and have a larger margin of error. For some kinds of tools, such as cost estimating tools, actual sizes are known in both function point and lines of code form because the author's company builds such tools.

 

For many tools, however, the size data is only approximate and is derived either from "backfiring" (conversion from lines of code to function points) or from analogy with tools of known sizes. The size ranges for tools in this report are interesting, but not particularly accurate. The purpose of including the function point size data is to examine the utilization of tool features in lagging and leading projects.
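
As a minimal sketch of how backfiring works: a count of logical source statements is divided by a language-specific ratio of statements per function point. The ratios below are illustrative values of the general magnitude found in published backfiring tables, not exact SPR figures.

```python
# Hedged sketch of "backfiring": converting logical source statements
# into approximate function points with language-specific ratios.
# The ratios are illustrative assumptions, not quoted SPR tables.

STATEMENTS_PER_FUNCTION_POINT = {
    "assembly": 320,
    "c": 128,
    "cobol": 107,
}

def backfire(logical_statements: int, language: str) -> float:
    """Approximate function points from a logical-statement count."""
    return logical_statements / STATEMENTS_PER_FUNCTION_POINT[language]

# Under these assumed ratios, a 107,000-statement COBOL application
# backfires to roughly 1,000 function points.
print(round(backfire(107_000, "cobol")))  # 1000
```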

 

In general, the lagging projects depend to a surprising degree on manual methods and have rather sparse tool usage in every category except software engineering, where there are comparatively small differences between the laggards and the leaders.

 

Project Management Tools on Lagging and Leading Projects
The differences in project management tool usage are both significant and striking. The lagging projects typically utilize only three general kinds of project management tools, while the leading projects utilize 18. Indeed, the project management tool family is one of the key differentiating factors between lagging and leading projects.

 

In general, the managers on the lagging projects typically use manual methods for estimating project outcomes, although quite a few may use schedule planning tools such as Microsoft Project. However, project managers on lagging projects tend to be less experienced in the use of planning tools and to utilize fewer of the available features. The sparseness of project management tools does much to explain why so many lagging software projects tend to run late, to exceed their budgets, or to behave in more or less unpredictable fashion. Table 1 shows project management tool ranges:

 

Table 1: Numbers and Size Ranges of Project Management Tools

 

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

Project Management          Lagging   Average   Leading
Project planning              1,000     1,250     3,000
Project cost estimating           -         -     3,000
Statistical analysis              -         -     3,000
Methodology management            -       750     3,000
Year 2000 analysis                -         -     2,000
Quality estimation                -         -     2,000
Assessment support                -       500     2,000
Project measurement               -         -     1,750
Portfolio analysis                -         -     1,500
Risk analysis                     -         -     1,500
Resource tracking               300       750     1,500
Value analysis                    -       350     1,250
Cost variance reporting           -       500     1,000
Personnel support               500       500       750
Milestone tracking                -       250       750
Budget support                    -       250       750
Function point analysis           -       250       750
Backfiring: LOC to FP             -         -       750
Function point subtotal       1,800     5,350    30,250
Number of tools                   3        10        18

 

 

 

 

 

 

 

By contrast, the very significant use of project management tools on the leading projects results in one overwhelming advantage: "No surprises." The number of on-time projects in the leading set is far greater than in the lagging set, and all measurement attributes (quality, schedules, productivity, etc.) are also significantly better.

 

Differences in the software project management domain are among the most striking in terms of the huge differential of tool usage between the laggards and leaders. The variance in the number of tools deployed is about 6 to 1 between the leaders and the laggards, while the variance in tool capacities expressed in function points is a ratio of almost 17 to 1. These differences are far greater than in almost any other category of tool.
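
Both ratios fall straight out of the subtotal rows of table 1, as the brief recalculation below illustrates (the figures are simply copied from the table):

```python
# Recomputing the leader-to-laggard ratios from table 1's subtotal rows.
lagging = {"tools": 3, "function_points": 1_800}
leading = {"tools": 18, "function_points": 30_250}

print(f"{leading['tools'] / lagging['tools']:.0f} to 1")                      # 6 to 1
print(f"{leading['function_points'] / lagging['function_points']:.1f} to 1")  # 16.8, almost 17 to 1
```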

 

Software Engineering Tools on Lagging and Leading Projects
The set of software engineering tools deployed has the smallest variance of any tool category between the leader and laggard classes. In general, unless a critical mass of software engineering tools is deployed, software cannot be developed at all, so the basic needs of the software community have built up a fairly stable pattern of software engineering tool usage.

 

Table 2 shows the numbers and volumes of software engineering tools deployed, but as can easily be seen the variations are surprisingly small between the lagging, average, and leading categories.

 

Table 2: Numbers and Size Ranges of Software Engineering Tools

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

Software Engineering        Lagging   Average   Leading
Compilers                     3,500     3,500     3,500
Program generators                -     3,500     3,500
Design tools                  1,000     1,500     3,000
Code editors                  2,500     2,500     2,500
GUI design tools              1,500     1,500     2,500
Assemblers                    2,000     2,000     2,000
Configuration control           500       750     2,000
Data modeling                   750       750     1,500
Debugging tools                 750     1,000     1,250
Data base design                750       750     1,000
Capture/playback                500       500       750
Library browsers                500       500       750
Function point subtotal      14,250    18,750    24,250
Number of tools                  11        12        12

 

 

 

 

 

 

 

There are some differences, of course, but the differences are very minor compared to the much more striking differences in the project management and quality assurance categories.

 

The overall features and sizes of software engineering tools have been increasing as tool vendors add more capabilities. About 10 years ago when the author first started applying function point metrics to software tools, no software engineering tools were larger than 1000 function points in size, and the total volume of function points even among the leading set was only about 10,000 function points. A case can be made that the power or features of software engineering tools have tripled over the last 10 years.

 

As can be seen from table 2, although there are some minor differences in the tool capacities between the leaders and the laggards, the differences in the number of software engineering tools deployed is almost nonexistent.

 

A very common pattern noted in assessment and benchmark studies is for the software development teams and tool suites to be fairly strong, but the project management and quality tool suites to be fairly weak. This pattern is often responsible for major software disasters, such as the long delay in opening the Denver Airport because the luggage-handling software was too buggy to be put into full production.

 

Software Maintenance Engineering Tools on Lagging and Leading Projects
When the focus changes from development to maintenance (defined here as the combination of fixing bugs and making minor functional enhancements) the tool differentials between the leaders and the laggards are much more significant than for development software engineering.

 

For many years, software maintenance has been severely understated in the software literature, and severely underequipped in the tool markets. Starting about 10 years ago the numbers of software personnel working on aging legacy applications began to approach and in some cases exceed the numbers of personnel working on brand new applications. This phenomenon brought about a useful but belated expansion in software maintenance tool suites. Table 3 shows the variations in software maintenance engineering tools:

 

 

Table 3: Numbers and Size Ranges of Maintenance Engineering Tools

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

Maintenance Engineering     Lagging   Average   Leading
Reverse engineering               -     1,000     3,000
Reengineering                     -     1,250     3,000
Code restructuring                -         -     1,500
Configuration control           500     1,000     2,000
Year 2000 test support            -       500     1,500
Customer support                  -       750     1,250
Debugging tools                 750       750     1,250
Defect tracking                 500       750     1,000
Complexity analysis               -         -     1,000
Year 2000 search engines          -       500     1,000
Function point subtotal       1,750     6,500    16,500
Number of tools                   3         8        10

 

 

 

 

 

 

 

As the overall personnel balance began to shift from new development to maintenance, software tool vendors began to wake up to the fact that a potential market was not being tapped to the fullest degree possible.

 

The differences between the leaders and the laggards in the maintenance domain are fairly striking and include about a 3 to 1 differential in numbers of tools deployed, and a 9.4 to 1 differential in the function point volumes of tools between the leaders and the laggards.

 

The emergence of two of the most massive business problems in human history is having a severe impact on maintenance tools, and on maintenance personnel as well. The on-rushing year 2000 software problem and the ill-timed Euro-currency conversion work are both triggering major increases in software maintenance tools that can deal with these specialized issues. If this analysis is repeated circa 2001, it is likely that the "leaders" will have almost twice the maintenance tool capacities shown here, although the laggards may not change significantly. For additional information on the year 2000 and Euro-currency problems refer to the author's book The Year 2000 Software Problem - Quantifying the Costs and Assessing the Consequences (Jones 1998).

 

Software Quality Assurance Tools on Lagging and Leading Projects
When the software quality assurance tool suites are examined, one of the most striking differences of all springs into focus. Essentially the projects and companies in the "laggard" set have no software quality assurance function at all, and hence no SQA tool suites either, as can be seen in table 4.

 

 

Table 4: Numbers and Size Ranges of Software Quality Tools

 

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

Quality Assurance           Lagging   Average   Leading
Quality estimation                -         -     2,000
Data quality analysis             -         -     1,250
QFD support                       -         -     1,000
TQM support                       -         -     1,000
Inspection support                -         -     1,000
Reliability estimation            -         -     1,000
Defect tracking                   -       750     1,000
Complexity analysis               -       500     1,000
Function point subtotal           0     1,250     9,250
Number of tools                   0         2         8

 

 

 

 

 

 

 

By contrast, the leaders in terms of delivery, schedule control, and quality all have well-formed independent software quality assurance groups that are supported by powerful and growing tool suites.

 

Unfortunately, even leading companies are sometimes understaffed and underequipped with software quality assurance tools. In part this is because so few companies have software measurement and metrics programs in place, which means the significant business value of achieving high levels of software quality is often unknown to the management and executive community.

 

Several tools in the quality category are identified only by their initials, and need to have their purpose explained. The set of tools identified as "QFD support" are those which support the special graphics and data analytic methods of the "quality function deployment" methodology.

 

The set of tools identified as "TQM support" are those which support the reporting and data collection criteria of the "total quality management" methodology.

 

The other tools associated with the leaders are the tools of the trade of the software quality community: tools for tracking defects, tools to support design and code inspections, quality estimation tools, reliability modeling tools, and complexity analysis tools.

 

Complexity analysis tools are fairly common, but their usage is much more frequent among the set of leading projects than among either average or lagging projects. Complexity analysis is a good starting point prior to beginning complex maintenance work such as year 2000 repairs or Euro-currency updates.

 

Another good precursor tool class prior to starting year 2000 repairs or Euro-currency work would be the use of code restructuring tools, although these tools are only available for common languages such as COBOL, FORTRAN, and C and not for the less common languages used for legacy applications such as JOVIAL, CMS2, CHILL, or CORAL.

 

Unfortunately, since the laggards tend to have no quality assurance tools at all, the use of ratios is not valid in this situation. In one sense, it can be said that the leading projects have infinitely more software quality tools than the laggards, but this is simply because the lagging set often have zero quality tools deployed.

 

Software Testing Tools on Lagging and Leading Projects
Although there are significant differences between the leading and lagging projects in terms of testing tools, even the laggards test their software and hence have some testing tools available.

 

Note that there is some overlap in the tools used for testing and the tools used for quality assurance. For example, test teams and software quality assurance teams may both utilize complexity analysis tools.

 

Incidentally, testing by itself has never been fully sufficient to achieve defect removal efficiency levels in the high 90% range. All of the "best in class" software quality organizations use a synergistic combination of design reviews, code inspections, and multiple testing stages. This combined approach can lead to defect removal efficiency levels that may top 99% in best case situations, and always top the current U.S. average of 85% or so.
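
A simple way to see why a multi-stage combination is needed is to compound the per-stage removal efficiencies: each stage removes a fraction of the defects still present, so cumulative efficiency is 1 minus the product of the per-stage "misses." The per-stage values below are assumed, illustrative figures rather than SPR measurements, but they show how adding reviews and inspections to a testing-only regime lifts cumulative efficiency from the mid-70% range into the mid-90% range.

```python
# Compounding model of cumulative defect removal efficiency.
# Per-stage efficiencies are assumed, illustrative values.

def cumulative_efficiency(stage_efficiencies):
    missed = 1.0
    for e in stage_efficiencies:
        missed *= 1.0 - e       # fraction of defects surviving this stage
    return 1.0 - missed

testing_only = [0.30, 0.35, 0.30, 0.25]         # four test stages
with_inspections = [0.55, 0.60] + testing_only  # add design review, code inspection

print(f"testing only:      {cumulative_efficiency(testing_only):.0%}")      # 76%
print(f"reviews + testing: {cumulative_efficiency(with_inspections):.0%}")  # 96%
```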

 

Table 5: Numbers and Size Ranges of Software Testing Tools

 

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

Testing                     Lagging   Average   Leading
Test case generation              -         -     1,500
Complexity analysis               -       500     1,500
Year 2000 test support            -       500     1,500
Data quality analysis             -         -     1,250
Defect tracking                 500       750     1,000
Test library control            250       750     1,000
Performance monitors              -       750     1,000
Capture/playback                  -       500       750
Test path coverage              100       200       350
Test case execution               -       200       350
Function point subtotal         850     4,150    10,200
Number of tools                   3         8        10

 

 

 

 

 

 

 

The difference in the number of test tools deployed is about 3 to 1 between the leading and lagging projects. However, the tool capacities vary even more widely, and the range of tool volumes is roughly 12 to 1 between the leaders and the laggards.

 

This is one of the more interesting differentials because all software projects are tested and yet there are still major variations in numbers of test tools used and test tool capacities. The leaders tend to employ full-time and well-equipped testing specialists while the laggards tend to assign testing to development personnel, who are often poorly trained and poorly equipped for this important activity.

 

For a more extensive discussion of the differences between leaders and laggards in terms of both quality assurance and testing refer to the author's book Software Quality - Analysis and Guidelines for Success (Jones 1997).

 

That book also discusses variations in the numbers and kinds of testing activities performed, as well as variations in the use of defect tracking tools, formal design and code inspections, quality estimation, quality measurements, and many other differentiating factors.

 

Unfortunately, none of the major test tool vendors, and only a few of the quality assurance tool vendors, have any empirical data on software quality or provide information on defect removal efficiency levels. The subject of how many bugs can actually be found by various kinds of review, inspection, and test is the single most important topic in the test and quality domain, but the only published data on defect removal tends to come from software measurement and benchmark companies rather than from test tool and quality tool companies.

 

Software Documentation Tools on Lagging and Leading Projects
Almost all software projects require documentation, but very few are documented extremely well. The set of documentation tools is undergoing a profound transformation as on-line publishing and the world wide web begin to supplant conventional paper documents.

 

Note that some of the tools included here in the documentation section are also used for requirements, specifications, plans, and other documents throughout the software development cycle. For example, almost every knowledge worker today makes use of "word processing" tools, so these tools are not restricted to the software documentation domain.

 

As on-line publishing grows, this category is interesting in that the "average" and "leading" categories are fairly close together in terms of documentation tool usage. However, the laggards are still quite far behind in terms of both the numbers of tools and the overall capacities deployed.

 

 

Table 6: Numbers and Size Ranges of Software Documentation Tools

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

Documentation Support       Lagging   Average   Leading
Word processing               3,000     3,000     3,000
Web publishing                    -     3,000     3,000
Desktop publishing            2,500     2,500     2,500
Graphics support                500       500     2,500
Multimedia support                -       750     2,000
Grammar checking                  -         -       500
Dictionary/thesaurus            500       500       500
Hypertext support                 -       250       500
Scanning                          -         -       300
Spell checking                  200       200       200
Function point subtotal       6,700    10,700    15,000
Number of tools                   5         8        10

 

 

 

 

 

 

 

As web publishing and DVD drives become more common, it is likely that conventional paper documents will gradually be supplanted by on-line documents. The advent of the "paperless office" has been predicted for years but stumbled due to the high costs of storage.

 

Now that optical storage is exploding in capacity and declining in cost, on-line storage is substantially cheaper than paper storage, so the balance is beginning to shift towards on-line documentation and the associated tool suites.

 

In the documentation domain the variance between the leaders and the laggards is 2 to 1 in the number of tools deployed, and also just over 2 to 1 in the volumes of tools deployed. The differences in the documentation category are interesting, but not so wide as the differentials for project management and quality assurance tools.

 

Overall Tool Differences Between Laggards and Leaders
To summarize this analysis of software tool differentials between lagging and leading organizations, table 7 shows the overall numbers of tools noted in our assessment and benchmark studies.

 

 

Table 7: Numbers of Tools on Leading, Average, and Laggard Software Projects

 

 

 

 

 

 

Tool Category               Lagging   Average   Leading   Ratio (Leaders to Laggards)
Project Management                3        10        18   6 to 1
Software Engineering             11        12        12   1 to 1
Maintenance Engineering           3        10        10   3 to 1
Quality Assurance                 0         2         8   NA
Testing                           3         8        10   3 to 1
Documentation                     5         8        10   2 to 1
TOTAL                            25        50        68   3 to 1

 

 

 

 

 

 

 

 

As can be seen, there is roughly a 3 to 1 differential in the numbers of tools deployed on leading projects as opposed to the numbers of tools on lagging projects. The major differences are in the project management and quality assurance tools, where the leaders are very well equipped indeed and the laggards are almost exclusively manual and lack most of the effective tools for both project management and quality purposes.

 

When tool capacities are considered, the range of difference between the lagging and leading sets of tools is even more striking and the range between leaders and laggards jumps up to about a 4 to 1 ratio.

 

The use of function point totals to evaluate tool capacities is an experimental method with a high margin of error, but the results are interesting. Although not discussed in this report, the author's long-range studies over a 10 year period have found a substantial increase in the numbers of function points in all tool categories.

 

It is not completely clear if the increase in functionality is because of useful new features, or merely reflects the "bloat" which has become so common in the software world. For selected categories of tools such as compilers and programming environments, many of the new features appear to be beneficial and quite useful.

 

The added features in many project management tools, such as cost estimating tools, methodology management tools, and project planning tools, often provide valuable new capabilities that were long needed.

 

For other kinds of tools however, such as word processing, at least some of the new features are of more questionable utility and appear to have been added for marketing rather than usability purposes.

 

Table 8 illustrates the overall differences in tool capacities using function point metrics as the basis of the comparison:

 

 

Table 8: Capacities of Tools on Leading, Average, and Laggard Software Projects

(Size data expressed in terms of function point metrics)

 

 

 

 

 

 

 

 

Tool Category               Lagging   Average   Leading   Ratio (Leaders to Laggards)
Project Management            1,800     5,350    30,250   17 to 1
Software Engineering         14,250    18,750    24,250   2 to 1
Maintenance Engineering       1,750     7,500    16,500   9 to 1
Quality Assurance                 0     1,250     9,250   NA
Testing                         850     4,150    10,200   12 to 1
Documentation                 6,700    10,700    15,000   2 to 1
TOTAL                        25,350    47,700   105,450   4 to 1

 

 

 

 

 

 

 

 

Unfortunately both tables 7 and 8 are somewhat awkward in terms of ratios, since the laggards tend to have 0 tools deployed for the quality assurance domain. When tool capacities are considered, the major differences can be found in project management, maintenance engineering, testing, and quality assurance.

 

SUMMARY AND CONCLUSIONS

 

Although software tools have been rapidly increasing in terms of numbers and features, the emphasis on software process improvement in the software engineering literature has slowed down research on software tool usage.

 

Both software processes and software tools have significant roles to play in software engineering, and a better balance is needed in research studies that can demonstrate the value of both tools and process activities.

 

The use of function point metrics for exploring software tool capacities is somewhat experimental, but the results to date have been interesting and this method may well prove to be useful. Long-range analysis by the author over a 10 year period using function point analysis has indicated that software tool capacities have increased substantially, by a range of about 3 to 1. It is not yet obvious that the expansion in tool volumes has added useful features to the software engineering world, or whether the expansion simply reflects the "bloat" that has been noted in many different kinds of software applications.

 

REFERENCES AND READINGS

 

Howard, Alan (Ed.); Software Productivity Tool Catalogs (in seven volumes); Applied Computer Research (ACR), Phoenix, AZ; 1997; 300 pages.

IFPUG; Counting Practices Manual, Release 4; International Function Point Users Group, Westerville, OH; April 1995; 83 pages.

Jones, Capers; Applied Software Measurement; McGraw-Hill, 2nd edition; 1996; ISBN 0-07-032826-9; 618 pages.

Jones, Capers; Assessment and Control of Software Risks; Prentice Hall, 1994; ISBN 0-13-741406-4; 711 pages.

Jones, Capers; Patterns of Software Systems Failure and Success; International Thomson Computer Press, Boston, MA; December 1995; ISBN 1-850-32804-8; 292 pages.

Jones, Capers; The Year 2000 Software Problem - Quantifying the Costs and Assessing the Consequences; Addison Wesley, Reading, MA; 1998; ISBN 0-201-30964-5; 303 pages.

Jones, Capers; Software Quality - Analysis and Guidelines for Success; International Thomson Computer Press, Boston, MA; 1997; ISBN 1-85032-876-6; 492 pages.
