In high school, you may have learned that pi is equal to roughly three and change. Now that you're a PM professional, you know that PI is worth a whole lot more than that. Despite the popularity of Process Improvement these days, there hasn't been much discussion of the schedules, costs, and real value of bringing your software development to the highest levels. Until now.
Abstract

The topic of software process improvement is now very popular in the United States, Europe, and the Pacific Rim. There are many local chapters of the well-known software process improvement network, or SPIN. Unfortunately, the popularity of a topic is not commensurate with the quantitative data available about it. In the case of process improvement, there is a severe shortage of information on the schedules, costs, and results of moving from marginal to superior performance in software development.
This report presents a 36-month case study, derived from several of SPR's clients, to illustrate four tangible aspects of software process improvement: 1) what it costs to achieve software excellence; 2) how long it will take to achieve excellence; 3) what kind of value will result from achieving software excellence; and 4) what kinds of quality, schedule, and productivity levels can be achieved.
Introduction

Thanks to the pioneering work of Watts Humphrey (Humphrey 1989) and the Software Engineering Institute (SEI), the topic of software process improvement has become one of the major themes of the software engineering community.
The topic of software process improvement is featured in many international conferences. It has also been the impetus for the creation of a non-profit software process improvement network or SPIN, with chapters in many major cities.
Although the qualitative aspects of software process improvement have been published in scores of journals and books, there has been comparatively little empirical data published on the investments, schedules, costs, and benefits of moving from "marginal" to "superior" in terms of software development practices, tools, and infrastructure. For an interesting view of the status of software assessments and improvements, the annual process maturity profiles published by the Software Engineering Institute summarize trends from 1995 through 1999 (SEI Process Maturity Profile, March 2000).
This report, derived from several clients of Software Productivity Research (SPR), discusses the observed rates at which companies have been able to make tangible improvements in their software processes. Because most of SPR's clients are major corporations in the Fortune 500 class, this report concentrates on the results of process improvement in large organizations that employ at least 1,000 software professionals.
While many of the same principles are valid for small companies or even for government agencies, the timing and investments required can be significantly different from those discussed in this report. In general, smaller organizations can move more rapidly and do not require the same level of investment as major corporations. See the same author's report "Becoming Best in Class" for additional information on small company improvement costs (Jones 1998).
The phenomenon of higher costs for larger companies is true for many other topics besides process improvement. For example, finding office space for a company with only 10 employees is far easier, and usually much cheaper, than finding office space for 1,000 employees.
In general, technology transfer is slower in large corporations than in small companies. Also, large organizations tend to build up a bureaucratic structure with many levels of approval needed before doing anything new and different. All of these factors tend to slow down the progress of widespread undertakings such as software process improvement programs.
On the other hand, large corporations often have a process improvement infrastructure and can afford to create special software "centers of excellence" or research laboratories, which smaller organizations cannot afford.
A Process Improvement Case Study

In order to give a context to the speed and value of software process improvements, it is convenient to think in terms of real organizations. In this report we will synthesize the results from several of our clients and set up a generic but typical organization that is interested in pursuing a course of software process improvement.
Let us assume that the organization illustrated in this case study is one of a score of software development laboratories for a major telecommunications manufacturer. Total software employment is 1,000 personnel, including software engineers, project managers, quality assurance, technical writers, testing specialists, and perhaps 25 other specialized and general occupations.
To simplify the case study example, we can assume that half of the software population is associated with new development work, and the other half is devoted to maintenance and enhancement of legacy applications: 500 personnel in each wing. This is a simplifying assumption and the ratio of development to maintenance personnel can vary widely from company to company and lab to lab.
Assume that in 1997 the lab director was charged by corporate management to improve the lab's on-time performance of software schedules, and to improve software quality and customer support as well.
The Assessment and Baseline Analysis

In order to fulfill the corporate mandate of improving software performance, the lab director commissioned an external process assessment and a quantitative benchmark study by an outside consulting organization. As part of the final report, the consulting group presented the quantitative results shown in table 1, which is based on the kinds of data produced by Software Productivity Research (SPR). However, in order to understand table 1, several terms need to be introduced and defined.
A "software process assessment" is a qualitative study of development and maintenance practices. Several varieties of assessment are performed, although they share a nucleus of common features.
Assessments performed using the approach of Software Productivity Research (SPR) compare organizations against development and maintenance practices noted by other companies within the same industry, and attempt to determine whether the client's practices are equivalent, better, or worse than normal industry practices. Results are statistically aggregated and the client is evaluated using a five-point scale running from "excellent" to "poor."
Assessments performed using the approach of the Software Engineering Institute (SEI) compare organizations against a set of specific criteria and are aimed at determining the "capability maturity level" of the organization on the well-known five-point maturity scale.
Both SPR and SEI express their results using five-point scales but the significance runs in the opposite direction. The SPR scale was first published in 1986 in Capers Jones' book Programming Productivity (Jones 1986) and hence is several years older than the SEI scale:
SPR Excellence Scale    Meaning                       Frequency of Occurrence
1 = Excellent           State of the art               2.0%
2 = Good                Superior to most companies    18.0%
3 = Average             Normal in most factors        56.0%
4 = Poor                Deficient in some factors     20.0%
5 = Very Poor           Deficient in most factors      4.0%
The SEI maturity level scale was first published by Watts Humphrey in 1989 in his well-known book Managing the Software Process (Humphrey 1989):
SEI Maturity Level    Meaning               Frequency of Occurrence
1 = Initial           Chaotic               75.0%
2 = Repeatable        Marginal              15.0%
3 = Defined           Adequate               8.0%
4 = Managed           Good to excellent      1.5%
5 = Optimizing        State of the art       0.5%
Simply inverting the SPR excellence scale or the SEI maturity scale is not sufficient to convert scores from one scale to the other. This is because the SEI scale expresses its results in absolute form, while the SPR scale expresses its results in relative form. Large collections of SPR data from an industry typically approximate a bell-shaped curve, while a collection of SEI capability maturity data is skewed toward the Initial or chaotic end of the spectrum. However, it is possible to convert data from the SPR scale to the equivalent SEI scale by a combination of inversion and compression of the SPR results.
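As a purely illustrative sketch of that inversion-and-compression idea, the small Python fragment below shows one hypothetical mapping; the cutoff values are assumptions chosen for illustration, not SPR's published conversion rules.

def spr_to_sei(spr_score):
    # Map an SPR excellence score (1 = excellent ... 5 = very poor) onto an
    # approximate SEI CMM level (1 = initial ... 5 = optimizing).
    # The cutoffs below are hypothetical; they only illustrate inverting the
    # scale direction and compressing most of the SPR bell curve into Level 1.
    if spr_score <= 1.25:      # roughly the top 2% ("excellent")
        return 5
    elif spr_score <= 1.75:
        return 4
    elif spr_score <= 2.25:    # "good" organizations straddle Levels 2 and 3
        return 3
    elif spr_score <= 2.75:
        return 2
    else:                      # "average" and below compress into SEI Level 1
        return 1

print(spr_to_sei(3.0))   # an average SPR score corresponds to SEI Level 1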
The term "software baseline" refers to a collection of quantitative data on productivity, quality, schedules, costs, or other tangible information gathered during a specific time period. The purpose of the software baseline is to provide a tangible starting point for software process improvement work.
A "softwarebenchmark" refers to a collection of quantitative data on productivity, quality, schedules, costs or other tangible information which is then compared against similar data points from competitors or from other companies within the same industry.
There is little or no difference in the actual data collected for either a baseline or a benchmark analysis. The only difference is that the baseline is used for comparing rates of progress against your own initial conditions, while the benchmark is used for comparing your performance against similar organizations. Indeed, the same data can be used for both baseline and benchmark purposes.
The phrase "defect potential" refers to the sum total of bugs or errors that are likely to be found in five major artifacts: requirements, design, source code, user documentation, and a special category called "bad fixes" or the number of defect repairs which themselves contain an error.
The phrase "defect removal efficiency" refers to the percentage of defects actually eliminated prior to delivering the software to clients. Current U.S. averages are about 85% so in this case the client is approximately average.
Measurement of defect removal efficiency is a characteristic of every "best in class" company. Defect removal efficiency is one of the most powerful software metrics, since high values of > 95% are critical for achieving other key attributes such as schedule reductions, effective software reuse programs, and excellence in terms of customer satisfaction.
The phrase "delivered defects" refers to the numbers of latent defects still present in a software application when it is delivered to clients. The value for this topic is derived by subtracting the defect removal efficiency level from the defect potential.
The phrase "development productivity" refers to the software staff effort expended during a full development cycle which starts with requirements and commences with delivery to clients. All major activities are included; i.e. project management, software engineering, technical writing, testing, reviews and inspections, quality assurance, data base administration, etc.
Development productivity is expressed using the metric "function points per staff month" which is abbreviated in the table to "FP/Month." Function point metrics have substantially replaced the older lines of code metric in software benchmark and baseline studies. The specific form of function point used in this report is version 4 of the counting rules published by the International Function Point Users Group (IFPUG 1995).
Note that the reciprocal metric "work hours per function point" is also used in baseline and benchmark studies, but this form is not shown in the current report simply to conserve space. The two formats are mathematically equivalent and which method is used in a specific report is a matter of preference.
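As a rough illustration of the conversion (and assuming a nominal 132 work hours per staff month, which is an assumption here rather than a figure from this baseline), a development rate of 7 function points per staff month corresponds to about 132 / 7, or roughly 19 work hours per function point.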
The phrase "maintenance productivity" refers to the effort expended after the initial release of a software application. The work includes both enhancements (adding new features) and maintenance (defect repairs).
Maintenance productivity is expressed using the metric "function points per staff month" which is abbreviated in the table to "FP/Month." Another interesting maintenance metric using function points is that of the "maintenance assignment scope" or the amount of software one person can support in the course of a year. This metric is not illustrated in the current report but averages about 750 function points per maintenance programmer in the telecommunications industry. (A full baseline report can exceed 100 pages of information expressed using quite a few metrics. This overview report only discusses a few highlights to explain the essential features of a baseline study.)
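As a rough illustration of how the assignment scope metric scales, if the case study lab's 500 maintenance personnel each supported the industry average of about 750 function points, the lab would be maintaining a legacy portfolio on the order of 500 x 750 = 375,000 function points.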
The term "schedule" refers to the elapsed time in calendar months from the start of requirements until delivery to customers. Because schedules vary with the size of the application, the table illustrates the typical average schedule for applications which are nominally 1000 function points in size.
Note that within the telecommunications software community, the term "interval" is often used as a substitute for "schedule." The reason for this is that AT&T used the word "interval" to mean development schedule prior to its break-up into many other companies, so the word continues to be used very widely among companies that were once part of the AT&T system such as Lucent or Bellcore.
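(As a rough rule of thumb from the function point literature, and an assumption here rather than part of this baseline, raising the size in function points to approximately the 0.4 power yields an estimate of the schedule in calendar months; for a 1000 function point application this gives about 16 months, consistent with the 15 to 21 month range shown in table 1.)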
Quantitative Baseline and Benchmark Results

The client baseline results are shown in the top row of table 1. The two lower rows illustrate industry benchmarks from within the telecommunications industry: one for industry averages, and one for "best in class" results which are derived from the top 5% of projects within other telecommunications manufacturing enterprises.
The comparison of a client's data against both industry averages and "best in class" results is a common practice for baseline and benchmark studies carried out by consulting groups such as Software Productivity Research (SPR) or some of the other specialized benchmark and baseline organizations such as Gartner Group, Meta Group, Giga Group, Rubin Systems, the International Function Point Users Group (IFPUG), or the Australian Software Metrics Association.
Incidentally, all of the benchmark groups cited now use function point metrics as their primary reporting tool. It is possible to express productivity and quality data using the older "lines of code" metric but this metric is not as reliable for large-scale studies which include projects created in multiple programming languages such as COBOL, FORTRAN, C, SMALLTALK, C++, Eiffel, Visual Basic, etc.
Table 1 illustrates a sample of the basic quantitative results in high-level form:
Table 1: 1997 Client Baseline Quality and Productivity Compared to Similar Groups
Baseline              Defect      Defect       Delivered   Develop.       Maint.         Schedule
                      Potential   Removal      Defects     Productivity   Productivity   Months
                      per FP      Efficiency   per FP      (FP/Month)     (FP/Month)     (1000 FP)

Client Results        5.00        85.00%       0.750        7.00           9.00          18.00
Industry Average      5.25        83.00%       0.893        6.50           9.00          21.00
Best in Class         2.75        97.00%       0.083       10.50          13.00          15.00
For table 1, make the assumption that a total of 25 software projects from the client organization were measured, and then compared against similar projects from other companies within the telecommunications industry. A set of 20 to 50 projects is the normal sample size for Software Productivity Research baseline studies although for very large multi-national corporations a suitable sample may exceed 100 software projects.
(Note that since the lab in this case study is stated to be one of 20 labs in the same corporation, it often happens that internal benchmark data is collected from all locations so that each lab can be compared against the others. In this situation, as many as four or five hundred projects can be collected for the same benchmark study.)
As can be seen from table 1, the client organization is close to the industry average in overall results, and indeed slightly better than industry norms in terms of quality control and development productivity. However, the "best in class" results are significantly better than the client's results. This kind of gap between a client's results and "best in class" records within the same industry is the most common impetus for starting a process improvement program.
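Expressed as ratios from table 1, the gaps are substantial: the best in class organizations deliver roughly one ninth as many defects per function point as the client (0.083 versus 0.750), achieve about 50% higher development productivity (10.50 versus 7.00 FP per staff month) and about 44% higher maintenance productivity (13.00 versus 9.00), and complete a nominal 1000 function point project three months faster (15 versus 18 calendar months).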
Qualitative Assessment Data

In the course of gathering the quantitative data, the consulting organization also gathered a significant volume of qualitative data. The qualitative data is often aimed at ascertaining the "maturity level" of the organization using the well-known capability maturity model pioneered by Watts Humphrey and published more recently by the Software Engineering Institute (Paulk et al, 1995).
In this case study, assume that the client is currently a fairly sophisticated Level 1 group under the SEI capability maturity model (CMM) concept. That is, the organization is still in Level 1 but on the cusp of Level 2, rather than being well back in the set of Level 1 organizations.
Incidentally, the quality levels shown in the case study correlate exactly with the Level 1 quality data shown in a prior report on Becoming Best in Class (Jones 1998). Table 2 shows the current quality results associated with the five levels of the CMM:
Table 2: Software Defect Potentials and Defect Removal Efficiency Targets Associated With Each Level of the SEI CMM
(Data Expressed in Terms of Defects per Function Point)
SEI CMM Level    Defect Potentials    Removal Efficiency    Delivered Defects
SEI CMM 1              5.00                  85%                  0.75
SEI CMM 2              4.00                  90%                  0.40
SEI CMM 3              3.00                  95%                  0.15
SEI CMM 4              2.00                  97%                  0.06
SEI CMM 5              1.00                  99%                  0.01
These results are somewhat hypothetical at the highest levels, but from observations of organizations at various CMM levels, they seem to be within the range of current technologies for Levels 2 through 4.
The Level 5 defect potential target, or lowering defect potentials to 1.00 per function point, is the hardest target of the set, and would require a significant volume of certified reusable material of approximately zero-defect quality.
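The delivered-defect column in table 2 follows directly from the other two columns. A minimal sketch of the arithmetic, using the table's own values:

# Defect potentials (defects per function point) and removal efficiencies
# taken from table 2; delivered defects = potential * (1 - efficiency).
cmm_levels = {
    1: (5.00, 0.85),
    2: (4.00, 0.90),
    3: (3.00, 0.95),
    4: (2.00, 0.97),
    5: (1.00, 0.99),
}

for level, (potential, efficiency) in cmm_levels.items():
    delivered = potential * (1.0 - efficiency)
    print("SEI CMM %d: %.2f delivered defects per function point" % (level, delivered))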
Moving up the SEI CMM scale brings to mind learning to play golf. Beginning players can take several years before they are able to break 100, or shoot a round of golf in fewer than 100 strokes. In fact, most golfers never do break 100. This is somewhat equivalent to the observation that many companies initially assessed as being at Level 1 tend to stay at that level for a very long time. Indeed, some may never advance to the higher levels.
However, golfers that do manage to break 100 can usually break 90 in less than a year. This parallels the observation that moving up the SEI CMM ladder is quicker once the initial hurdle of moving from Level 1 to Level 2 is accomplished.
In general, the rate of progress is relative to an organization's status at its first assessment. Organizations that are well back in the Level 1 CMM zone have trouble getting started. Organizations that are fairly close to Level 2 status are usually more flexible and can accelerate their process improvement work.
Strength and Weakness Analysis

The qualitative assessment results produced by Software Productivity Research include "strength" and "weakness" reports which discuss specific methodologies, development processes, tools, etc. in the context of whether the client organization is better or worse than typical patterns noted within the same industry. In the case study, the following pattern of strengths and weaknesses was presented to the client:
Client Strengths (Better than Average Performance)
Staff experience with application types
Staff experience with in-house development processes
Staff experience with development tools
Staff experience with programming language(s)
Staff specialization: testing
Staff specialization: quality assurance
Staff specialization: maintenance
Requirements analysis (Quality Function Deployment)
Change control methods
Design methods
Development process rigor
Customer support tools
Customer support methods
Maintenance release testing
Development testing
Since both the client company and the telecommunications industry have been building software applications for more than 35 years, they know quite a lot about standard development practices. For example, the client had long recognized that change control was a key technology, and had stepped up to fully automated change management tools augmented by a change control board for major projects.
However, many companies tend toward weakness in the project management domain. Indeed, project management failures are often much more common and also more serious than development technology failures as causes of missed schedules or outright cancellations of software projects.
Another very common weakness is failure to use formal design and code inspections prior to commencing testing. This is considered to be a weakness because inspections are about twice as efficient as most forms of testing in finding errors or bugs. Formal design and code inspections can each average more than 60% in terms of defect removal efficiency, while most forms of testing are less than 30% efficient. Further, not only do inspections have an excellent record in terms of defect removal, but they also are synergistic with testing and raise the efficiency of standard test stages such as new function test, regression test, and system test.
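To see why inspections ahead of testing matter so much, consider a simple serial-removal model in which each stage removes a fixed fraction of the defects that survived the previous stages. The sketch below uses illustrative stage efficiencies consistent with the figures mentioned above (about 60% per inspection, under 30% per test stage); they are not measurements from this client.

def cumulative_removal(stage_efficiencies):
    # Overall removal efficiency when stages run in sequence and each stage
    # removes a fixed fraction of the defects surviving the earlier stages.
    surviving = 1.0
    for efficiency in stage_efficiencies:
        surviving *= (1.0 - efficiency)
    return 1.0 - surviving

# Testing alone: new function, regression, and system test at roughly 30% each.
print(cumulative_removal([0.30, 0.30, 0.30]))              # about 0.66

# Design and code inspections (about 60% each) added ahead of the same tests.
print(cumulative_removal([0.60, 0.60, 0.30, 0.30, 0.30]))  # about 0.95

If anything, this simple model understates the benefit, since inspections also raise the efficiency of the test stages that follow them.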
Another common weakness is in the software maintenance domain. Although software development is often well funded and supplied with state of the art tools, maintenance is not as "glamorous" as development and hence may lag. In the case of the client, none of the more powerful maintenance support tools were deployed; i.e. complexity analysis tools, code restructuring tools, reverse engineering tools, reengineering tools, etc.
Yet another common weakness or gap in software development is failure to move toward a full software reusability program. An effective software reuse program entails much more than just source code reuse, and will include many other artifacts such as design specifications, test materials, and also user documents and even project plans.
Although the client had a good track record for standard development practices and methods, there were some significant weaknesses visible in terms of project management, pre-test defect removal, and software reuse:
Client Weaknesses (Worse than Average Performance)
Project management: annual training in state of the art methods
Project management: cost estimating
Project management: quality estimating
Project management: risk analysis
Project management: schedule planning
Project management: lack of productivity measurements
Project management: partial quality metrics
Project management: lack of productivity metrics
Project management: incomplete milestone tracking
Quality control: no use of formal design inspections
Quality control: no use of formal code inspections
Maintenance: no use of complexity analysis
Maintenance: no use of code restructuring tools
Maintenance: inconsistent use of defect tracking tools
Maintenance: no use of inspections on enhancements
No reuse program: requirements
No reuse program: design
No reuse program: source code
No reuse program: test materials
No reuse program: documentation
No reuse program: project plans
As can be seen from the patterns of strengths and weaknesses, the client organization was pretty solid in basic development practices but somewhat behind the state of the art in terms of project management and advanced quality control methods, although the use of quality function deployment is certainly innovative. The management and quality problems, in turn, made software reuse questionable because reuse is only cost effective for artifacts which approach zero-defect levels.
A final weakness is failure to provide enough training for managers and technical personnel. A range of 10 to 15 days per year is usually an optimum amount for teaching new skills and for refreshing current skills. See the interesting work by Bill Curtis et al on the People Capability Maturity Model (Curtis et al 1995) for additional insights.
The combination of the assessment and baseline study took about two months to perform from the day that the contract was signed until the final report was presented to the lab director and his principal management team.
Incidentally, once the data has been presented to client executives and management, the next step is to present the findings to the entire development and maintenance community. The reason for doing this is to pave the way for a subsequent process improvement program. Unless the specific strength and weakness patterns are clearly stated and understood by all concerned, it is hard to gain support for process improvement activities.
It is always best to explain the assessment, baseline, and benchmark results to the entire software community. Unless this is done, social resistance to proposed software process improvements is likely. In general, people are not likely to respond well to change unless they know the reason for it. If a company is well behind the state of the art and competitors are visibly better, that is usually a strong motivation to experiment with software process improvements.
Beginning a Software Process Improvement Program

Assume that the assessment and baseline study began in September and the final report was delivered in October of 1997. After that, the client took two months, in conjunction with the consulting group, to develop a three-year process improvement program. The process improvement program was scheduled to kick off in January of 1998 and had the following targets:
Three-Year Software Process Improvement Targets
Set aside 12 days a year for training in software management topics
Set aside 10 days a year for training in software process improvement topics
Establish a local software "center for excellence" to pursue continuous improvements
Budget $10,000 per capita for improved tools and training
Achieve Level 3 status on the SEI CMM maturity scale
No more than 5% difference between estimated schedules and real delivery dates
No more than 5% difference between estimated costs and actual costs
Raise defect removal efficiency above 97% as the corporate average
Reduce defect potentials below 3.0 per function point as the corporate average
Reduce development schedules or intervals by 50% from requirements until delivery
Raise development productivity rates by more than 50%
Reduce development costs by more than 40%
Reduce maintenance costs by 50% for first two years of deployment
Achieve more than 50% reusability by volume for design, code, and test artifacts
Establish an in-house measurement department
Publish monthly reports on software quality and defect removal
Publish overall results in an annual "state of the art" report for group executives
Although these are fairly aggressive improvement targets, they are all achievable within about a 36-month time span for companies that are not so bureaucratic that change is essentially impossible.
However, making significant process improvements does not come for free. It is necessary to set aside time for training, and also to budget fairly large amounts for improved tools, training, and related infrastructure.
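The scale of that investment is easy to see from the targets above: at $10,000 per capita for improved tools and training, a lab with 1,000 software personnel is committing on the order of $10 million, in addition to the salary cost of the training days set aside each year.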
Removing the Mystery of Software Process Improvement

Although there are thousands of tools and hundreds of vendors making claims about improving software productivity by vast amounts, most of these claims have no solid empirical data behind them.
The basic approach to improving software productivity is actually fairly simple: Find out what your major cost and schedule obstacles are and then strive to eliminate them. For most large software companies and large software projects, the rank order of software schedule and cost drivers is the same:
Defect removal is the most expensive and time consuming activity
Producing documents is the second most expensive and time consuming activity
Meetings and communications are the third most expensive and time consuming activity
Coding is the fourth most expensive and time consuming activity
Project management is the fifth most expensive and time consuming activity
Thus it is obvious that if you want to improve productivity and cut your development schedules down to something shorter than today's, you have to concentrate your energies on the top-ranked cost and schedule drivers; i.e. 1) improve quality; 2) control paperwork; 3) improve communications.
For example, the distribution of software development expenses in the case study enterprise would approximate the following pattern shown in table 3:
Table 3: Development Expense Pattern for the Client Case Study
Activity               Percent of Development Cost
Defect removal                 30%
Paperwork                      22%
Meetings                       19%
Coding                         17%
Project management             12%
The only thing unusual about this pattern is that older measurements based on the "lines of code" metric, rather than function points, would not usually reveal the high costs associated with quality control, paperwork, meetings, and project management. Since these activities cannot easily be measured using lines of code metrics, there is no clear understanding of the relative percentages associated with non-coding work. Thus for many years the costs associated with non-coding work have been understated in the software literature.
This gap in the metrics literature caused many companies to focus only on coding productivity, and to essentially ignore the higher costs associated with quality control, paperwork, project management and other non-coding activities.
For a company with this kind of expense pattern, it is obvious that software defects have to be both prevented and removed somewhat better than today. It is also obvious that paperwork costs have to be controlled, and this leads directly to the concept of reusing portions of specifications, plans, user manuals, and other paper-based documents.
However, reusing artifacts with a lot of bugs or defects is counterproductive. Therefore the sequence of effective process improvement requires that quality control come before reusability. The normal sequence of process improvement is more or less the same for all companies:
Begin with an assessment and baseline
Start by upgrading management skills in planning and estimating
Start a measurement program to track progress
Improve software defect removal via formal inspections
Improve defect prevention via better requirements and design
Improve the maintenance process via complexity analysis and restructuring
Improve the conventional development process
Stock a library with high-quality reusable artifacts
Expand the volume of reusable materials
There are minor variations in this sequence, but management improvements and quality improvements are definitely the top priorities for the first year. Incidentally, from analyses of many organizations, the general rates of software process improvement are now starting to be known:
Software quality can improve at 15% to as much as 40% per year.
Software development productivity can improve at rates of 5% to 20% per year.
Software maintenance productivity can improve at rates of 10% to 40% per year.
Schedules can be shortened at rates of 5% to 15% per year.
Reuse volumes can grow from < 20% to > 75% over about a five-year period.
These rates of improvement can occur for four or five years in a row before they reach "terminal velocities" at which point improvements taper down and the results become stable again.
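These annual rates compound. For example, development productivity gains of 15% per year sustained over three years yield a factor of about 1.15 x 1.15 x 1.15 = 1.52, or roughly a 52% cumulative improvement, which is consistent with the three-year target of raising development productivity by more than 50%.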
Because of the need to climb a fairly steep learning curve, the first year results are usually at the low end of the improvement range while the second and third years are usually nearer to the peak of the improvement range.
Incidentally, continuous improvement is not likely to occur unless there is constant pressure from top management and continuous focus on software process and tool issues. Since software personnel who are fully occupied on normal projects lack the free time for this purpose, the normal situation is to establish a "center for excellence" whose main task is software process improvement.
For the case study organization, whose total software employment is nominally about 1,000, a local center for excellence would be about 10 people. However, at the corporate level, for organizations with perhaps 20,000 software personnel, the overall number of personnel devoted to software process improvement would be in the range of 200, and they would probably be concentrated in one or two larger facilities rather than being distributed in small groups among every location.
For a discussion of the creation and evolution of the well-known ITT Programming Technology Center which served as the model for many software centers for excellence, refer to the author's "A Ten Year Retrospective of the ITT Programming Technology Center" (Jones 1988).
The First Year Goals and Accomplishments: 1998

In the first year of the process improvement program, the basic goals are to attack the most significant problems identified by the assessment and baseline studies. For this case study there were two pressing needs identified: 1) the need for better quality by means of formal inspections; 2) the need for better project management tools and methods, and especially for better planning and estimating methods.
It is also obvious that if you want to prove to higher management that your process is actually improving, you need to measure your quality and productivity results. The reason for this is to make a convincing case that things are getting better and that you are not just spending money without achieving any results. Thus measurements must start at the beginning of the process improvement program. Indeed, you need an initial baseline of your pre-improvement status in order to validate that you are truly improving.
Table 4 shows the pattern of results that might be experienced in the first year of a successful process improvement program. Table 4 expresses the current situation in terms of percentages so all of the columns begin at 100%, except for the last column. The last column reflects the average volume of reusable material and at the time of the baseline analysis, only 10% of any given artifact, on average, was reused.
In the course of the first year, the specific targets set by the lab manager were to improve defect removal efficiency by 40% compared to the baseline, reduce defect potentials by 10%, increase development productivity by 10%, improve maintenance productivity by 25%, and increase the volume of reusable material up to 25% by volume.
Note that because of the need to take time out from work for classes, and to learn to feel comfortable with some of the new approaches such as design and code inspections, productivity will decline for about the first four to six months when first starting out on a process improvement program.
The initial reduction in productivity is due to the steep learning curve associated with adopting these new approaches.