The first thing to take into account is Barry Boehm's Cone of Uncertainty. He demonstrated that estimates converge to roughly +-10% deviation by the end of design (the phase where things are clearest), and that this holds for any type of project. I have long worked, including today, in organizations where my personal annual performance has a component closely tied to the deviation of initiatives. With that said, the key reason for hiring a project manager is managing the inherent risk in each initiative, which determines whether you stay within the accepted +-10% deviation. That means you have to raise your hand with enough time to keep that deviation under control. If, at some point in the project, you detect problems with cost, time, resources, quality, or scope, push to create a change request, the change request is accepted, and the project is rescheduled, then there is nothing to hold against you. But that point in time must come while things are still risks, not after they have become issues.
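The +-10% tolerance check described above amounts to a simple calculation. A minimal sketch, with invented numbers (the baseline and actuals are purely illustrative):

```python
# Check whether a project's deviation is inside the accepted +/-10% band.
# Figures are made up for the example.
baseline_cost = 1_000_000
actual_cost = 1_060_000

deviation = (actual_cost - baseline_cost) / baseline_cost  # 0.06, i.e. 6% over
within_tolerance = abs(deviation) <= 0.10

# 6% over is still inside the +/-10% band, so no escalation is forced yet -
# but the point of the post above is to raise your hand well before the
# deviation trend makes the band unreachable.
```

The same check applies to schedule: replace cost with planned vs forecast duration.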
I would expect that part of the change proposal / approval process is reviewing any change to schedule. And since the scope of the project changed, I would expect that "on time" would be defined according to the updated schedule after the change approval process.
(I am also implying that part of your change approval process would be an assessment of impact to schedule by either whoever proposes the change or a response by the project team of how schedule would be affected by the change, depending on how your organization is structured and who is making the change.)
In the case of an approved change, re-baselining the schedule is generally acceptable. (Although deviations in tasks that are not affected by the change would still be considered a variance and might not be re-baselined.)
So yes - on-time delivery would be measured against the updated baseline schedule (probably excluding any tasks which were not affected by the change but went off-course for some other reason).
p.s. depending on your organization, you may or may not want to "accept" variances which have already occurred at the time of re-baselining. This depends on the nature of your organization and who is asking the question.
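The measurement rule described above - changed tasks judged against the re-baselined dates, untouched tasks still held to the original baseline - can be sketched roughly like this. All task names and dates are invented for illustration:

```python
from datetime import date

# Hypothetical task records:
# (name, original_due, rebaselined_due, actual_finish, affected_by_change)
tasks = [
    ("design", date(2024, 3, 1), date(2024, 3, 1), date(2024, 3, 5), False),
    ("build",  date(2024, 5, 1), date(2024, 6, 15), date(2024, 6, 10), True),
    ("test",   date(2024, 6, 1), date(2024, 7, 15), date(2024, 7, 20), True),
]

def on_time(name, original_due, rebaselined_due, actual, affected):
    # Tasks touched by the approved change are judged against the new
    # baseline; unaffected tasks are still held to the original date.
    due = rebaselined_due if affected else original_due
    return actual <= due

results = {t[0]: on_time(*t) for t in tasks}
# "design" slipped for reasons unrelated to the change, so it counts as late
# even after re-baselining; "build" met its re-baselined date; "test" missed
# even the new date.
```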
It depends on why you are resetting the baseline.
If the customer changed the scope of the project, then it is outside the control of the project team, and you should be measured against the new baseline.
If the project team discovered that they missed a significant part of the original scope during planning, and the baseline must be revised to include a new statement of work due to the late discovery, the project team is at fault.
Unfortunately, on-time metrics in these situations can be complicated and don't tell the entire story. If you base performance on factors outside of someone's control, they tend to "game" the metrics. They can "complete" tasks on time with low quality so the delay doesn't count against their metrics. Then they must revise the "completed" deliverable, and that rework does not count against their on-time performance.
I've seen this play out from team to team: Team A is always on time but usually requires significant rework, while Team B is occasionally late but almost no second effort is required. Which do you reward?
An approved scope change includes a recognition and acceptance of the impact of the scope change. That includes the impact on cost and schedule. The evaluation of contract performance has to take into consideration the approved changes.
However, care must be taken not to include variances that have nothing to do with the change. Too often the 're-baselining' incorporates slippage. Change orders are sometimes initiated to cover for poor performance by identifying design evolution as changes in scope.
As a rule, no. The basis for approval of a capital investment project normally includes a fixed, risk-adjusted, no-later-than date for Delivery. Subsequent project changes - no matter how justified - don't turn back the clock on the investment decision. This kind of metric, of course, is most useful for informing future investment decisions, not for judging the performance of a particular project team.
Typically, project objectives and constraints are based on assumptions made at the time of the decision to proceed. These assumptions result from partial information and some guesswork, and they carry a certain amount of risk. As the project evolves, knowledge improves, assumptions are validated or not, and the project has to be re-evaluated. This re-evaluation should be scheduled, usually after completion of a specific phase (waterfall model). If the necessary adjustments fall within the risk allowance, then the project may continue into the next phase. If not, one has to revert to the initial business case and, in discussion with the significant stakeholders, re-commit or, in some cases, abandon the project. This is more than a scope change. A re-commitment could mean a revised project charter with a new baseline, budget, and delivery adjustment - essentially a 'new project'.
Too often the project objectives and constraints are considered to be cut in stone with no opportunity to re-evaluate and re-commit (or abandon). This can result in ultimate project failure in all measurements of success - cost, time and scope.
In response to the original question:
"should the actual completion date be measured against the rebaselined date (rather than original schedule date) for purposes of reporting on time delivery?"
The answer is that the completion date should be measured against the original schedule, including the original risk allowance. The same applies to project costs. One cannot separate the initial constraint assumptions from the risk associated with those assumptions.
If the required changes to the original project exceed the risk allowances and the project is changed after re-evaluation (becoming a 'new project') then the completion date should be measured against the 'new project' schedule including the 'new project' risk allowance.
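The rule above - original schedule plus its risk allowance, unless the project has been re-committed as a 'new project', in which case the new schedule plus its own allowance - can be sketched as follows. Function and parameter names are illustrative, not any standard API:

```python
from datetime import date, timedelta

def measurement_baseline(original_finish, risk_allowance_days,
                         recommitted_finish=None, new_risk_allowance_days=0):
    """Return the no-later-than date completion should be judged against.

    If the project was re-committed after re-evaluation (changes exceeded
    the original risk allowance), use the 'new project' schedule plus its
    own allowance; otherwise use the original schedule plus its allowance.
    """
    if recommitted_finish is not None:
        return recommitted_finish + timedelta(days=new_risk_allowance_days)
    return original_finish + timedelta(days=risk_allowance_days)

# Original plan: finish 1 June with a 30-day risk allowance.
deadline = measurement_baseline(date(2024, 6, 1), 30)
# deadline is 1 July: finishing before then is "on time" under this rule.
```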
This concept of "off-ramps" or "re-commitments" should be built in, especially in long complex projects that are exposed to changing conditions both in terms of implementation and suitability of the delivered product.
It depends on just why your organisation wants to measure the on-time completion metric. The problem with measuring completion against an unadjusted baseline is that you may be measuring nothing more useful than someone's ability to make accurate *forecasts*. An adjusted baseline is almost by definition going to be a more accurate, better informed one than an earlier baseline so why not use the latest baseline you have?
There may be four reasons for 'missing' the delivery date:
1) an initial schedule that was wrong,
2) scope changes that added execution time,
3) external forces unknown at the time of the initial schedule, and
4) ineffective execution.
A fifth may be a combination of the above.
A gap analysis comparing the initial schedule to the delivered schedule may help determine the cause of the late completion and assist with later initiatives (lessons learned).
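One way to picture that gap analysis is to attribute every day of slip to one of the four causes above. The buckets and numbers below are entirely invented, just to show the shape of the exercise:

```python
# Illustrative gap analysis: attribute the total slip to the causes above.
planned_days = 100
actual_days = 130
slip = actual_days - planned_days  # 30 days late overall

attribution = {
    "estimate error":   5,   # initial schedule was wrong
    "scope changes":   15,   # approved changes added work
    "external forces":  4,   # e.g. a supplier delay
    "poor execution":   6,   # ineffective execution
}

# Sanity check for the lessons-learned write-up: every late day is
# explained exactly once, with no slippage quietly unaccounted for.
assert sum(attribution.values()) == slip
```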
That being said, the real benefit of a schedule is as a tool during project implementation to identify possible weaknesses and develop recovery plans should they become necessary. If you use the actuals as the baseline (constant updates), then the problems (causes) may never be recognized.
Past implementation, the intermediate and end deliverables need to remain synchronized. If you have an integrated plan where multiple items are developed in parallel and must have a given maturity state at the same time, their sub-tier plans must be aligned. Events occur which necessarily shift the timeline of deliverables. The impacts of those changes must be understood and accounted for across the various contributors.
I couldn't imagine trying to work a project where the plan is a year out of date and deliverables show up whether or not they are needed or correct because that's the old plan. Baselines are NOT updated constantly. They are updated judiciously because they form the basis for detailed planning, and a lot of work goes into ensuring they are integrated. If your basis is unsound, it's like trying to construct a building with no foundation. The whole thing falls over.
"...should the actual completion date be measured against the rebaselined date (rather than original schedule date) for purposes of reporting on time delivery?"
My response was an attempt to explain why comparing as-built with anything other than the original schedule may well miss some of the causes of the delays. I also proposed that if the project change was so severe (outside the risk envelope) as to require re-baselining, maybe the project should be subject to re-evaluation. The end user of the project product may not be able to tolerate the proposed changes.
I also noted that the schedule is beneficial during project implementation - implementation to me means from start to completion (final delivery) of the project. I agree that this benefit includes the ability to coordinate tasks and intermediate deliverables.
I am not suggesting that the schedule not be updated to reflect reality; however, I object to these updates being considered the 'baseline' and to these updated schedules being used for project performance measurement.