
The Great Earned Value Versus risk management[i] Showdown

From the Game Theory in Management Blog
Modelling Business Decisions and their Consequences

Regular members of GTIM Nation know of my disdain for risk management (no initial caps) as currently practiced; however, there is no truth to the rumor that I stated that it is a total and complete waste of time, that it only serves to muddy the Management Information System (MIS) waters, or that I have compared its practitioners to members of the genus Mustelidae. I have, though, regularly stated that it fails two of my three criteria for valid management information systems, that they be:

  • Accurate,
  • Timely, and, perhaps most important of all,
  • Relevant.

Being the traditional kind of PM that I am, I think it’s clear that the standard methodologies of Earned Value and Critical Path represent valid systems, while risk management[i] doesn’t. But curmudgeonly blogger talk is cheap: what can we see in legitimate management science space that would convincingly point to the conclusion that risk management[ii] methods are objectively inferior to, say, Earned Value?

To set up this test, let’s first find a common output from each system. Risk management (I only used an initial cap on the word “risk” because it started the sentence) cannot tell you:

  • Cost variance,
  • Schedule variance,
  • Which Control Accounts or Work Packages are doing okay in cost/schedule performance space,
  • …and which are in trouble
  • …or by how much,

…all of which, the alert reader will realize, are highly relevant pieces of information. Conversely, Earned Value Management Systems cannot tell you:

  • Odds of speculated alternatives to the cost baseline actually occurring,
  • …or their estimated impact,
  • The appropriate amount to set aside for a contingency budget,
  • Or the “confidence interval” that the original cost baseline will not be exceeded,

…all of which, the alert reader will realize, are fairly irrelevant, with the possible exception of the third bullet (but even that is highly dependent upon the ability to accurately capture its underlying assumptions’ data, which is itself suspect). So, what relevant piece of information do both types of systems assert an ability to generate?

“I’ll take ‘Variances at Completion’ for $1000, Alex.”

To be sure, this apparently crucial piece of information doesn’t come easily to the risk managers[iii]. In order to provide an accurate prediction of how much a given project will cost when it is complete and how long it will take, and to compare those figures to the original baseline estimates, the following processes and considerations must come into play:

  • Assuming that the project’s original scope baseline was used to create its cost and schedule baselines, and that these latter two have a nominal “confidence interval” of 80%, it then follows that the amount of contingency derived from the risk baseline represents a not-to-exceed limit.
  • Further assuming that the risk analysis was performed at an appropriate level of the Work Breakdown Structure (WBS) – one would hope for the Work Package level, but Control Account-level is perhaps usable – the odds of each of the alternative scenarios to the one described in the WP are multiplied by the estimated impact (both cost and schedule), and summed at that same WBS level.
  • If any of the alternate scenarios occur, the amount of contingency assigned to that alternative is added to the nominal Budget at Completion to derive a new Estimate at Completion (EAC).
  • On the other hand, should a task receiving this type of analysis actually finish without a risk event occurring, the amount of contingency associated with that task is “released,” and the EAC remains unchanged.
  • If an event occurs that impacts the project’s costs but was unexpected by both the PM and the risk managers, it is classified as an “unknown unknown,”[iv] and its quickly-estimated impacts are added to the previous EAC.

Note that these steps are not scalable. The alternative scenarios either happen, or they do not. At that point, the entirety of their estimated impact is the “right” number, or it is not. Multiplying the impact amount by the speculated odds of occurrence only provides useful information in the aggregate. Note also that most of these steps require the time and attention of some fairly proficient analysts.
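For GTIM Nation members who like to see the bookkeeping spelled out, here is a rough sketch in Python of the process just described. The budget figure, the risk register entries, and their odds are all invented for illustration, and the arithmetic reflects just one reasonable reading of the steps above:

# Illustrative sketch of the risk-based EAC bookkeeping described above.
# All figures and risk entries are made up for the example; this is one
# reading of the steps listed, not a definitive implementation.

BAC = 1_000_000          # nominal Budget at Completion (assumed, $)

# Hypothetical risk register at the Work Package level: each entry carries
# the speculated odds and the estimated cost impact of an alternate scenario.
risk_register = [
    {"wp": "WP-010", "odds": 0.20, "impact": 150_000, "status": "open"},
    {"wp": "WP-020", "odds": 0.50, "impact": 40_000,  "status": "occurred"},
    {"wp": "WP-030", "odds": 0.10, "impact": 300_000, "status": "retired"},
]

# Summing (odds x impact) gives the expected-value contingency -- the number
# that is only meaningful in the aggregate.
expected_contingency = sum(r["odds"] * r["impact"] for r in risk_register)

# EAC bookkeeping per the steps above: a scenario either happens (its full
# estimated impact is added to the BAC) or the task finishes clean (its
# contingency is "released" and the EAC is unchanged).
eac = BAC
for r in risk_register:
    if r["status"] == "occurred":
        eac += r["impact"]          # the entirety of the impact, not odds-weighted

unknown_unknowns = 25_000           # quickly-estimated impact of a surprise event
eac += unknown_unknowns

print(f"Expected-value contingency: {expected_contingency:,.0f}")
print(f"Risk-based EAC:             {eac:,.0f}")

The expected-value line is the “only useful in the aggregate” figure; the EAC line is the one that actually moves when a scenario occurs or a surprise event shows up.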

So, how would an Earned Value system provide this same information?

  • Divide the estimated percent complete figure into the cumulative actual costs. The same thing works for duration – divide the percent complete figure into the number of days since the project’s start date.

Note that these steps are perfectly scalable. They will return a figure accurate to within ten points at whatever level of the WBS is being assessed. Note also that they require no special expertise to perform (setting up the original baselines does require some level of competence, but that would have to be done for the risk managers anyway). A grade schooler could do it.
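Here is the same kind of sketch for the Earned Value version, again with made-up numbers; the whole thing reduces to a division:

# Minimal sketch of the Earned Value calculation described above, with
# assumed figures. "Divide the percent complete into the actuals" is just
# a division, and it works the same at any level of the WBS.

actual_cost_to_date = 450_000    # cumulative actual costs ($, assumed)
percent_complete    = 0.40       # estimated percent complete (assumed)
days_since_start    = 120        # calendar days since project start (assumed)

eac_cost     = actual_cost_to_date / percent_complete   # independent Estimate at Completion
eac_duration = days_since_start / percent_complete      # estimated total duration, in days

print(f"Estimate at Completion (cost): {eac_cost:,.0f}")
print(f"Estimate at Completion (days): {eac_duration:,.0f}")

The same two lines work unchanged whether the inputs belong to a single Work Package, a Control Account, or the whole project, which is the scalability point.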

So, I put it to my readers, both members of GTIM Nation and occasional readers: which information stream do you think is superior?

[i] No initial caps.

[ii] Ibid.

[iii] Ibid.

[iv] …which has to be one of the goofiest terms in all of management science.

Posted on: April 12, 2021 10:30 PM
