GTIM Nation is well aware that, in my self-appointed role of defense-against-management-fads guard dog, I have snarled at, barked at, and thrown up on several popular initiatives, mostly contra risk management (no initial caps), misapplication of Generally Accepted Accounting Principles, and those who would eschew Earned Value’s capability to produce an Estimate at Completion (EAC) in favor of using burn rates, or of re-estimating the remaining budget and adding that figure to the cumulative actual costs. Despite my reasonable opposition, unfortunately, these practices have yet to be roundly denounced or, better still, jettisoned from the community of management scientists writ large. So, when a management fad does actually receive the left-behind treatment, I think it’s illustrative to go back and see what all the original fuss was about, how the fad gained traction, and what ultimately led to its demise, particularly if the fad was largely germane to the Project Management world. Such an example is the “Life-Cycle Cost Estimating” craze.
Just so everyone’s clear on what’s being evaluated here, according to acqnotes.com,
A Life-Cycle Cost (LCC) is the total cost of a program from cradle to grave. (also referred to as Total Ownership Cost (TOC)) LCC consists of Research and Development (R&D) Costs, Investment Costs, Operating and Support Costs, and Disposal Costs over the entire life cycle. These costs include not only the direct costs of the acquisition program but also include indirect costs that would be logically attributed to the program. In this way, all costs that are logically attributed to the program are included, regardless of funding source or management control.[i]
My recollection is that Life-Cycle Estimating or Life-Cycle Cost first became a thing in the mid-1990s, and internet searches for papers on the topic show a sudden increase in titles published late in that decade. While perhaps noble in purpose, its stated goal of returning the “total cost of a program from cradle to grave”[ii] is impossible, as a couple of thought experiments will demonstrate.
Consider two (American) football stadiums: Soldier Field in Chicago, home of the Chicago Bears, and the Seattle Kingdome, home (for a time) to the Seattle Seahawks. The Kingdome opened in 1976 and was demolished in 2000; Soldier Field opened in 1924 and remains in use today. I’ll contrast them so[iii]:
To further quantify the differences between these two facilities, compare the following figures:
Soldier Field: $64,935,680
Seattle Kingdome: $15,180,000
…or a delta of $49,755,680, which represents a whopping 62% advantage to Soldier Field, and even that figure gets larger each year Soldier Field stays open. I wonder what the respective Life-Cycle Estimators would have had to say at each project’s kickoff meeting had they been asked about their facility’s Return on Investment.
The second thought experiment I would like to propose is based on the parameters needed to generate a “Life-Cycle Cost.” From the definition quoted above, these include:

- Research and Development (R&D) costs
- Investment costs
- Operating and Support costs
- Disposal costs
- …plus all indirect costs that can be “logically attributed to the program,” over the entire life cycle
Prior to project initiation, none of these parameters can possibly be known, or even estimated to any reasonable degree of precision (“…regardless of funding source or management control”? Puh-leeze). And yet here’s the Life-Cycle Estimating crowd, not only laying claim to an ability to perform such a capture, but actually asserting that such an “analysis” ought to be part of PM strategies going forward.
So, how did Life-Cycle Estimating become popular? I think it’s because of its implied (?) claims to be able to reasonably quantify far into the future, an ability that would automatically enrich any of its practitioners. With such a potentially beneficial capability, sprinkled with officious-sounding jargon, how could it not gain immediate and widespread recognition, if not complete acceptance, within the management science realm?
Did Life-Cycle Estimating end up going away, like other management fads? Sort of. There are still recent papers being published (not to be confused with a recent offering from my esteemed colleague Elizabeth Harrin, who blogged about the life-cycle of the estimate itself), but it seems to me that it doesn’t receive nearly the attention it did a decade or two ago. It may well be that it’s finally dawning on people outside of GTIM Nation that the future cannot be quantified. Not by risk managers, and not by estimators.
It’s why the term “precise estimate” is an oxymoron.
[i] Retrieved from https://acqnotes.com/acqnote/careerfields/life-cycle-cost-estimate on May 9, 2021, 18:38 MDT. This source cites the source of the quote as the Defense Acquisition Guidebook (DAG).
[iii] Wikipedia contributors. (2021, May 7). Kingdome. In Wikipedia, The Free Encyclopedia. Retrieved 01:57, May 10, 2021, from https://en.wikipedia.org/w/index.php?title=Kingdome&oldid=1021985574 and Wikipedia contributors. (2021, May 9). Soldier Field. In Wikipedia, The Free Encyclopedia. Retrieved 01:58, May 10, 2021, from https://en.wikipedia.org/w/index.php?title=Soldier_Field&oldid=1022224914
Avatar: The Last Airbender was an American animated television show that first aired in 2005. In the show’s world, the map is divided among four nations, each associated with a basic element – earth, air, fire, and water. Certain rare individuals within these nations can “bend” their element, an ability that most closely resembles telekinetic manipulation. The protagonist, an airbender named Aang, is a boy who also happens to be the Avatar, or the once-in-a-generation person capable of bending all four elements. His world is in a state of conflict, as the Fire Nation has taken advantage of the Avatar’s apparent absence to launch a campaign to rule all of the other nations. Aang must quickly master all of the other elements and confront Fire Lord Ozai before the Fire Lord realizes those goals. My then-teenaged son got me interested in, and then hooked on, this series, and I’m glad he did. It’s well-written, beautifully drawn, and the voice actors are exceptional (fun fact: Fire Lord Ozai was voiced by Mark Hamill). The opening sequence of each episode synopsized the benders’ world in a monologue voiced by the character Katara, like so:
Water. Earth. Fire. Air.
Meanwhile, Back In The Project Management World…
In the Project Management world of long ago, there were eight nations – four primary, four secondary. The four primary nations were … well, let me put it this way:
Scope. Cost. Schedule. Risk.
Long ago, the four nations lived together in academic harmony. Then, everything changed when the risk managers (no initial caps) asserted supremacy, as evidenced by the following definition of Project Risk Management by toolshero.com:
Project risk management is the process that project managers use to manage potential risks that may affect a project in any way, both positively and negatively.[i]
(Note: toolshero.com isn’t the only place where risk management[ii] is so defined.) Only a PM Avatar, master of all four management aspects, could stop them, and return an overarching structure with proper proportion and perspective to the management world.
The keen observer (i.e., the entire population of GTIM Nation) will note that the toolshero.com definition excludes absolutely nothing in the management realm – hence the analogy. The risk managers were attempting (I think they’re still at it) to assert that all of management fell under their purview, similar to the obviously flawed notion that the point of all management is to “maximize shareholder wealth,” the banner under which our friends the Asset Managers previously attempted to take over the management world.
So, what is it that makes the one who has mastered the PM basics almost magically powerful in comparison to the other management-style benders? I think it has to do with the ossified codex of business rules with which the non-PM types have been burdened by their business school professors. The rules that these managers carry around with them, like Jacob Marley’s ghost’s chains, include:
Conversely, Project Benders are fully aware that:
Unlike their Last Airbender counterparts, Project Benders don’t employ the flowing, martial arts-inspired gestures when they perform their magic (at least not the ones I’ve seen). But make no mistake: as PM-centric techniques and theories continue to gain ground in the realm of Management Science, much of what passes for legitimate practices will be challenged, proven to be sub-optimal, and eventually overcome.
I’m just hoping that, when the animated version of this story comes out, my character will be voiced by Mark Hamill.
[i] Retrieved from https://www.toolshero.com/project-management/project-risk-management/ on May 2, 2021, 15:40 MDT.
[ii] No initial caps.
When an author attempts to advance a theory under the auspices of Management Science using obvious sophistry, I immediately know two things:
Of course, many instances of sophistry are difficult to detect, at least right away. Not so the case with the straw man argument, no siree. This is where the theoretical position opposite the assertion being advanced is set up, but not accurately, then criticized to oblivion. And this is exactly the tack being taken with much of the advancement of earned schedule (like “risk management,” I refuse to use initial caps for this term).
What is “earned schedule?” Well, from the self-proclaimed “official site for Earned Schedule,” we see this:
Earned Value Management (EVM) is a wonderful management system, integrating in a very intriguing way, cost …schedule …and technical performance. It is a system, however, that causes difficulty to those just being introduced to its concepts. EVM measures schedule performance not in units of time, but rather in cost, i.e. dollars. After overcoming this mental obstacle, we later discover another quirk of EVM: at the completion of a project which is behind schedule, Schedule Variance (SV) is equal to zero, and the Schedule Performance Index (SPI) equals unity. We know the project completed late, yet the indicator values say the project has …perfect schedule performance!![i]
In just these five sentences we can begin to see the vacuousness of its underpinnings. Quick question: why does the author feel the need to refer to EVM as “wonderful” and “intriguing?” Also in the first sentence, the author doesn’t understand how to use ellipses, twice interjecting them for commas, and don’t get me started on multiple exclamation points. Does that last objection sound like nit-picking? Maybe so. I just have a hard time taking advice on how to better manage hundreds of thousands of dollars’ worth of projects (if not millions) from a person who hasn’t mastered eighth-grade English.
The third sentence points out that Earned Value Management (EVM) measures schedule performance in units of cost. Umm, yeah, that’s what EVM does. If you want to measure schedule performance in terms of time, which earned schedule claims to do, then the traditionalists would turn to – Critical Path! That’s what CPM does. In another document, generated by a certain guidance-document-generating organization that I refuse to name, the example given for earned schedule involves recasting plans to meet friends for dinner in PM terms. The story problem stipulates that, at your current rate of performance, you will be late to a dinner with friends, and goes on to mock the idea that you would contact them to announce that you will be X “dollars late.”
But this is a straw man argument, and, therefore, invalid. When an EVM system returns that one will be $X late, it doesn’t mean that time equals money (though some would argue the contrary). It simply means that, if you want to arrive on-time to your dinner date, you should be prepared to spend $X over and above what you had originally budgeted to make that happen. Variances at Completion expressed in units of time are not properly derived from an EVMS – again, they come from Critical Path Methodology systems. It is, in fact, their raison d’etre. And yet the earned schedule crowd seems to assert that this new technique has bridged some previously-unsolvable management information gap. It’s simply not so.
Another straw man argument from the excerpt above challenging EVM and its schedule performance metric has to do with the fact that the Schedule Performance Index (SPI, or the cumulative Earned Value divided by the cumulative time-phased budget, also known as Planned Value), in a “quirk,” moves toward 1.00 as the project comes to completion, regardless of whether or not it finishes on time. Again, this is an irrelevant observation. The SPI simply measures progress against the baseline; of course it’s going to close in on 1.00 as the project nears completion. There’s no “quirk” about it. You want a performance measure that compares a projected finish date to the original baseline date? That comes from the CPM system, and for experts to blithely fail to take this into account is sophistry.
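The convergence is easy to demonstrate from the standard definitions (SV = EV - PV, SPI = EV / PV). Here is a minimal Python sketch; the project figures are hypothetical:

```python
# A minimal sketch of the standard EVM schedule metrics, using hypothetical
# figures: SV = EV - PV, SPI = EV / PV.

def schedule_metrics(ev, pv):
    """Return (Schedule Variance, Schedule Performance Index)."""
    return ev - pv, ev / pv

BAC = 1_000_000  # Budget at Completion (hypothetical)

# Mid-project and running behind: $400k of work earned against $500k planned.
print(schedule_metrics(ev=400_000, pv=500_000))  # (-100000, 0.8)

# At completion, even on a late-finishing project, every budgeted dollar of
# work has been earned, so EV = PV = BAC by definition...
print(schedule_metrics(ev=BAC, pv=BAC))          # (0, 1.0)
# ...which is the "perfect schedule performance" the quoted passage mocks.
```

As the sketch shows, SPI reaching 1.00 at the end is a consequence of the definitions, not a defect; the on-time-or-late question belongs to the CPM system.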
I understand that the concept of earned schedule has received a lot of attention and accolades in the PM world. So has risk management. So has communications management. I also understand that it only took a random Cairn Terrier to pull back the curtain on the Wizard of Oz.
In the Steve Martin movie The Man With Two Brains, Martin’s character, Dr. Hfuhruhurr (you’ll need to see the movie to hear how it’s pronounced) is pulled over by the Austrian police while driving at excessive speeds. They perform a field sobriety test that includes the following steps:
…all of which Dr. Hfuhruhurr performs successfully, while commenting “&^%$* your drunk tests are hard!”
Prior to seeing this movie I had never heard of the Katalina Matalina song, but I understand it’s fairly familiar to school-age children. The chorus goes like this:
Hoca poca loca
Was her name.
The verses are hardly better. I would go on, but I’ve probably already sunk the Flesch-Kincaid Grade Level Readability Calculator score for this blog to levels so low that the ProjectManagement.com editorial staff may automatically reject it.
Meanwhile, Back In The Project Management World…
As our Earned Value and Critical Path Methodology (EVM/CPM) cost/schedule control systems go hurtling down the Project Management Information highway, they will sometimes attract the attention of PM constables, who will pull them over and politely but authoritatively ask them to perform a few simple tests to determine their validity. It’s easy to see how a lot of these tests go directly to system efficacy, such as:
…among others. And, while the PM constables are usually very polite, the clear implication is that, should the Project Management Information System being evaluated happen to fail these tests, the takeaway would be that the Project Team is incompetent, deceitful, or both.
Aiding these PM constables in their duties are software tools that can scan large CPM networks or EV systems. This is all well and good, but I have to ask: what happens when a one-size-fits-all software package intended to check system integrity is run against the Earned Value or Critical Path Methodology-based systems associated with atypical projects?
Here’s the situation using the Game Theorists’ favorite tool, the Payoff Grid:

- Scenario A: the PMIS has genuine problems, and the checking software flags them.
- Scenario B: the PMIS has genuine problems, but the checking software misses them.
- Scenario C: the PMIS is sound, but the checking software returns a list of errors.
- Scenario D: the PMIS is sound, and the checking software passes it.
In Scenarios A and D, the system integrity-checking software has performed as intended, and needs no further evaluation. However, if there are genuine problems with the PMIS being evaluated and the software doesn’t pick up on them (Scenario B), then it looks really bad for that package, particularly if the subject project ends up overrunning or coming in rather late with no early warning from the PMIS. In those instances where the PMIS is really okay, but the software came back with a list of errors (Scenario C), the natural inclination for the Project Team is to chase those to ground. Prior to this forensic analysis and pursuit of remedies, one question should be asked: are all of the checking software’s tests relevant?
Consider, for example, the old saw that if the number of activities (or the percentage of their budgets) using the Level-of-Effort (LOE) method to claim their Earned Value exceeds a threshold of anywhere from 5% to 15%, this is indicative of error. I understand the value of using the more discrete methods of claiming EV, such as direct units or weighted milestones, when plausible. But for projects that are more service-oriented than others, LOE will almost certainly be the most appropriate EV method for a plurality, if not a majority, of their activities. Evidence for this assertion lies in the fact that, ironically, the Project Management task in almost all projects is invariably tracked using LOE, meaning that it’s entirely possible that a PM consulting firm performing a baseline integrity review using one of these software packages wouldn’t get a passing grade for its own PMIS.
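The check in question is simple enough to sketch in Python. The activity data and the 15% threshold below are hypothetical, drawn from the 5%-to-15% range mentioned above:

```python
# A sketch of the Level-of-Effort (LOE) integrity check discussed above.
# The activity list and the 15% threshold are hypothetical illustrations.

def loe_budget_share(activities):
    """activities: (budget, ev_method) pairs. Returns the LOE fraction of budget."""
    total = sum(budget for budget, _ in activities)
    loe = sum(budget for budget, method in activities if method == "LOE")
    return loe / total

project = [
    (50_000, "weighted milestones"),
    (30_000, "direct units"),
    (15_000, "LOE"),  # e.g., the Project Management task itself
    (5_000,  "LOE"),
]

share = loe_budget_share(project)
print(f"LOE share of budget: {share:.0%}")          # LOE share of budget: 20%
print("flag for review" if share > 0.15 else "ok")  # flag for review
```

Whether that 20% LOE share is actually a defect is exactly the judgment call a one-size-fits-all tool gets wrong on service-heavy projects.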
Don’t misunderstand – I’m all for PMIS integrity, and for the software tools available that can help attain it. I’m also in favor of the detection of drunk drivers in Austria. I just think that having to walk on my hands on a straight line to establish sobriety is a bit excessive.
Besides, I don’t even know all the words to the Katalina Matalina song.
Regular members of GTIM Nation know of my disdain for risk management (no initial caps) as currently practiced; however, there is no truth to the rumor that I have stated that it is a total and complete waste of time, that it only serves to muddy the Management Information System (MIS) waters, or that I have compared its practitioners to members of the genus Mustelidae. I have, though, regularly stated that it fails two of my three criteria for valid management information systems, namely that they be:
Being the traditional kind of PM that I am, I think it’s clear that the standard methodologies of Earned Value and Critical Path represent valid systems, while risk management[i] doesn’t. But curmudgeonly blogger talk is cheap: what can we see in legitimate management science space that would convincingly point to the conclusion that risk management[ii] methods are objectively inferior to, say, Earned Value?
To set up this test, let’s first find a common output from each system. Risk management (I only used an initial cap on the word “risk” because it started the sentence) cannot tell you:
…all of which, the alert reader will realize, are highly relevant pieces of information. Conversely, Earned Value Management Systems cannot tell you:
…all of which, the alert reader will realize, are fairly irrelevant, with the possible exception of the third bullet (but even that is highly dependent upon the ability to accurately capture its underlying assumptions’ data, which is itself suspect). So, what relevant piece of information do both types of systems assert an ability to generate?
“I’ll take ‘Variances at Completion’ for $1000, Alex.”
To be sure, this apparently crucial piece of information doesn’t come easily to the risk managers[iii]. In order to provide an accurate prediction of how much a given project will cost when it is complete and how long it will take, and to compare those figures to the original baseline estimates, the following processes and considerations must come into play:
Note that these steps are not scalable. The alternative scenarios either happen, or they do not. At that point, the entirety of their estimated impact is the “right” number, or it is not. The act of multiplying the impact amount by the probability of the scenario’s occurrence yields a blended figure that can never match the actual outcome.
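The probability-weighted arithmetic in question is easy to sketch; the risk register below is hypothetical, and the point is that the "expected" result is a blend that no real outcome can ever equal:

```python
# Expected monetary value: sum of (probability x impact) over a risk register.
# All scenario figures are hypothetical.

risk_register = [
    # (probability of occurrence, cost impact if the scenario occurs)
    (0.25, 200_000),
    (0.125, 400_000),
]

emv = sum(p * impact for p, impact in risk_register)
print(emv)  # 100000.0
# Yet the only outcomes that can actually occur are 0, 200000, 400000, or
# 600000: the weighted figure is guaranteed never to match reality.
```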
So, how would an Earned Value system provide this same information?
Note that these steps are perfectly scalable. They will return a figure accurate to within ten points at whatever level of the WBS is being assessed. Note also that it requires no special expertise to perform (setting up the original baselines does require some level of competence, but that would need to be done for the risk managers anyway). A grade schooler could do it.
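The textbook cumulative-CPI Estimate at Completion is presumably the calculation meant here; whether it matches the author's exact steps is an assumption, and the status figures below are hypothetical:

```python
# The textbook cumulative-CPI Estimate at Completion, with hypothetical figures.
# CPI = EV / AC; EAC = BAC / CPI; VAC = BAC - EAC (negative = projected overrun).
# The same three divisions work at any level of the WBS, hence the scalability.

def variance_at_completion(bac, ev, ac):
    cpi = ev / ac      # Cost Performance Index
    eac = bac / cpi    # Estimate at Completion
    return bac - eac   # Variance at Completion

# Hypothetical status: $1M baseline, $400k earned, $500k actually spent.
print(variance_at_completion(bac=1_000_000, ev=400_000, ac=500_000))  # -250000.0
```

No probability-weighting, no scenario elicitation: three divisions and a subtraction, repeatable at any WBS level.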
So, I put it to my readers, both members of GTIM Nation and occasional visitors: which information stream do you think is superior?
[i] No initial caps.
[iv] …which has to be one of the goofiest terms in all of management science.