Game Theory in Management

Modelling Business Decisions and their Consequences


On Throwing Henrietta Into The Flames

In the 1956 movie Around The World In 80 Days, protagonist Phileas Fogg has embarked on a steamer from New York towards Europe and the final leg of his epic voyage, but there’s (yet another) problem: the steamer Henrietta doesn’t have enough fuel on board to complete the crossing at the speed that Fogg needs to maintain. Fogg’s solution is brilliant: once the coal is exhausted, he offers to purchase the ship, and then dismantles it en route to feed the boilers. He has the crew remove any piece or part that meets all of the following criteria:

  • It’s flammable,
  • It doesn’t add to the vessel’s seaworthiness, and
  • It has nothing to do with the propulsion system.

As it turns out, the ship’s figurehead, a wooden statue of a woman (who’s apparently been nicknamed “Henrietta”) qualifies, and is detached to feed the furnaces, prompting the ship’s first mate (played to perfection by Andy Devine) to lament, “Not Henrietta!”

Meanwhile, Back In The Project Management World…

GTIM Nation members may not be engaged in a massive bet to circumnavigate the globe on a schedule more aggressive than thought possible for the current state of transportation, but there are many parallels between what we do and the movie Around the World in 80 Days, and this scene in particular. Suppose you are the head of your organization’s Project Management Office (PMO), and a PM comes to you asking for help in setting up her Project Management Information System (PMIS) for a moderate-sized, high-profile project, but with two caveats: she needs her cost/schedule measurement systems set up very quickly, but doesn’t have much budget set aside for that part of the work. Referring back to my “quality, availability, affordability, pick any two” axiom, she is communicating that she needs a system that fulfills the latter two. What’s your implementation strategy going to be?

I believe it’s rather self-evident that a basic Earned Value Management (EVM) system must be set up first. After all, even a crude EVM implementation can deliver critical cost AND schedule performance information with a minimum of cost or time. Assuming a functioning Work Breakdown Structure (WBS) is available, and our friends the accountants have agreed to set up the General Ledger to accumulate actual costs at the reporting level of said WBS, the only other pieces of data needed to make the EVM system functional and effective are the total budget for each task at the reporting level of the WBS, and an estimate of the percent complete for those tasks as of the end of each reporting period. Work Package and Control Account Managers tend to be very busy, but reporting a simple percent complete figure for their work is probably the easiest thing they have to do each month.
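
To make this concrete, here is a minimal sketch of the arithmetic such a bare-bones EVM system performs, assuming nothing more than a flat list of reporting-level WBS tasks, each with a budget, a reported percent complete, and cumulative actuals from the General Ledger. The field names and figures are illustrative only, not drawn from any particular tool.

    # Minimal Earned Value roll-up: one record per reporting-level WBS task.
    # All names and numbers below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Task:
        wbs_id: str
        budget: float            # Budget at Completion (BAC) for the task
        percent_complete: float  # 0.0 - 1.0, reported by the WP/CA manager
        actual_cost: float       # cumulative actuals from the General Ledger

    def roll_up(tasks):
        """Return project-level BAC, Earned Value, actuals, and CPI."""
        bac = sum(t.budget for t in tasks)
        ev = sum(t.budget * t.percent_complete for t in tasks)
        ac = sum(t.actual_cost for t in tasks)
        cpi = ev / ac if ac else None
        return {"BAC": bac, "EV": ev, "AC": ac, "CPI": cpi}

    tasks = [
        Task("1.1", budget=100_000, percent_complete=0.50, actual_cost=60_000),
        Task("1.2", budget=250_000, percent_complete=0.20, actual_cost=40_000),
    ]
    print(roll_up(tasks))  # {'BAC': 350000, 'EV': 100000.0, 'AC': 100000, 'CPI': 1.0}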

Next up, if time and budget are available, would be the data needed to set up a Critical Path network. This requires the additional data points of Work Package (or constituent activity) duration, plus which other activities must finish before the one being captured can begin. With those durations and schedule logic links, a Critical Path network can be established, which can predict the project’s duration as well as pinpoint which activities have float, and how much. Piggybacking on the percent complete figures already being collected for the Earned Value part of the system, the CPM-based schedule can also provide accurate and reliable projections of project duration.
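
Here is a compact sketch of how those two additional data points (durations and predecessor logic) turn into a Critical Path calculation via the standard forward and backward passes. The activity names, durations, and links are invented for illustration, and a real scheduling tool would topologically sort the network rather than rely on listing order.

    # Toy Critical Path pass over a network defined only by durations and
    # predecessor logic. All activities and durations are hypothetical.
    activities = {
        "A": {"dur": 5, "preds": []},
        "B": {"dur": 3, "preds": ["A"]},
        "C": {"dur": 8, "preds": ["A"]},
        "D": {"dur": 2, "preds": ["B", "C"]},
    }

    # Forward pass: earliest start/finish (assumes predecessors are listed first).
    es, ef = {}, {}
    for name, act in activities.items():
        es[name] = max((ef[p] for p in act["preds"]), default=0)
        ef[name] = es[name] + act["dur"]
    project_duration = max(ef.values())

    # Backward pass: latest start/finish, which yields total float.
    ls, lf = {}, {}
    for name in reversed(list(activities)):
        succs = [s for s, a in activities.items() if name in a["preds"]]
        lf[name] = min((ls[s] for s in succs), default=project_duration)
        ls[name] = lf[name] - activities[name]["dur"]

    for name in activities:
        total_float = ls[name] - es[name]
        tag = "CRITICAL" if total_float == 0 else f"float={total_float}"
        print(f"{name}: ES={es[name]} EF={ef[name]} {tag}")
    print("Project duration:", project_duration)  # 15 in this toy network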

As far as I’m concerned, these are the basics. They have the added advantage of being fairly easy and inexpensive to create and maintain, and pass along the most crucial information elements needed for successful Project Management: how the project is performing in cost and schedule space. But what about all of those other management information streams that the experts maintain must also be set up, like risk analysis, or bottoms-up Estimates at Completion (EAC)? Well, let’s take a look at those, shall we?

As I’ve noted in previous blogs, a calculated EAC is both more accurate and far easier to generate than the so-called bottoms-up variety, which entails re-estimating the remaining work and adding that figure to the cumulative actual costs. Re-estimating the remaining work – essentially, recreating the cost baseline (and turning the previous one to rubber) – requires professional estimators using off-the-shelf software just to get within 15% of the eventual figure. Conversely, almost anybody who has graduated 4th grade can divide the cumulative percent complete into the cumulative actuals, which consistently lands within 10%.
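
The arithmetic being described is literally one division, and it’s worth noting that dividing cumulative actuals by cumulative percent complete is algebraically the same thing as dividing the Budget at Completion by the Cost Performance Index. The figures below are hypothetical.

    # "Calculated" EAC: cumulative actual cost divided by cumulative percent
    # complete (equivalent to BAC / CPI when EV = BAC x percent complete).
    # The numbers are illustrative only.
    def calculated_eac(cumulative_actuals: float, percent_complete: float) -> float:
        return cumulative_actuals / percent_complete

    print(calculated_eac(2_400_000, 0.30))  # -> 8000000.0 projected at completion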

Then we have my favorite foils, the risk managers. To perform a risk analysis worthy of the name, the following data points must be collected, almost always from the very Work Package Managers you need out there actually performing scope:

  • Alternative scope scenarios to the ones in the existing cost and schedule baselines (yes, you read that right: a proper risk analysis can’t even begin without already-established scope, cost, and schedule baselines),
  • The odds of each of those scenarios coming about (inevitably, speculation, or out-and-out guesses),
  • The cost impact of each scenario (again, you’ll need professional estimators with COTS estimating software to approach 15% accuracy),
  • …and the duration impact of each scenario.

After your risk analysts have collected all of this data, and have either chewed on it themselves or loaded it into some other software to crunch, what does it return? A list of things that the PM should be worried about, expressed in Gaussian jargon. That’s it.
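
For illustration only, here is a minimal sketch of the kind of arithmetic all of that collected data ultimately feeds, shown as a simple probability-weighted roll-up rather than a full Monte Carlo run. Every scenario, probability, and impact figure below is invented.

    # Probability-weighted roll-up of risk scenarios: a simplified stand-in
    # for the software "crunch" described above. All inputs are invented.
    scenarios = [
        # (probability, cost impact in dollars, schedule impact in days)
        (0.30, 150_000, 10),
        (0.10, 400_000, 25),
        (0.05, 900_000, 60),
    ]

    expected_cost = sum(p * cost for p, cost, _ in scenarios)
    expected_delay = sum(p * days for p, _, days in scenarios)
    print(f"Expected cost impact: ${expected_cost:,.0f}")          # $130,000
    print(f"Expected schedule impact: {expected_delay:.1f} days")  # 8.5 days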

Meanwhile, Back On Board The S.S. Henrietta…

So, we’re looking over this fine ship, and realizing we won’t get to on-time, on-budget delivery without shedding a lot of extraneous weight – the not-truly-needed features of the PMO function. Do I have to say it? The bottoms-up EAC insisters and the risk management crew require a great deal of data (which can only really be supplied by the WP and Control Account Managers), take a lot of time and money, and ultimately deliver information that is either irrelevant or inaccurate, while the correct answers can be derived far more quickly and cheaply. The which-parts-of-the-PMO-information-stream-needs-to-go decision becomes rather easy.

Throw that figurehead into the flames. And don’t think twice.

Posted on: June 10, 2019 10:53 PM | Permalink | Comments (2)

Integration, Smintigration

Much has been made of the issue of project baseline integration. And when I say “much,” I mean virtual tons of pixel ink that need not have been sacrificed on the altar of this suspect quality metric. One of those guidance-generating organizations that I consistently refrain from naming has made baseline integration pretty much the basis for assessing the quality of any Critical Path Methodology schedule network in its relation to the same project’s Earned Value Management System time-phased budget.

For those members of GTIM Nation who are unfamiliar with what I’m talking about, here’s a quick primer. For (what are considered to be) highly robust Project Management Information Systems, the development of the Performance Measurement Baseline (PMB) is a big deal. A typical sequence involves defining the scope fairly precisely, and then decomposing it into a Work Breakdown Structure (WBS). Once the scope has been defined down to the Work Package level, the estimators, schedulers, or both are brought in to excessively pester the WP managers about (in the case of the former) which resources are going to be needed to complete the activity, and (for the latter) how long it’s expected to take, which activities need to finish before the one under the microscope can start, and whether the WP manager is aware of which activities need this one to finish before they can start. If the estimator and the scheduler are not the same person, or are not both in the room, this is the first opportunity for “baseline integration” to become an issue. Some “experts” will insist that the cost estimate must be derived first, and then loaded into the CPM software for time-phasing. Others will insist that the schedule be established first, and then resources added (from an organization-wide and approved Master Resource Dictionary, no less) to derive the cost estimate. I have seen adults shouting about this very disagreement. No, I am not making this up.

Of course, even if the relevant (and even the irrelevant) parameters of the two baselines line up at the start, this only means the PM’s problems are just starting. You see, most estimating packages assume an annual adjustment to rates to accommodate inflation; however, Master Resource Dictionaries can be updated at any time, usually quarterly. After three months, the cost estimate as represented in the estimators’ version of the Cost Baseline suddenly doesn’t exactly match the time-phased budget as represented in the Schedule Baseline. The baselines aren’t integrated! It’s PM Armageddon!

If GTIM Nation thinks this is an absurd state of affairs, wait ‘til I tell you about what happens when a Baseline Change Proposal (BCP) is submitted. Assuming the BCP is reasonable, its changes to the Cost and Schedule Baselines are expected to happen almost immediately. This means an almost instantaneous and thorough update to those documents that retain information on the affected Work Packages, and the Cost Accounts that they impact, on up the WBS chain. A short list of these documents includes:

  • The affected WPs themselves, including
    • Scope description
    • Cost estimate
    • Schedule parameters
    • Selected method of ascertaining percent complete
  • Their “parent” Control Accounts
  • The WBS Dictionary
  • The risk management plan
  • The Change Control Log
  • The Contingency Budget (if applicable)
  • Cost Performance Reports
  • Schedule updates
  • Variance Analysis Reports…

…and I’m not joking about this being the short list. The kicker? None of it changes cost and schedule performance system quality. None of it. I can prove it.

Let’s do a little thought experiment, shall we? Posit that a project has both a Cost Baseline and a Schedule Baseline, loaded into an EVMS and CPM software, respectively. But, while these two software platforms could communicate with each other, they don’t, because the head of the Cost Performance analysts can’t stand the head of the Schedulers, and vice versa. Let’s further assume that the Earned Value baseline is wildly optimistic, with its Estimators placing the Budget at Completion at $5M (USD). In reality, the project will come in at $10M, but the Cost guys are spending too much time pranking the Accountants, and can’t be bothered to cross-reference their estimates. Conversely, the Schedulers are spot-on with their original Critical Path Network, and show a project completion date of the Start Milestone plus one year. The baselines aren’t even remotely “integrated,” and any comparison of their parameters will quickly reveal this.

So, our little thought experiment project gets underway, and three months along both the Cost and Schedule teams pull status (i.e., glean the percent complete of each WP from their managers). Everybody’s on-schedule, so the overall project’s percent complete is compiled at 25%. Since 25% of the project’s planned duration has gone by, the Schedule Team doesn’t have to explain any surprises. Conversely, the Cost Team is comparing their Earned Value (BAC * % Complete, $5M * 0.25, or $1.25M) against a cumulative actual cost of $2.5M. Flawed as it is, the Cost Baseline is now reflecting an Estimate at Completion, based on the derived Cost Performance Index of 0.5, of an overrun of $5M, accurately forecasting the $10M total cost. The fact that the Cost Baseline and Schedule Baseline were wildly inconsistent with each other had no impact whatsoever on each system’s ability to predict the project’s ultimate at-completion parameters. Indeed, to assert that the baselines must be in complete agreement is to insist that an EVMS and a CPM Schedule Network cannot operate independently on the same project, which is clearly absurd.
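
Spelled out with the standard EVM formulas (using the thought experiment’s own numbers):

    # The thought-experiment status point, run through the standard formulas.
    bac = 5_000_000          # the optimistic Budget at Completion
    percent_complete = 0.25  # status at the three-month mark
    actual_cost = 2_500_000  # cumulative actuals at that point

    ev = bac * percent_complete  # Earned Value (BCWP) = 1,250,000
    cpi = ev / actual_cost       # Cost Performance Index = 0.5
    eac = bac / cpi              # Estimate at Completion = 10,000,000
    print(ev, cpi, eac)          # matches the "real" $10M outcome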

Now, don’t get me wrong – I’m not saying that the Cost and Schedule Baselines ought not to be completely consistent with each other. What I am saying, though, is that, should they disagree in part (or even entirely), it does not represent a cataclysmic loss of system quality.

Posted on: June 03, 2019 10:24 PM | Permalink | Comments (4)

The Ultimate PM Quality Test

I love reading Nassim Taleb’s books. I’m currently devouring Skin In The Game (Random House, 2018), where Taleb passes along this gem (among many): on the basalt slab that contains the Code of Hammurabi, it is written that if a builder builds a house for someone, and that house later collapses and kills its owner, then the builder shall be put to death. This brings a few things to mind:

  1. Although it’s next to impossible to imagine what life was like 3769 years ago, I think it’s reasonable to infer that the presence of this law on the basalt tablet that essentially contained the entire codex of civilization’s guidelines indicates that catastrophically poor house construction was a real problem. Seriously – consider the last time you perused a major university’s law library. Now think about the reduction of the entire collection to 282 laws, and the significance of this kill-the-poor-quality-house-builder law making that final cut.
  2. Whaddaya wanna bet house construction quality improved significantly after 1754 B.C.?
  3. Since we don’t go around killing the principals involved in failed residential construction projects, what has taken that practice’s place? Municipal building codes. But this implies that, in exchange for demonstrably attaining the exacting quality standards set forth in local law, the builders can’t be (legally) killed for subsequent fatal failures. So, the next time you have a contractor, say, build an addition to your house, and the local code inspectors appear to rubber-stamp approvals without rigorous inspections, it doesn’t mean your construction is of high quality. It just means you have limited legal recourse if the new structure collapses.

Meanwhile, Back In The Project Management World…

According to zdnet.com[i], 68% of Information Technology (IT) projects fail. And before the non-IT PM members of GTIM Nation roll their eyes and think “Yeah, well, that’s why they invented Agile/Scrum,” consider: virtually all of the management information you use to make the decisions that allow your projects to succeed is the product of one of the residual 32% of IT projects. Critical Path and Earned Value Management Systems don’t simply fall off the management information system tree into their consumers’ outstretched hands. Even if the existing systems appear to be sustainable, it doesn’t necessarily mean that they were successful. There must have been quite a few Babylonians who moved into seemingly sound houses who were ultimately disappointed (or dead) in order for the kill-the-builder law to have entered the Code of Hammurabi, after all.

So, how do we know if our Project Management Information Systems (PMISs) are of reliable quality? There are a bunch of software programs out there that purport to conduct a quality control check of Critical Path networks or EV systems’ output, and in some cases they do a pretty good job. I can’t say “most cases,” because I am not convinced that in most cases they perform a relevant function. Some of the systems that I find particularly disappointing perform functions such as checking the number of start-to-start logical links among activities in a Critical Path network, and expressing the results as a percentage. For some reason, the conventional wisdom has averred that a percentage of such logic ties above a certain (rather low) threshold should be considered evidence of poor quality. Why? Because of the narrative that a high percentage of start-to-start logic links carries with it an enhanced possibility of returning artificial schedule variances.
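
A sketch of the sort of mechanical check these packages run, assuming nothing more than a list of logic ties with their types. The link data and the 10% threshold are invented for illustration; actual tools use their own cutoffs.

    # Count start-to-start (SS) ties as a share of all logic links and compare
    # against an arbitrary threshold. The link list and cutoff are hypothetical.
    links = [
        # (predecessor, successor, link type)
        ("A", "B", "FS"),
        ("A", "C", "SS"),
        ("B", "D", "FS"),
        ("C", "D", "SS"),
        ("D", "E", "FS"),
    ]

    ss_share = sum(1 for *_, kind in links if kind == "SS") / len(links)
    THRESHOLD = 0.10
    verdict = "flagged as poor quality" if ss_share > THRESHOLD else "passes"
    print(f"SS links: {ss_share:.0%} -> {verdict}")  # SS links: 40% -> flagged as poor quality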

Do I have to say it? This is just plain wrong, an attempt to slather on yet another layer of complexity in the name of PM maturity or quality. Clearly the primary quality test for any PM information system is whether or not it provided prior warning of a task at the reporting level coming in late or overrunning its budget. Now, had some researcher conducted an analysis of hundreds of projects that had actually come in late or overrun, and had traced a common pathology of their PMISs to artificially-generated positive schedule variances masking real negative ones until it was too late to efficiently correct[ii], then I would have no objections. But if that’s what happened, this researcher has not shown his work. Here’s the kicker: if this speculated fear were true, it would mean that those projects with “excessive” start-to-start logic ties would tend to report artificial negative variances, as those activities that could start alongside others actually didn’t. It’s analogous to the ancient Babylonian house appearing to be on the verge of collapse without ever actually losing structural integrity, and the law changing to “Whosoever builds a house that the owner perceives is about to collapse, causing death, but never actually falls down or injures anyone, well, that builder should be put to death anyway.”

Well, enough of this “put to death” business. In its place, though, can we have the start-to-start logic-tie Ringwraiths replaced with proponents of a real QC standard, like “did the PMIS accurately predict the overrun or delay?”

 


[i] Retrieved from https://www.zdnet.com/article/study-68-percent-of-it-projects-fail/, which itself cited an article entitled The Impact of Business Requirements on the Success of Technology Projects; however, that link no longer appears to work (checked 9:39 MDT, 27 May 2019).

[ii] Similar to the approach that the excellent David Christensen used when researching the stability of projects’ Cost Performance Index in his now famous study.

Posted on: May 27, 2019 10:24 PM | Permalink | Comments (6)

Project Management Information Systems’ Fatal Quality Flaw

The primary information systems that allow PMs to make, well, informed decisions are predicated on two methodologies: Earned Value for cost and Critical Path for schedule (anyone expecting me to include the risk management guys in this list hasn’t been reading this blog for very long). While many pieces of insight flow from these systems when they are operating properly (or even a little improperly – these systems have amazing self-correcting capabilities), their primary outputs deal with the ability to accurately forecast when your project will finish up, and how much it will cost when it does. Because of these capabilities, those PMs who eschew EVM or CPM rarely stay within the ranks of management for very long, being easily out-performed by their better-informed competitors.

However, like any other Management Information System (MIS), Critical Path and Earned Value are susceptible to what is colloquially known as the garbage-in, garbage-out phenomenon. As my regular readers know, I maintain that all valid MISs share the same basic architecture, comprising the following three steps:

  1. Data is gathered based on a certain discipline, e.g., accountants need a handle on all expenditures and incomes to make a general ledger work.
  2. The data is processed into information, based on some methodology. For (again) accountants, this involves properly identifying the transaction’s place in the ledger, entering it as either a debit or a credit, and, at set intervals, computing the totals of the entries and comparing ledgers.
  3. The information is then delivered to decision-makers in a way that they can readily understand it. If a manager doesn’t know the difference between a balance sheet and a profit-and-loss statement, being fed information from the finest accounting system in the world loses much of its utility.

Based on this basic, three-step MIS architecture, any information system that messes up Step 1 is going to be vulnerable to the information delivered in Step 3 being inaccurate and, therefore, not actionable.

So, is there an aspect of PM that can have an influence on the quality of the data being collected for Earned Value or Critical Path analysis? Sadly, yes, and this aspect is captured in the old, cynical observation “All projects proceed on-time to the 95% complete point, and then stay at 95% complete forever.”

The percent complete parameter is central to both EVM and CPM; it’s how they produce all of their usable information. The original Cost/Schedule Control Systems Criteria (C/SCSC) recognized seven approved methods for collecting the percent complete figure (a brief sketch of a few of them follows this list):

  • Direct Units is the most objective. If your project is to make 100 widgets, and you need to know what your percent complete is, just go and count the completed widgets. This applies to square feet of concrete poured, or linear feet of pipes installed, or any other scope that can be directly measured.
  • Apportioned Units has to do with a proportional relationship with other, direct units. If each of your widgets needs two gonculators to make a complete unit, then measure the gonculator task on a two-to-one basis with the widgets.
  • For very short-term tasks, the 0 – 100 method is often used. You’re either completely done with the task, or you can’t claim any progress against it.
  • For tasks that should finish within two reporting cycles, the 50-50 method can be useful. You claim 50% complete once the task has started, and 100% once it’s finished; no other percent complete figures may be used.
  • Weighted Milestones are a bit more subjective, but still highly useful. This is where the PM decides the percentage weight of interim milestones within the task, such as, in performing an analysis, the initial investigation is worth 15%, first draft of the report equals 40%, second draft is 65%, submission of the report to the customer is 85%, and customer approval is 100%.
  • Level-of-Effort is kind of cheating at the percent complete game. The amount claimed as having been accomplished is set equal to the cumulative amount of the time-phased budget. This technique is set aside for those activities that don’t really have tangible output, like (ironically) the project management task.
  • Okay, here’s the fatal percent complete technique, the Milestone Estimate. This is where you contact the Control Account Manager (CAM), and request an estimate of the percent complete as of the end of the reporting period. This is also known as the “liars’ method.”
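
To make a few of these conventions concrete, here is a minimal sketch of how the 0–100, 50-50, and Weighted Milestone rules might be expressed. The function names and milestone weights are illustrative, not taken from any standard or tool.

    # Illustrative versions of three of the percent-complete conventions above.
    def zero_100(is_done: bool) -> float:
        """0-100 method: all-or-nothing credit."""
        return 1.0 if is_done else 0.0

    def fifty_fifty(started: bool, finished: bool) -> float:
        """50-50 method: half credit at start, full credit at finish."""
        if finished:
            return 1.0
        return 0.5 if started else 0.0

    def weighted_milestones(completed: set, weights: dict) -> float:
        """Claim the cumulative weight of the highest completed milestone."""
        return max((weights[m] for m in completed if m in weights), default=0.0)

    weights = {"investigation": 0.15, "draft1": 0.40, "draft2": 0.65,
               "submitted": 0.85, "approved": 1.00}
    print(weighted_milestones({"investigation", "draft1"}, weights))  # -> 0.4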

If one of your CAMs is in the habit of passing along a percent complete figure that knowingly overstates progress in order to avoid the scrutiny that would otherwise come from honestly reporting a negative variance, then neither your EVM nor your CPM will deliver accurate information about the impacted Control Accounts.

Fortunately, there are two effective fixes (and one hack) for this fatal MIS quality issue. First, minimize the number of Control Accounts or Work Packages that use the Milestone Estimate method. Level-of-Effort is routinely avoided, but even it doesn’t present the system-blasting capabilities of a mis-used ME technique. Second, only allow highly-trusted CAMs to be in charge of tasks being measured with the ME technique. And when I say “highly-trusted,” I do not mean “well-regarded.” I mean, if the CAM in question has ever indulged in that whole “projects proceed on schedule to 95%, and stay at 95% forever” business, they are removed from the trusted ranks forever. The hack I referred to is this: never allow any task to claim over 85% complete until it is completely done. I know, I know, this will introduce a few artificial negative cost and schedule variances into some long-lived tasks.
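
The hack amounts to a single clamp, applied before the reported figure ever reaches the EVM or CPM calculations. The 85% cap is the figure from the paragraph above; everything else here is illustrative.

    # Clamp any reported percent complete at 85% until the task is actually done.
    def capped_percent_complete(reported: float, is_done: bool, cap: float = 0.85) -> float:
        return 1.0 if is_done else min(reported, cap)

    print(capped_percent_complete(0.95, is_done=False))  # -> 0.85: the gamed figure is clipped
    print(capped_percent_complete(0.95, is_done=True))   # -> 1.0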

But it will also give you, the PM, a usable warning when one of your Control Account Managers has been gaming the Milestone Estimate method, and exploiting Project Management Information Systems’ fatal flaw.

 

Posted on: May 20, 2019 10:52 PM | Permalink | Comments (3)

Okay, Who’s Actually AGAINST Quality?

Well, I am, for one.

Don’t misunderstand – for any given good or service I procure, I want the highest possible quality version, and I become quite frustrated when the things I buy fail at their purpose, whether through ineffectiveness or a lack of longevity. Like anyone else, I tend to avoid those vendors who sharply disappoint me, usually forever.

“So,” I can hear GTIM Nation say, “why do you claim to be against quality?”

Because of the way it has been shoe-horned into the Project Management codex, that’s why. Examples follow.

The first reason I find Quality Management experts highly off-putting has to do with their tools, especially the Ishikawa, or fishbone diagram. According to WhatIs.com, a fishbone diagram

… is useful in brainstorming sessions to focus conversation. After the group has brainstormed all the possible causes for a problem, the facilitator helps the group to rate the potential causes according to their level of importance and diagram a hierarchy. The design of the diagram looks much like a skeleton of a fish. Fishbone diagrams are typically worked right to left, with each large "bone" of the fish branching out to include smaller bones containing more detail.[i]

Note the use of the term “brainstorm” in the first two sentences – clearly it’s a large part of creating the diagram. But science is not consensus, and consensus is not science, not even management science. If you are relying on the subjective opinions of the members of your team in order to uncover some hidden truth or insight, you’re doing it wrong. And, as if that wasn’t bad enough, the “facilitator helps the group rate the potential causes according to their level of importance.”[ii] Rate the level of importance? Based on what, exactly? It’s yet another opportunity for subjective inputs to influence the direction of the analysis of the problem under investigation.

Then there’s the whole issue of identifying causality. I’ve been in my share of Six Sigma quality reviews, and never, not even once, heard the facilitator mention the distinction between proximate and material causes. The short answer is that a proximate cause is clear and direct, as in the first domino caused the second one to fall over by being, itself, tipped over. The material cause would be that the second domino fell over because it had been stood up on edge right next to the first domino, thereby qualifying for the “if-not-for” test. This may seem like mere semantics at first glance, but the distinction matters enormously when it comes to properly identifying remedies to the problems being investigated. For example, one tactic in filling out the fishbone diagram is to ask the “Five Whys.” An example I’ve used before involves the sinking of the Titanic. The cause of the Titanic’s sinking, of course, was that it hit an iceberg on the night of 14 April 1912. One line of the “Five Whys” can go like this:

  • Why did the ship hit the iceberg? Because it couldn’t turn away from it in time.
  • Why couldn’t it turn away from it in time? Because the lookouts did not give the helmsman sufficient warning.
  • Why didn’t the lookouts provide sufficient warning? One reason was that they didn’t have access to the binoculars they would usually use.
  • Why didn’t they have access to their binoculars? Because they were locked away in a storage locker, and nobody on-board had the key.
  • Why didn’t anybody have the key? Because the person who had it had been fired before the Titanic set sail, and hadn’t handed it over when he left.

So, based on this “analysis,” some useful strategies in the avoidance of similar maritime disasters would include:

  • Don’t fire the person who has the keys to the binocular locker.
  • Or, if you do, make sure you get the keys back.
  • Alternately, you could ensure that multiple pairs of binoculars are carried on every voyage, and no two of them are kept in the same locker.
  • If, however, you find that the only pairs of binoculars are, indeed, in a locked locker, take an axe and destroy the door to the locker.
  • If your ocean liner company has a rule against axing open any locker, turn the ship around and go back to port where more pairs of binoculars can be procured (hopefully, quality ones [get it?]).

I could go on (and often do), but you see my point. Without a clearly articulated distinction between proximate and material causality, all the Ishikawa diagram does is help structure a conversation. It doesn’t really help identify the optimal Project Management response to a quality-driven PM disaster, whether one already experienced or one merely looming.

The other reason I’m opposed to the way Quality Management has been pushed onto the PM universe is due to the notion that those doing the pushing are somehow automatically afforded the intellectual high ground. Touching on the title of this blog, there’s a definite stigma attached to anyone who actually comes out and challenges the notion that Quality Management-types could ever be mistaken in their recommended strategies. But as I’ve been reminding GTIM Nation over the past few months, the old PM saw “Quality, Affordability, Availability: pick any two” is unavoidable. For the Quality guys to automatically assume that their favorite aspect of project delivery is a foregone conclusion strikes me as, well, lacking perspective.

Think about it: you don’t see Affordability or Availability getting their own sections in the PMBOK® Guide, do you?

 


[i] Retrieved from https://whatis.techtarget.com/definition/fishbone-diagram on May 12, 2019 at 17:14 MDT.

[ii] Ibid.

 

Posted on: May 13, 2019 10:22 PM | Permalink | Comments (1)