Game Theory in Management

Modelling Business Decisions and their Consequences


The Flying Buttresses of Success

According to a history of architecture website, in the twelfth century:

Somebody (nobody knows who) invented the flying buttress. Instead of the buttress being stuck to the side of the building, it would form an arch leading away from the building.

The flying buttress would start from the places at the top of the wall where the groin vaults were directing the weight of the roof. From there, the flying buttresses would carry the weight of the roof away from the building and down a column of stone to the ground. It wouldn’t matter what the walls were made of anymore, because they wouldn’t be carrying the weight of the roof.[i]

Some flying buttresses appear in building interiors as seemingly unsupported column tops, their load-bearing utility not intuitively obvious. Essentially, the flying buttress allowed the weight of the structure to be diverted away from the walls.

Another act of diversion, this one pertaining to human behavior, has to do with the oft-used cliché that success has many parents, but failure is an orphan. When I hear that phrase cited, my first reading of its meaning is a somewhat jaded view of human nature: we’re eminently susceptible to crafting or altering a narrative to make ourselves appear to be a vital part of a winning team, but hardly involved at all if that same (in this case, project) team crashes and burns.

But take that cliché together with another axiom, this one a quote from Charles Kettering (“99 percent of success is built on failure”[ii]), and a larger insight emerges. It goes without saying that, if we don’t own our mistakes, we never learn from them. But what happens when we glom on to successes where we did not provide the material cause, much less the proximate cause, of the favorable outcome? I think we see this very dynamic on display at so many PM-themed conferences, seminars, podcasts, and guidance documents: the attribution of success to some very questionable management science hypotheses.

Meanwhile, Back In The Project Management World…

Take, for example, the practice of time-phasing an Estimate to Complete (ETC). This rather silly practice has the full-throated approval of that guidance-generating-organization-that-must-not-be-named. It rests on several premises, including:

  • The ETC is derived from re-estimating remaining work (on a line-item level, no less) on an already-started project, instead of using the simple, accurate formula ETC = Estimate at Completion (EAC) – cumulative actuals;
  • Useful or relevant information can be gleaned from comparing budgets to actuals, and
  • Staffing decisions are based on differences between budgets and actual costs, rather than the nature of the remaining scope on the project.
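The simple formula referenced in the first bullet can be sketched in a few lines. This is a minimal illustration with hypothetical figures, not any organization’s prescribed method:

```python
# The simple, arithmetic ETC: whatever the project is expected to cost in
# total (the EAC), minus what has been spent so far. No line-item
# re-estimating required. Dollar figures below are hypothetical.

def estimate_to_complete(eac: float, cumulative_actuals: float) -> float:
    """ETC = Estimate at Completion (EAC) - cumulative actual costs."""
    return eac - cumulative_actuals

etc = estimate_to_complete(eac=1_200_000.0, cumulative_actuals=450_000.0)
print(etc)  # 750000.0
```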

These assumptions are suspect at best, and completely fraudulent at worst, being, as they are, poorly (or completely un-) supported by repeatable observations in any setting that would allow isolation of the variables upon which the assertion depends. In other words, there is absolutely no evidence, not even a single instance, of a project owing its success to its ability to time-phase its ETC. These assertions are simply arrived at by a bunch of self-anointed experts, who publish their “findings” (read: opinions) with vague but impassioned phrases such as “Doing X is key to project success.” Really? Can we see the data that supports that assertion? Even a single example of how X is “key” to a specific project’s success?

And the time-phased ETC example is but one of many. Enhanced analysis techniques in risk management, communications management, quality management … the list goes on and on, with no hard data supporting the assertions, but with the ubiquitous “…is necessary (or “key,” “essential,” or “central”) to project success” contained in a nearby paragraph. It’s maddening, really.

Here’s the painfully-obvious-to-the-most-casual-observer essential element of project success: execute the scope. All of the analysis techniques inherent in Project Management have the singular function of reflecting the project team’s performance as it executes the scope. Some of those techniques are truly “key,” with the Earned Value and Critical Path methodologies popping to mind. But many of the others are not; worse, they actually divert time and energy away from that load-bearing component of project success, executing the scope. In that respect, they represent the very opposite of the things that are “essential” to project success.

I kind of like comparing those pushing these unsupported and unsupportable assertions about what is “key” to project success to flying buttresses. It sounds like I’m calling them a juvenile and semi-profane name, when I am, in fact, referring to them as an architectural feature. So, it is in that spirit that I can say to these “experts,” metaphorically speaking, of course, and sincerely: stop being flying buttresses.






[i] Retrieved from on December 3, 2017, 9:55 MST.

[ii] Retrieved from on December 3, 2017, 8:12 MST.

Posted on: December 04, 2017 10:32 PM | Permalink | Comments (9)

What Government PMs Really Do Not Want

A basic characteristic of Project Management that is often given short shrift is its particular applicability to unique work. By definition, projects are unique, and have a definitive beginning and end, the latter occurring when the documented scope has been attained to the customer’s satisfaction. But new software, buildings, and devices are being created all the time, right? And, in most cases, these projects deliver an improvement or advancement over the thousands of software packages, buildings, and devices already in existence, while still sharing many (if not most) of their predecessors’ characteristics. So, yeah, the project is unique in many ways, but it’s also very much like hundreds, if not thousands, of projects that went before, which is where the performance goals of such work tend to be derived. And this is where a lot of government PMs get really frustrated.

Consider some of history’s greatest and truly unique projects, like the Manhattan Project, or the Apollo space missions. Yes, bombs, and rockets that could carry people into space, already existed, but these were different. No explosive had ever harnessed the power of the atom, and no space vehicle had ever come close to allowing an occupant to step foot on another heavenly body. Past the attainment of the primary scope, what could be used to evaluate these projects’ performance?

The unfortunate tendency here is for those outside the PM structure to invoke performance parameters from the asset managers’ realm, i.e., the accountants, with the Return on Investment (ROI) being prime. There is a rather funny story about a time when Manhattan Project scientists were creating a device that needed large amounts of copper, then in short supply due to the war effort. However, silver had similar electrical characteristics, so the scientists asked Undersecretary of the Treasury Daniel W. Bell for 6,000 tons of silver bullion. Bell responded “Young man, you may think of silver in tons, but the Treasury will always think of silver in troy ounces!”[i] A quick calculation later, and the request was amended to 430,000,000 troy ounces of silver, whereupon it was granted.

Now, put yourself in the shoes of a government oversight manager. Could you even begin to calculate the ROI of 6,000 tons of silver being diverted to industrial purposes? Even if you had the security clearance to know the precise nature of the science behind the request, it would be quite impossible to capture the value of contributing to the radical altering of the balance of geopolitical power that would result from the successful development of atomic weapons.

Something similar happened within the Apollo Program. Pushing mass into outer space is very expensive, and the need to miniaturize and lighten the electronics needed to reach the moon became a priority. The solution came in the form of the integrated circuit. As put by Sharon Gaudin at the Computerworld website,

The integrated circuit, the forerunner to the microchip, is basically a miniaturized electronic circuit that did away with the manual assembly of separate transistors and capacitors. It revolutionized electronics and is used in nearly all electronic equipment today. While Robert Noyce, co-founder of Fairchild Semiconductor and later Intel Corp., is credited with co-inventing the microchip, Jack Kilby of Texas Instruments demonstrated the first working integrated circuit, which was built for the U.S. Department of Defense and NASA.[ii]

The total cost of the Apollo Program was reported to the United States Congress in 1973 at $24.5B (USD). Since integrated circuits, and their progeny the microchip, are used in virtually all computers today, what can be said of their ultimate “value”? Microsoft is worth $483B[iii], Google is worth $101.8B[iv], and Amazon is worth $430B[v], and these are just three examples of prominent computer-based enterprises. None of these organizations would exist if not for the widespread use of personal computers, which would not exist if not for the technology that brought us integrated circuits and microchips. The government program to put a man on the moon in the 1960s would radically alter the world’s economies, to the point that the United States’ $24.5B investment, as hefty as it must have seemed in 1973, has to be seen as comically small in terms of the economic benefits it eventually wrought. Just the three companies cited above represent a 745% return, adjusted for inflation. But just 10 years after Apollo 11 landed on the moon, a Gallup survey indicated that only 41% of Americans thought the benefit of landing on the moon outweighed its cost.[vi] Really.
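As a back-of-envelope check of that 745% figure: the 1973-to-2017 inflation multiplier of roughly 5.56 is my own CPI-based assumption; the valuations are the ones quoted above.

```python
# Back-of-envelope reconstruction of the ~745% return figure.
# The inflation factor is an assumed approximate CPI multiplier,
# 1973 -> 2017; valuations are as quoted, in billions of USD.

apollo_cost_1973 = 24.5            # $B, as reported to Congress in 1973
inflation_factor = 5.56            # assumed CPI multiplier, 1973 -> 2017
apollo_cost_2017 = apollo_cost_1973 * inflation_factor   # ~$136B

valuations = {"Microsoft": 483.0, "Google": 101.8, "Amazon": 430.0}
total_value = sum(valuations.values())   # 1014.8

roi = total_value / apollo_cost_2017
print(f"{roi:.0%}")  # 745%
```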

So, what is it that government PMs really do not want? They don’t want their truly unique projects’ performance to be evaluated unfairly. Anything can be made to look like a failure via an irrelevant comparison, and newly discovered technologies, by definition, are, at least to some degree, incomparable.

If nothing else, can we at least stop pretending that the Return on Investment figure has a place in evaluating project performance?

[i] Manhattan Project. (2017, November 20). In Wikipedia, The Free Encyclopedia. Retrieved 04:46, November 26, 2017, from

[ii] Retrieved from, 20:05 MST, 25 November 2017.

[iii] Retrieved from, 20:12 MST 5 November 2017

[iv] Retrieved from, 20:14 MST, 25 November 2017

[v] Retrieved from, 20:17 MST 25 November 2017.

[vi] Retrieved from, 18:19 MST, 26 November 2017.

Posted on: November 27, 2017 09:20 PM | Permalink | Comments (3)

What Government/Contractor PMs Really Want

I think that there’s a major management science issue inherent in managing projects for the government – any government – but I have never seen it addressed in the PM periodicals. It has to do with the nature of the real customer, and how far removed they are from the actual transaction(s). I’ll explain by way of example.

If I decide I want to buy a product or a service, I’ll do some research on the desired good or service, and will generally do more research as the anticipated price goes up. By the time I initiate the actual transaction, I’m spending my own money – based on how much I have budgeted (or can afford), balanced against the level of quality or capability I have determined best for my uses. In this scenario, waste is minimized – I have set the budget, and I have determined the parameters for a successful acquisition, so I’m in a position to let the competition among suppliers determine the best price for my target. This kind of purchase is known as a first-party transaction.

In those instances where I am spending my own money, but for someone else’s benefit (gift-giving, for example), the price is still important to me, but I’m a little less concerned about the “perfect fit” aspect of the transaction. In those cases where I’m buying a gift for a family member who has been very specific about what they want, I do my best to accommodate them, assuming their very specific request fits within my budget. Similarly, if I am the recipient of a product or service that I’m not paying for (like being given a gift), then I’m keenly interested in the quality of the product or service, but a bit less concerned about its price. These are known as second-party transactions, and they have an increased opportunity for waste or abuse, since the spending and expected-quality parameters are not as precisely aligned as in first-party transactions.

Finally, in those instances where I’m neither the person paying for the product or service nor its recipient or consumer, the opportunities for waste or abuse are magnified further, due to the additional misalignment between an exact budget figure and a clear picture of precisely what is to be delivered. These are known as third-party transactions. Much of what most Western industrialized governments spend their money on falls into this category, as in government-furnished food, housing, or health care, and, predictably enough, much fraud, waste, and abuse is present in such programs.

Meanwhile, back in the project management world…

For managers in charge of government projects, such transactions tend to fall into the second-party category. Our government customers don’t actually pay for the projects out of their own pockets (“Do you have any idea how many points I get if I pay for that aircraft carrier using my Discover Card?”), but instead act as representatives for the actual buyers, those citizens who pay taxes. They can be counted on to become highly adversarial if the project looks like it’s going to bust its approved budget, but will usually be okay with the cost as long as that does not happen. Generally speaking, the customer will not be the actual person using or consuming the final product or service, but they are typically very closely related to those who do, and can be counted on to be extremely interested in the quality of the product. So, if the cost doesn’t mean as much as the delivered product, what can we expect our government customers to demand from their project managers? If you said “as much quality as they can wring out of the approved budget,” go to the head of the class. Because of this effect, the management effect most often associated with government representatives or PMs is scope creep (as opposed to a pathological focus on efficiency), since they seek the best value for their (somewhat fixed) budget.

Conversely, contractor PMs are responsible for delivering a certain level of quality at a (somewhat fixed) budget, so their worries are two-fold: if they believe that the product or service being delivered is consistent with the quality standards depicted in the Statement of Work (SOW), but the customer disagrees, then a lot of churn can be expected in the execution of the project. The other issue is whether or not the project can be delivered on-time, on-budget. If it can, then everybody has a good day. But, if it can’t, the contractor PM must determine if the fault is with the efficiency or performance of the project team, or if the government PM has added scope informally, i.e. engaged in scope creep. If they have, and will admit to it, then a Baseline Change Proposal fixes the problem. But, if they have engaged in adding scope informally without owning up to it, or if the problem lies with the project team’s performance, the only available remedy is to try to tap any reserves (e.g., Management Reserve, or Contingency) that may be available. If no reserves are available, well, then everybody has a bad day.

So, here’s the central focus of optimal government project management: well-defined scope. For the government PM, poorly-defined scope could allow an unscrupulous contractor to deliver sub-standard quality while making a strong claim to having fulfilled the vague terms of the SOW and, therefore, to the whole of the available budget. From the contractor’s point of view, poorly-defined scope could allow opportunistic government PMs to informally add upgraded quality or reliability standards that the contractor hadn’t planned or budgeted for, leading (almost automatically) to the overrun condition discussed in the previous paragraph.

It would all be much simpler if we could just give Naval Officers Discover Cards with really large limits…


Posted on: November 20, 2017 08:59 PM | Permalink | Comments (10)

More Likely Than…?

According to The Daily Beast, there are a whole lot of things that are more likely to happen to you than winning the lottery, specifically, the American lottery MegaMillions (Note to my readers: I’m not shilling for MegaMillions, I’m just using them to point out the folly of Gaussian Curve aficionados). How these odds are calculated is not provided, but the following are supposedly more likely than winning a jackpot:

  • Death by vending machine
  • Airline-related terrorist attack
  • Having identical quadruplets
  • Becoming President of the United States
  • Dying in an asteroid apocalypse[i]

…among others. Keeping with my deep-seated skepticism about anything statisticians assert, I did a little digging on my own. Consider:

  • Since MegaMillions began in 2002, there have been an average of 14.7 jackpot winners per year.
  • Death by vending machine: 2–3 per year[ii]. This appears to be less likely.
  • Airline-related terrorist attack: none in the United States since 11 September 2001. Ditto.
  • Having identical quadruplets: I couldn’t find definitive numbers for the United States, but experts estimate there are currently 50 sets worldwide[iii]. In order for this to be more likely than winning the lottery, the sample set would have to be only 3.57 years long, and that’s worldwide. I’m calling this less likely, as well.
  • Becoming President of the United States: 1 every 4 years, at most. I mean, seriously, how does anybody conclude that something that happens 0.25 times per year is more likely than something that happens more than 14 times per year?
  • Dying in an asteroid apocalypse: zero incidents in recorded history. Compared to 219 MegaMillions winners total, I’m thinking that the whole asteroid apocalypse thing is just a bit less likely than winning the lottery.

Again, I have no idea how these odds were computed, but verifiable performance reveals that these computations were, well, wrong. I mean, seriously – death due to asteroid apocalypse is more likely than something that’s happened 219 times since 2002? Do these people even read what they’re asserting?
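The comparison above amounts to nothing more than comparing observed annual frequencies. A quick sketch, using only the figures cited in this post (the per-year conversions are my own rough approximations):

```python
# Comparing observed annual frequencies, per the figures cited above.
# "More likely" here simply means "observed more often per year"; all
# counts are the ones quoted in the text, roughly annualized by me.

jackpots_per_year = 219 / (2017 - 2002)   # MegaMillions winners, ~14.6/yr

claims = {
    "death by vending machine": 2.5,       # midpoint of the 2-3/yr figure
    "airline terrorist attack (US)": 0.0,  # none in the US since 2001
    "identical quadruplets": 50 / 15,      # ~50 sets worldwide, same window
    "becoming US President": 0.25,         # at most 1 every 4 years
    "asteroid apocalypse death": 0.0,      # zero in recorded history
}

for event, freq in claims.items():
    verdict = "more" if freq > jackpots_per_year else "less"
    print(f"{event}: {verdict} likely ({freq:.2f}/yr vs {jackpots_per_year:.1f}/yr)")
```

Every one of the five comes out less frequent than a jackpot, which is the whole point.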

I think the reason that The Daily Beast’s published assertions were wrong has to do with the nature of calculating possible future outcomes. A “fair” coin – one that’s perfectly balanced, I guess – when flipped, will come up heads half the time, and tails the other half. Even here, though, I have to wonder: how hard could it be for a patient person to learn to flip a coin in such a way that it lands the same side up as when it was positioned for flipping? I imagine it wouldn’t be easy, but it probably isn’t impossible. And, if this, the ultimate in binary-outcome randomness, can be influenced, what scenario can’t be?

Meanwhile, Back In The Project Management World…

As I’ve maintained for years now, the future cannot be quantified, therefore the optimal project management strategy for any given situation cannot be calculated. The projected outcome from individual decisions may (or, realistically, may not) be estimable, but that’s not the same thing as calculating an optimal strategy. Even within the confines of game theory, the strategies that lead to maximizing a given participant’s payoff rarely translate to supposedly analogous real-life situations. But that doesn’t stop risk management experts from claiming that they can do so, no siree. If you think about it, though, there’s really nothing to stop them. When they are transparently wrong, they can always point to the occurrence that upended their “analysis” as being from the realm of the “unknown unknowns.” Convenient, huh?

Now, I am aware that no risk management specialist worth the title claims to be able to predict the future with usable certainty. They will maintain that they are simply identifying risks that may negatively impact their organizations – right up until they insist that “opportunity management,” or “upside risk,” is part and parcel of risk management, even though those senses of the word “risk” appear in no reputable dictionary.

So, to the two or three remaining risk management specialists who read this blog, I have some advice for you. When generating an estimate of the things that are, say, more likely to happen to a person than winning the lottery, you might want to start with one critical data point: how many people tend to win the lottery? And, if it averages out to 14.7 people per year, seek out statistics on things that happen more often than that. Does this sound like common sense? Well, it is. Does it also sound simple? It most certainly is not, for it represents a paradigm shift in the way risk management is conducted. The implication is that the future, to the extent that it can be grasped in any meaningful way, is best projected based on past performance. Project-ruining occurrences tend to happen to project teams that historically perform poorly, no matter the identified and “quantified” cause. No amount of hyper-ventilating analysis based on Gaussian curves can change that.

Now, if you’ll excuse me, I think I'll go buy a lottery ticket.






[i] Retrieved from, 11 November 2017, 13:46 MST.

[ii] Retrieved from on 13 November 2017, 18:45 MST.

[iii] Retrieved from on 13 November 2017, 18:46 MST.

Posted on: November 13, 2017 10:03 PM | Permalink | Comments (8)

PM Technology Is Advancing – Are We?

From an article entitled “Less Work For Mother”:

In the early 1960s, when synthetic no-iron fabrics were introduced, the size of the household laundry load increased again; shirts and skirts, sheets and blouses that had once been sent out to the dry cleaner or the corner laundry were now being tossed into the household wash basket. By the 1980s the average American housewife, armed now with an automatic washing machine and an automatic dryer, was processing roughly ten times (by weight) the amount of laundry that her mother had been accustomed to. Drudgery had disappeared, but the laundry hadn’t. The average time spent on this chore in 1925 had been 5.8 hours per week; in 1964 it was 6.2.[i]

Meanwhile, back in the PM world of the 1980s…

I’m old enough to remember a time when, for contractors working for the United States Department of Defense, our cost and schedule performance reports would be typed (on an IBM Selectric®, no less) onto a printed form from the Government Printing Office. Of course, all of the calculations for the time-phased budget (Budgeted Cost of Work Scheduled, or BCWS), the Earned Value (Budgeted Cost of Work Performed, or BCWP), and actual costs (Actual Costs of Work Performed, or ACWP), as well as the associated indices and percentages, were done by hand, calculator (no, not the kind you had to hand-crank – don’t be ridiculous) or, on occasion, on one of the new-fangled spreadsheet programs (Lotus 1-2-3® was the favorite, although Quattro® was coming on strong. Microsoft Excel® wouldn’t become the go-to application for another decade[ii].). It was, obviously, somewhat time-consuming, but we consistently delivered our reports on-time, and somehow managed to bring in our projects (mostly) on-time, on-budget.
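For readers who never had the pleasure of computing these by hand: the indices mentioned above reduce to simple ratios. A minimal sketch with hypothetical dollar figures:

```python
# The hand-calculable Earned Value indices described above.
# All dollar figures are hypothetical.

bcws = 100_000.0  # Budgeted Cost of Work Scheduled (time-phased budget)
bcwp = 90_000.0   # Budgeted Cost of Work Performed (Earned Value)
acwp = 110_000.0  # Actual Cost of Work Performed

spi = bcwp / bcws   # Schedule Performance Index: < 1.0 means behind schedule
cpi = bcwp / acwp   # Cost Performance Index: < 1.0 means over budget

print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")  # SPI = 0.90, CPI = 0.82
```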

Fast forward to the 21st Century, and the very idea of using a typewriter is absurd, much less for generating formal project performance reports. Both desktop computers and Project Management software applications have advanced significantly, so we’re all processing more and better PM information, right? Well, much like the change in clothes-washing technology meant more clothing was being cleaned, the new hardware and software capabilities are processing much more data.

But is it better information?

Meanwhile, back in the PM world of 2017…

Certain guidance and procedure-generating organizations that I won’t name keep sending out missives on the subject of what constitutes more advanced PM information, much of it silliness. A few examples include:

  • In the Critical Path Schedule, one of them actually proposes a limit (a very low limit, at that) for the number of activities that can be logically linked via a start-to-start relationship. What if the nature of the project work is such that that’s the most appropriate way of logically linking those activities? Tough.
  • A similar limit is “suggested” (read: mandated) for finish-to-finish relationships.
  • There’s also a lot of churn on the amount of float in the schedule network, again regardless of the match of the specific project’s work with the logical links among its activities.

Each of these limits is asserted on the grounds that project schedules with multiple start-to-start or finish-to-finish relationships, or with high-float elements, are supposedly vulnerable to hiding or camouflaging negative variances. I have never – and I do mean never – seen any hard data supporting this assertion. It always seems to be based on hypothetical scenarios.
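For what it’s worth, “float” itself is simple to compute. Here is a minimal total-float calculation for a small, entirely hypothetical network of finish-to-start activities (my own toy example, not anything drawn from the guidance in question):

```python
# Total float via the classic forward/backward pass over a tiny network.
# Activities, durations, and dependencies are all hypothetical.

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: earliest start (es) and earliest finish (ef)
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

# Backward pass: latest start (ls) and latest finish (lf)
project_end = max(ef.values())
succs = {a: [b for b in order if a in preds[b]] for a in order}
ls, lf = {}, {}
for a in reversed(order):
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

# Total float: how long an activity can slip without delaying the project.
# Zero-float activities (A, C, D here) form the critical path.
total_float = {a: ls[a] - es[a] for a in order}
print(total_float)  # {'A': 0, 'B': 2, 'C': 0, 'D': 0}
```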

Then there’s the ultimate data collecting and crunching waste, that of comparing the project’s time-phased budget to its actual costs. There was a time when anybody asserting that this is an appropriate (much less necessary) PM analysis technique would have been immediately recognized as a hack. But now, some guidance-generating organizations are actually calling for it to be performed, and even amped up: this analysis must now be executed at a level of granularity all the way down to the line-item level in the original basis of estimate as it compares to the line-item level in the general ledger. The reason why this analysis is provably useless at higher levels, but suddenly takes on the patina of legitimacy when it’s performed at a sufficiently detailed level, is not provided.

Now back to the 1980s…

It’s analogous to our 1930s American housewife, having access to 1980s clothes-washing technology, electing to run the exact same clothes through the washer and dryer, and then simply loading them back into the washer, without expanding the amount of laundry actually being cleaned. In order for the act of cleaning clothes to have any value whatsoever, the clothes in question must actually need cleaning. Similarly, in order for all of this advanced data processing capability to have any significance whatsoever, the resulting analysis must be relevant. Using advanced data processing capacity to deliver irrelevant data isn’t advancing Project Management science. It is, in fact, wasting time and energy while so many of the existing techniques that have been shown to work (like the calculated Estimate at Completion) are being elbowed aside.
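For reference, the calculated Estimate at Completion mentioned above, in its most common form: actuals to date, plus the remaining budgeted work scaled by cost efficiency. The figures are hypothetical.

```python
# Calculated EAC: EAC = ACWP + (BAC - BCWP) / CPI.
# A project running at 80% cost efficiency should expect the remaining
# work to cost proportionally more. Dollar figures are hypothetical.

bac = 500_000.0    # Budget at Completion (total budget)
bcwp = 200_000.0   # Earned Value to date
acwp = 250_000.0   # actual costs to date

cpi = bcwp / acwp                    # 0.8: getting $0.80 of work per $1.00
eac = acwp + (bac - bcwp) / cpi      # 250,000 + 300,000 / 0.8
print(eac)  # 625000.0
```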

Rather than embrace these “advanced” techniques, we would be better served making more use of the IBM Selectric®.




[i] Retrieved from, November 3, 2017, 20:36 MDT.

[ii] Microsoft Excel. (2017, November 1). In Wikipedia, The Free Encyclopedia. Retrieved 02:43, November 4, 2017, from

Posted on: November 06, 2017 10:09 PM | Permalink | Comments (5)

"I never thought much of the courage of a lion-tamer. Inside the cage he is at least safe from people."

- George Bernard Shaw