Game Theory in Management

Modelling Business Decisions and their Consequences


That’s Not How You Play The Game

Since it appears that I may be the only risk management contrarian here at ProjectManagement.com, I can’t let my readers down when it comes to challenging and refuting the fevered rantings (make that proffered hypotheses) of the other side. The last two postings have been pretty strong, in my humble opinion. This one will be stronger still, with my posting for August 29th delivering the ultimate risk management takedown. For the penultimate RM refutation, let’s turn to game theory.

Game theory is predicated on the idea that many real-world circumstances and situations are analogous to more controlled situations and circumstances, where the behavior of participants can be observed, quantified, and evaluated, with the superior strategies or tactics becoming discernible. Simply transfer those strategies and tactics to their real-world counterparts, and you’ve increased the odds of success, right?

Well, not always. As I discussed in my second book (after which, incidentally, this blog is named), a favorite “game” of theorists is the Ultimatum Game. Here, two strangers, Player A and Player B, are approached by a third party holding a sum of money (typically $100 USD), who informs them that the money will be given to the two of them, but if and only if Player A, on the first attempt, proposes a split of the money that is acceptable to Player B. If Player A proposes a split that is unacceptable to Player B, then neither player receives anything.

The pre-game predictions of the theorists were fascinating. Their conventional wisdom held that, in order for Player A to maximize their payoff, the optimal strategy would be to propose that Player A receive $99, with Player B receiving just $1, on the grounds that, given the rules of the game, Player B would rather receive an unearned $1 than nothing at all, which would be the outcome should Player B reject Player A’s proposal. But something very interesting happened on the way to Player A depositing their $99 – in real-life iterations of the Ultimatum Game, Player B almost never accepted that division. There were, in fact, many observed instances of people rejecting 50-50 splits, or even 40-60 splits.

In the investigation as to why the predicted outcome so dramatically deviated from the actual ones, several factors came into play that hadn’t been anticipated. For those who rejected the $99-to-$1 split, common responses included a sense of resentment at being taken advantage of by Player A; alternatively, perhaps Player B simply didn’t believe that Player A was ninety-nine times more deserving of unearned money than Player B. Some individuals came from cultures that abhorred even the sense of being in debt to other people, and therefore rejected the 40-60 division in order to avoid being perceived as being in debt to Player A. The reasons for Player B to reject the division were myriad, but one thing was fascinatingly consistent: the 99-to-1 division, predicted to be the winning strategy, was the approach most often rejected. Faced with such a dramatic refutation of their calculated “best” strategy, many game theorists simply attributed it to “cultural” factors which, obviously, could not be quantified or evaluated in deriving a superior strategy in the Ultimatum Game.
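To make the point concrete, here is a minimal sketch in Python. The acceptance model is entirely hypothetical – none of the actual experimental data is encoded here – but the moment Player B’s odds of accepting are allowed to rise with the share offered, the theorists’ “optimal” 99-to-1 proposal becomes the worst strategy on the board:

```python
# A minimal sketch, under a purely hypothetical acceptance model: Player B's
# probability of accepting rises with the share offered. Invented for
# illustration only; no experimental data is encoded here.

POT = 100.0  # the $100 USD on offer

def p_accept(offer_to_b: float) -> float:
    """Hypothetical: B almost never accepts $1, almost always accepts $40+."""
    return min(1.0, offer_to_b / 40.0)

def expected_payoff_a(offer_to_b: float) -> float:
    """Player A's expected take: A's share, weighted by B's odds of accepting."""
    return (POT - offer_to_b) * p_accept(offer_to_b)

for offer in (1, 10, 20, 30, 40, 50):
    print(f"Offer B ${offer:>2}: A expects ${expected_payoff_a(offer):6.2f}")

# Under this model the "rational" $1 offer nets Player A about $2.48 on
# average, while a 60-40 split nets $60 – the opposite of the prediction.
```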

The parallels here with current risk management theory are striking. As I have often claimed in this blog (and in columns, keynotes, books…), there are simply too many factors involved in pursuing project goals to be able to capture, much less quantify, the data needed to calculate the odds and impact of future project-impacting events, much less calculate an optimal strategy for dealing with them. Compared to the number of decisions that customers, stakeholders, and the project team can and do make, the choices presented to the players of the Ultimatum Game are profoundly limited – one choice each – and even those ended up being two choices too many to quantify and calculate into a winning strategy. But, just as the game theorists could punt to incalculable “cultural factors,” so, too, can risk managers point to the existence of “unknown unknowns” when their techniques fail to deliver anything resembling a usable information stream. Convenient, no?

Keep in mind, then: if you are ever asked to participate in a round of the Ultimatum Game, don’t do as the so-called experts recommend and propose the 99-to-1 split – it won’t work. Similarly, if you are playing the Project Management game and your risk manager recommends a strategy based on his statistical analysis, don’t do it, because… well, you know.

Posted on: August 22, 2016 09:55 PM | Permalink | Comments (0)

Waiting For The Service To End

One of the definitions of faith is “firm belief in something for which there is no proof.”[i] It’s a well-known fact that many people, including project managers, believe that modern risk management techniques are a valuable addition to the information streams available to them. But is there any proof of this?

I wasn’t always a risk management skeptic, you know. In fact, at one time I was such a true believer that I wrote some software that pulled project data from a popular critical-path methodology package and performed a single-tiered decision-tree analysis of a project’s WBS (at the reporting level) in order to calculate the contingency budget. Its core formula looked like this:

Cb = ( (TA1B × OOTA1) + (TA2B × OOTA2) + … + (TAnB × OOTAn) ) − PMB

Where Cb is the contingency budget amount, PMB is the Performance Measurement Baseline, TA1B is task alternative #1’s budget, OOTA1 is its associated odds of occurrence, TA2B is task alternative #2’s budget, and so on. The task alternatives’ budgets were estimates of the impacts of various things that could happen to that particular activity. Take the sum of these probability-weighted outcomes, subtract out the original cost baseline, and you have a risk-based estimate of the amount that should be set aside for contingency.

Of course, to make it usable we had to auto-insert some assumptions. Since, in a normal distribution, 68% of all data points fall within one standard deviation of the mean, we assigned those odds to each task’s original estimate. The most likely alternative outcome was given nominal odds of occurrence of 27%, since that is roughly the amount covered by the next standard deviation out. The fairly unlikely scenarios (the worst cases) got the next 4.7%, and the extreme outliers were – again, as preliminary placeholders – assigned the last 0.3%. When interviewing the Control Account Managers to collect their insights as to the nature of alternative task outcomes and their impacts, we would always point out that the odds initially assigned were just boilerplate, and that they should feel free to alter them based on their expertise and experience. Interestingly enough, they rarely did.
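For those who want to see the mechanics, here is a reconstruction of that core calculation. To be clear, this is not the original software: the function name and task figures are invented, and the odds are simply the standard-deviation boilerplate described above.

```python
# A reconstruction (not the original code) of the single-tier decision-tree
# contingency calculation. Task data are hypothetical; the default odds are
# the standard-deviation boilerplate: 68% / 27% / 4.7% / 0.3%.

# Odds for: original estimate, most likely alternative, worst case, extreme outlier
DEFAULT_ODDS = (0.68, 0.27, 0.047, 0.003)

def contingency_budget(tasks):
    """tasks: list of (baseline_budget, [alt1, alt2, alt3]) per reporting-level WBS element."""
    pmb = sum(baseline for baseline, _ in tasks)  # Performance Measurement Baseline
    expected = sum(
        sum(outcome * odds for outcome, odds in zip([baseline] + alts, DEFAULT_ODDS))
        for baseline, alts in tasks
    )
    return expected - pmb  # Cb: probability-weighted total, less the baseline

# Hypothetical tasks: (original estimate, [most likely alt, worst case, outlier])
tasks = [
    (100_000, [120_000, 150_000, 300_000]),
    (50_000, [60_000, 90_000, 200_000]),
]
print(f"Contingency budget: ${contingency_budget(tasks):,.0f}")  # -> $13,380
```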

Something else really interesting happened during these data collection interviews. When asked about alternative outcomes to the planned tasks, the CAMs were clearly engaging their imaginations, and could admittedly only speculate as to the cost or duration impacts should those alternative endings come about. For those risk management fans who believe that Monte Carlo analysis provides a more robust (or even valid!) approach, something very similar happens there, too. Once the “most likely,” “best case,” and “worst case” scenarios are established, almost all Monte Carlo software packages for project management invite the analyst to select a distribution curve – usually a leaning bell – to serve as the bounding parameters for the “randomly-generated” alternatives’ cost and schedule impacts. But then the exact same type of processing proceeds, just with however many “randomly-generated” additional data points.
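For comparison, here is an equally minimal sketch of that Monte Carlo variant, with the standard library’s triangular distribution standing in for the “leaning bell.” The task triples and the 80% confidence cut are, once again, illustrative assumptions rather than anyone’s actual methodology:

```python
# A minimal sketch of the Monte Carlo approach described above. The "leaning
# bell" is approximated with a triangular distribution; task figures and the
# chosen confidence level are illustrative assumptions.

import random

def simulate_totals(tasks, trials=10_000):
    """tasks: (best_case, most_likely, worst_case) cost triples per activity."""
    return sorted(
        sum(random.triangular(low, high, mode) for low, mode, high in tasks)
        for _ in range(trials)
    )

tasks = [(90_000, 100_000, 160_000), (45_000, 50_000, 95_000)]
totals = simulate_totals(tasks)
pmb = 150_000  # sum of the most-likely (baseline) estimates
p80 = totals[int(0.80 * len(totals))]  # 80th-percentile simulated total cost
print(f"Contingency at the 80% confidence level: ${p80 - pmb:,.0f}")

# However many "randomly-generated" points are drawn, the inputs are still
# the same three speculative scenarios per task.
```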

To state the blindingly obvious, this is not management science. It is an invitation to speculation, tricked out in probability-and-statistics jargon. These speculations are not, repeat not, based on hypothesis, testing, and validation or refutation. Instead, they are based on the projections of the various subject matter experts, or control account managers – in other words, they are largely faith-based, predicated on the idea that their current scope is sufficiently analogous to their previous work to generate an informed guess. Note also that most risk managers will insist that the risk analysis should continue throughout the project’s life-cycle. Since alternatives that could impact the project are consistently popping up, this is not surprising; but it does point to a lack of finality that the project’s baselines don’t share. Scope, Cost, and Schedule, at some point, are all “frozen,” or, to one degree or another, contractually binding. Not risk, no siree. It’s almost as if contemporary risk analysis is more about dealing with an unquantifiable future than it is about processing actual data into usable information – the latter being an instantly recognizable aspect of management science, the former more in line with, oh, I don’t know, faith?

As long as I’ve gone this far, I may as well state the obvious conclusion: any system of beliefs that is largely faith-based is NOT management science, and far more closely resembles a religion. I’ve been hearing the faith-based message of the risk managers for decades now, and I’m more than a little sick of it. I’m ready for this service to end.


[i] Merriam-Webster on-line, http://www.merriam-webster.com/dictionary/faith, retrieved August 13, 2016, 14:18 MDT.

Posted on: August 15, 2016 10:59 PM | Permalink | Comments (2)

The Odds of Being Blue

Hatfield’s Uncontestable Management Theory #9 holds that any notion that can’t be clearly articulated, both as to what it is and as to what it is not, cannot be taken seriously, or even legitimately evaluated. An old business axiom (not one of mine) holds that what cannot be measured cannot be managed; similarly, inchoate ideas aren’t even weak hypotheses, much less usable management science.

Which brings us back to our friends, the risk managers, and their ongoing assault on the English language. Last week I took them to task over this whole risk-equals-opportunity nonsense, but that at least had the characteristics of simply trying to manipulate the language in order to support an otherwise silly notion. What I wish to address next represents a full frontal assault.

A principal aspect of risk analysis theory (make that suspect assumptions) is that future events can be categorized as either “known unknowns” or “unknown unknowns.” I can’t help but wonder what the person(s) who came up with this nomenclature was thinking, since these categories are anything but intuitively obvious. The future is unknown, and unknowable by mere mortals. Ah, but the whole risk management facade comes crashing down in a heckuva hurry if this fact becomes widely accepted. They, therefore, came up with these two categories, which, like the risk/opportunity conflation, are self-evidently contradictory – attempts to bully those who know the plain meaning of the terms notwithstanding.

Which brings us back to the marginal concept of “unknown unknowns.” What are these? It’s not overly cynical to say that these are events that the risk analysts did not anticipate, and for which they therefore could estimate neither odds of occurrence nor cost/schedule impact. Pretty nifty, no, to be able to completely misjudge a management situation and then pass off the failure by categorizing it as an “unknown unknown”?

Then we have the issue of the phrases themselves. The prefix “un,” in the English language, means “not.” By definition, a thing cannot be both “N” and “not N,” any more than something can be both “known” and “unknown.” The very fact that risk managers would attempt such linguistic legerdemain is itself stark evidence of the poorly-thought-out nature of their ideas. As for this phrase’s cousin, it’s simply repetitive. The future is unknown. Saying that a specific future occurrence is an “unknown unknown” is sophomoric repetition; it’s unknown – that’s all that needs to be said about it.

Imagine a project to paint a house blue. As the PM sends his assistant to purchase the selected paint, he finishes erecting the scaffolding and is approached by his risk manager.

“I want to talk to you about your project’s scope and risk register,” he begins.

“Umm, okay, whaddaya got?”

“Based on previous projects’ performance, I think you should prepare for the chance that you won’t paint this house blue.”

“Why not?”

“Because your previous projects were to paint houses different colors, meaning that we should prepare for the outcome here to be either blue un-blue, or un-blue un-blue.”

“What’s ‘blue un-blue’?”

“More like a green.”

“That’s not blue.”

“It is if you mix it with yellow. That’s why I called it ‘blue un-blue.’”

“But my assistant has the exact color formula.”

“Then there are the un-blue un-blues, such as red.”

“We’re not going to paint the house red.”

“But you have to prepare for the chance of that happening!”

“It won’t ‘happen.’ If my assistant comes back with the wrong color – any shade of ‘un-blue’ – I will send him back to get the right color.”

“But we should plan for the cost or delays associated with that happening.”

“No, we shouldn’t. How much am I paying you, again?”

And our PM didn’t even ask about the odds of painting the house un-blue un-blue.

Posted on: August 08, 2016 09:42 PM | Permalink | Comments (0)

An Opportunity (heh, heh) To Define Risk

One of the easiest aspects of modern risk management theory to blast to smithereens is the excessively irksome notion that risk management somehow includes opportunity management. (“How easy?” you ask. About as easy as betting on the Sun rising in the East tomorrow.) The evidence for such an assertion is non-existent. For example, consider the Oxford English Dictionary definition of “risk”:

  • noun 1 a situation involving exposure to danger. 2 the possibility that something unpleasant will happen. 3 a person or thing causing a risk or regarded in relation to risk: a fire risk.
  • verb 1 expose to danger or loss. 2 act in such a way as to incur the risk of. 3 incur risk by engaging in (an action).
  • PHRASES at one’s (own) risk taking responsibility for one’s own safety or possessions. run (or take) a risk (or risks) act in such a way as to expose oneself to danger.

Anybody see anything in there that represents a favorable time or set of circumstances for doing something?  Now let’s take a look at the definition of “opportunity”:

  • noun (pl. opportunities) 1 a favourable time or set of circumstances for doing something. 2 a career opening: job opportunities.

Anybody see anything here that even remotely implies exposure to danger? Me neither.

In order to further the assertion that risk and opportunity are similar – a premise that is self-evidently false – the argument must morph into “Are opportunity management analysis techniques similar to risk analysis techniques?” Again, the answer is “no.” While both share an attempt to quantify an unknowable future, they differ in what is unknown about that future. For example, risk responses include:

  • mitigation
  • deflection
  • contingency

And yet no one refers to mitigating opportunity, deflecting opportunity, or preparing a contingency in the event an opportunity presents itself, much less the odds of an opportunity arising. Nominally, opportunity is managed using tools such as SWOT analysis; yet even here the opposite nature of risk and opportunity is clear. Just as Strengths are assessed as the opposites of Weaknesses, so too are Opportunities captured as the opposing piece to Threats, or risks.

What’s actually going on when the risk management aficionados assert an equivalency between such opposite terms is that they need some authority other than the dictionary publishers to push their poorly-developed ideas, since the notion that risk and opportunity management are closely related is self-evidently false. However, since the risk managers consistently claim to be able to, in some capacity, quantify the future, it simply wouldn’t do to capture only the bad stuff. In order to support such a claim, they have to be able to foresee the good possibilities as well; otherwise their techniques would be even more representative of highly questionable management science hypotheses than they already are.

As I have oft stated in this blog, the future cannot be quantified with any precision, positive or negative, opportunity or risk. Doubling down on the very idea that they can do so by attempting to force a re-definition of these common terms (“opportunity” has been around since the 14th century, and “risk” only since around 1661[i] – and about that: how can opportunity be part of risk if “opportunity” was around for more than 200 years prior to “risk”?) can hardly be called advancing the cause of management science.

In fact, that particular tactic more closely resembles something a person who was trying to deceive, rather than illuminate, would attempt.
[i] Definitions of Risk and Opportunity, Merriam-Webster Online, retrieved from http://www.merriam-webster.com/dictionary/risk on 1 August 2016, at 18:53 MDT.

Posted on: August 01, 2016 11:14 PM | Permalink | Comments (5)

Outsource, Or Die

Okay, maybe “die” is a bit strong. But consider: prior to the Ming Dynasty (1368-1644), China was a technologically advanced and expanding world power. However, once the Hongwu Emperor founded the Ming Dynasty, China became a (relatively) closed society. Both trade and diplomatic ties were abruptly curtailed or halted altogether, and the Great Wall was completed in the North, symbolically (if not actually) sealing off China along hundreds of miles. By the time the Qing Dynasty began (1644), the West had attained marked superiority in military, economic, and agricultural technology. Ironically, anti-foreigner sentiments would linger for decades – the Boxer Rebellion (1900), for example, saw hundreds of foreigners killed[i].

Similarly, under the Tokugawa Shogunate, Japan entered into a 220-year-long period of isolation, ended only when Commodore Perry returned to Japan in 1854. Prior to this period of isolation (“Sakoku”[ii]), Japan was relatively advanced in its economics and military technology. But by 1854, with trains commonplace in the West, a gift locomotive from Perry created astonishment among the Japanese people seeing one for the first time.

There are many other examples, but these two illustrate dramatically the dangers to organizations – not just nation-states – that attempt to create a narrative or environment where outside or alien ideas or concepts are dissuaded or prevented from being adopted. I believe this effect is scalable. Not only can isolationism dramatically affect entire societies (usually for the worse), the same effect can occur on an individual basis. As I have discussed in this blog previously, arrogance is not just off-putting on an interpersonal level – it prevents the person so afflicted from readily evaluating or adopting ideas brought to them by others. The more humble, then, are in a far better position to take advantage of new, novel solutions to problems that the arrogant one won’t even recognize.

In between nation-states and individuals, we have corporations and project teams. How amenable are these to recognizing and adopting new ideas, versus holding tightly to a narrative that they have all the answers going in, and are therefore insulated from the outside realm of ideas? And what are some of the indicators that a specific corporation or project team is leaning towards one side of this scale or the other?

Indicator number one is easy: it’s ProjectManagement.com’s July theme of outsourcing. Project teams especially are composed of people from several (if not many) different disciplines. The larger project teams will almost always include some subcontractors, and it is not at all unusual for very large project teams to include personnel drawn from companies that would otherwise be rivals. Beware, therefore, the large project team composed entirely of people from one organization. Such teams are almost certainly allowing business model pathologies to distort the optimal approach to completing the project’s scope on-time, on-budget. A secondary clue of this effect occurs when there is a notable, yet artificial, hierarchy among the members of the project team based on their membership in the various organizations within said team.

Indicator number two is a bit more difficult to ascertain, but is a dead giveaway nevertheless. It’s cronyism/nepotism. These two very similar business model pathologies are clear indicators that the organization that is indulging them has walked away from (if not abandoned altogether) a merit-based structure for evaluating and placing talented personnel in their most appropriate roles on the project team. In a way, cronyism is the anti-outsourcing alternative. Instead of giving work to those outside the home organization because the recipients can do the job better, faster, or cheaper, the organization instead gives the work to the recipient simply because they are internally favored, with little or no true consideration of performance.

Finally, if you, yourself, are not an M.D. and you don’t outsource your own medical care, you will, in all probability, die sooner than you would otherwise. So this post’s title isn’t really all that outlandish after all.

[i] Retrieved from History of China, http://www.chaos.umd.edu/history/toc.html, on July 23, 2016.

[ii] Sakoku. (2016, July 5). In Wikipedia, The Free Encyclopedia. Retrieved 20:06, July 23, 2016, from https://en.wikipedia.org/w/index.php?title=Sakoku&oldid=728508502

Posted on: July 25, 2016 10:13 PM | Permalink | Comments (0)