Project Management

Game Theory in Management

Modelling Business Decisions and their Consequences

About this Blog


Recent Posts

Not risk And Resilience, But risk OR Resilience

Debating Relevance With A Robot

Why AI Will Not Take Over PM. Probably.

The Ultimate AI Primer Came From … Reader’s Digest!

On Why PMs Get Stymied (Part II)

Not risk And Resilience, But risk OR Resilience

I can assure GTIM Nation that I am not bribing Cameron to set monthly themes in areas where I get to climb up on my favorite soap boxes, even though the current theme of risk (no initial caps) and Resilience would seem to argue to the contrary. Back when I was writing the Variance Threshold column for PM Network magazine, I created something of a PM firestorm in 2007 with a piece entitled “PMBOK, Shmimbok,” in which I walked through the (then-) sections of the PMBOK Guide® and commented on whether or not I thought they should be included. After arguing for the inclusion of Scope, Cost, and Schedule, I (ironically) also supported the risk management (no initial caps) section, on the grounds that risk events were being quantified in terms of cost or schedule impact on the baseline. And that’s when the fireworks started.

Some risk management specialists took extreme umbrage at my failure to identify “opportunity” as a “type” of risk and to assert that it could be managed similarly. One or two of them sent emails to PMI® demanding my firing. My response was to point out that when you look up the word “risk” in Webster’s Third International, the Oxford English Dictionary, or even Wikipedia, none of the definitions includes the word “opportunity.” It was at that point that I began to realize that risk management’s theoretical underpinnings were far weaker than I had previously perceived, and I started to perform thought experiments applying commonly-used risk analysis techniques to historical events. My favorite was the sinking of the Titanic.

Consider that, had a modern-day risk manager with no knowledge of Titanic’s fate been asked to calculate the odds, prior to embarking, that she would strike an iceberg and sink, the result would almost certainly have been minuscule. The North Atlantic is very big. The iceberg that sank the Titanic was not. And even if such a risk manager were to calculate the odds of striking a 200-to-400-foot-wide obstacle[i] in the middle of the 16,000,000-square-mile North Atlantic, the ship’s ability to navigate around mid-ocean obstacles would have reduced even that number to almost nothing, doubtless too small to raise concerns.
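As a back-of-the-envelope illustration of just how small that number comes out, here is a naive “swept corridor” sketch. Only the iceberg width and the ocean’s area come from this post; the route length and ship’s beam are assumed figures I’ve supplied for illustration.

```python
# Naive odds that a single, randomly-placed iceberg lies in the ship's path.
# Assumed figures (not from the post): a ~3,000-mile crossing and a ~90-foot beam.
ICEBERG_WIDTH_FT = 400        # upper end of the 200-to-400-foot estimate
SHIP_BEAM_FT = 90             # hypothetical beam, close to Titanic's actual ~92 ft
ROUTE_MILES = 3_000           # hypothetical transatlantic route length
OCEAN_SQ_MILES = 16_000_000   # area of the North Atlantic, per the post

FEET_PER_MILE = 5_280

# The ship "sweeps" a corridor as wide as the iceberg plus her own beam.
corridor_width_miles = (ICEBERG_WIDTH_FT + SHIP_BEAM_FT) / FEET_PER_MILE
swept_area = corridor_width_miles * ROUTE_MILES

p_collision = swept_area / OCEAN_SQ_MILES
print(f"{p_collision:.2e}")   # on the order of 10**-5
```

A couple of chances in 100,000, and that is before crediting the crew’s ability to steer around anything spotted: exactly the kind of figure a pre-sailing risk register would wave off.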

So here is where we get into some really interesting risk management word play. The calculated odds of Titanic hitting an iceberg and sinking, prior to her leaving Queenstown, would have been extremely remote. Once she had hit, and five of her watertight sections began flooding, the “odds” would have moved all the way to 1.00, or certainty. Of course, the odds all along were 1.00, since that is what actually happened, but there would have been no reasonable way for our intrepid risk manager to have known that prior to the great ship leaving Ireland.

Now let’s suppose our risk manager had near-complete knowledge of all of the relevant parameters, all prior to her leaving Queenstown: the existence of the iceberg, its course and speed (uneven as it must have been), Titanic’s precise location at every point along the trip, the lack of binoculars for the lookouts, the data on her poor turning behavior and her inability to withstand damage to more than four watertight compartments before being doomed, the precise reactions and behaviors of the crew, etc., etc. With such knowledge, it may have been possible to calculate the odds of the disaster unfolding as it did to over 50%, informing a life-saving decision to change course and/or slow down prior to 11:35 p.m. on April 14.

Let’s theorize a data scale, with the information available in the real world (which resulted in the decisions that led to the sinking) on one end, and the information that would have had to have been available in order to push the probability meter over 50%, and point to the need for different decisions, on the other. Clearly, that level of accessible knowledge was impossible to attain in 1912, and remains so today and tomorrow. What could be known in 1912, and can be known today and tomorrow, is that the inclusion of watertight decks atop the watertight bulkheads would have made the Titanic more robust, and perfectly capable of surviving the iceberg strike that doomed her.

So, given the choice between heeding the advice of the risk managers (no initial caps) with their odds-of-occurrence assertions, and the insights of the more-robust-design crowd, it’s not a matter of “risk and resilience,” but risk OR resilience.

I know which set I would listen to.


[i] The estimated size of the iceberg that Titanic hit, retrieved from on September 22, 2023, 18:36 MDT.

Posted on: September 26, 2023 11:28 PM | Permalink | Comments (1)

Debating Relevance With A Robot

For all the Artificial Intelligence (AI) crowd’s attempts to portray it as performing similarly to the ways humans manifest intelligence, the ability to discover meaning is probably going to be the last aspect AI acquires. But there’s a parallel in current PM initiatives, having to do with which information streams are essential, which are nice to have, and which are completely useless. In this respect, let’s compare and contrast the computer’s quest to discover meaning with the PM’s efforts to get a straight answer to questions like “This risk register helps me … exactly how?”

Hatfield’s Incontrovertible Rule of Management #22 reads:

All useful management information has the following three characteristics:

    • It is accurate,
    • It is timely,
    • And it is relevant.

The first two bullets can be objectively measured; the third cannot. The absence of a readily articulable litmus test for which PM-oriented information streams are relevant and which are not has created a loophole in the PM codex large enough to drive a $28 billion[i] (USD) clown car through. While not a complete test in and of itself, one excellent question that can help ascertain which is which is: “Is this information actionable?” In other words, does the subject information stream provide the basis for making a decision about a particular strategy, its timing, or its implementation, or help with the selection of an optimal technical approach from among several options?

Let’s pivot to AI for a moment. Most people have seen some form of AI-generated art. Some of it is compelling; some of it is just creepy. What I believe is critical to keep in mind when it comes to AI-generated art is that no computer knows what color is, at least not the way humans do. When a computer generates a graphical image, it’s not selecting a color off of a palette. It’s assigning the binary code sequence of what we humans perceive as color, and placing it in a pixel that occupies a specific coordinate in an X-Y array. Depending on the extent to which the program it’s running allows for guided (or even random) variance in color selection and placement, the results can be derivatives of already-existing works, completely random, or anywhere in between. If the outcome is attractive, it’s considered an AI success. If not, well, it’s usually relegated to the let’s-not-do-that-again bin, and a new iteration is initiated.
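Here is a minimal sketch of that pixel-by-pixel process. The dimensions, the base color, and the amount of jitter are all arbitrary choices of mine for illustration; the point is that “color” is just three numbers placed at an (x, y) coordinate, with some guided variance.

```python
import random

WIDTH, HEIGHT = 4, 4
base = (70, 130, 180)   # what a human would call steel blue; to the code, just numbers

def vary(color, jitter, rng):
    """Guided variance: nudge each channel, clamped to the valid 0-255 range."""
    return tuple(max(0, min(255, ch + rng.randint(-jitter, jitter))) for ch in color)

rng = random.Random(42)
# The "image" is nothing but an X-Y array of those number triples.
image = [[vary(base, 40, rng) for x in range(WIDTH)] for y in range(HEIGHT)]
```

Whether the resulting grid is attractive or creepy is a judgment the program itself cannot make.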

Meanwhile, Back In The Project Management World…

There are lots of analysts laying claim to the term “data scientist,” and many of them are swimming around in the PM world. But the litmus test for which information streams are relevant in that PM world has to be very different from the test for which AI-generated pieces of art are attractive. Art is subjective – the relevance of PM information streams, well, (ahem) is also subjective, but it shouldn’t be. The act of collecting data, using some methodology to convert it into usable information, and presenting that information in a way that decision-makers can use isn’t usually cheap or easy, at least not for the timely, accurate, and relevant stuff. The Director of a start-up PMO who spends her money and time developing a robust Quality Assurance capability, complete with fishbone diagrams and five-whys questionnaires, will almost certainly be out-performed by the one who uses a limited budget to create an Earned Value-based cost and schedule performance measurement system that can actually be implemented across the portfolio. Placement of the competing PM-oriented information streams along the scale between the extremes of completely superfluous and essential-for-PMO-success may be the most important strategy decision PMOs everywhere face.
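For the record, the core Earned Value indices that second PMO Director would be buying are straightforward to compute. This is a sketch using the standard CPI and SPI formulas with invented project figures:

```python
def performance_indices(ev, ac, pv):
    """Standard Earned Value indices: CPI = EV / AC, SPI = EV / PV.
    Values below 1.0 signal a cost overrun or a schedule slip, respectively."""
    return ev / ac, ev / pv

# Hypothetical project: $80K of work performed, $100K spent, $90K planned to date.
cpi, spi = performance_indices(ev=80_000, ac=100_000, pv=90_000)
print(cpi, round(spi, 3))   # 0.8 and 0.889: over cost and behind schedule
```

Both numbers pass the actionability test: a CPI of 0.8 is a direct basis for a decision, which is exactly what makes the information stream relevant.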

Which all leads us back to the role of AI within PM. Can Artificial Intelligence aid in identifying which PM-oriented Management Information Systems (MISs) are relevant, and which are not? Well, yes, but only in the same sense that it can create works of art. Since AI can only “learn” through trial and error, there’s really no way for it to ascertain which information streams produce irrelevant output, much as it has no way to “know” when its artwork is just plain creepy, unless a human sets it straight and modifies the parameters of the simulation(s). Sure, AI could utilize Bayes’ Theorem to try to determine the strength of the connection between projects that have set up, say, a robust Communications Management protocol, and the subsequent successes of those projects, but assigning the values for the prior probability and the marginal probability would almost certainly involve wild estimates.
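To see why those wild estimates matter, here is a small sketch of the Bayes’ Theorem calculation just described. All of the probabilities are invented for illustration; the point is how far the answer swings when one guess is swapped for another.

```python
def posterior(p_evidence_given_h, p_h, p_evidence):
    """Bayes' Theorem: P(H | E) = P(E | H) * P(H) / P(E).
    Here H = "project succeeds", E = "robust Communications Management protocol"."""
    return p_evidence_given_h * p_h / p_evidence

# Same likelihood, two different guesses for the prior P(H) and marginal P(E):
optimist = posterior(p_evidence_given_h=0.8, p_h=0.6, p_evidence=0.5)   # about 0.96
pessimist = posterior(p_evidence_given_h=0.8, p_h=0.3, p_evidence=0.7)  # about 0.34
print(round(optimist, 2), round(pessimist, 2))
```

With nothing but guesswork feeding the prior and the marginal, the “strength of the connection” lands anywhere from near-certainty to a coin flip’s worse half.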

In short, I’m fairly confident that the debate over which PM information streams are relevant and which are superfluous will not be settled any time soon. And AI can’t help here, either.


[i] According to Allied Market Research, this is the size of the risk management (no initial caps) industry forecast for 2027. Retrieved from on September 16, 2023, 20:17 MDT.

Posted on: September 20, 2023 09:39 PM | Permalink | Comments (0)

Why AI Will Not Take Over PM. Probably.

Before I walk away from the theme of Artificial Intelligence, or AI, I want to point out another one of the difficulties it has to overcome before it can take over the world: strategies or tactics that require a specific sequence of decisions or choices to be made correctly in order for the whole scheme to succeed. As I pointed out last week, AI “learns” through trial and error. Like the Hexapawn Robot, if a program employing an aspect of AI comes to a result that has been defined as a failure, it will remove the last decision made prior to the failure from its options, and launch the simulation again.

Now consider the chess tactic known as a “sacrifice.” A player offers the opponent one of his pieces in order to secure a superior position, leading to the capture of even more material from the opponent, or to checkmate. The combinations that include a sacrifice and lead to a forced checkmate (or an irresistibly superior position) unfold as a precise sequence. However, if an AI application were to execute only the sacrifice portion of the combination, it would likely present as a blunder, or failure, since it would appear to be a set of decisions that resulted only in the loss of material. Life in general, and the business world in particular, are filled with these kinds of scenarios, where a given strategy or approach to a challenge becomes nonsensical if its particular tactics are taken out of sequence, or evaluated prior to their intended completion.
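A toy sketch makes the evaluation problem concrete. The material values below are invented: a learner that judges each move in isolation prunes the sacrifice as a failure, while judging the completed sequence reveals the gain.

```python
# Material swings for a hypothetical three-move combination.
# Negative = we give up material on that move, positive = we win material.
sacrifice_line = [-9, +5, +10]   # give up the queen, then win the exchange, then more

def myopic_eval(line):
    """Judge only the first move in isolation, the way naive trial-and-error sees it."""
    return line[0]

def sequence_eval(line):
    """Judge the combination only after it has played out in full."""
    return sum(line)

print(myopic_eval(sacrifice_line))    # -9: looks like a blunder, gets pruned
print(sequence_eval(sacrifice_line))  # +6: the completed line actually wins material
```

A learner that removes the first move of this line after “losing a queen” can never discover that the line wins.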

Then we also have the issue of assigning responsibility, for both successes and failures. This is most often done by evaluating the sequence of events in reverse from the success/failure determination, and assessing the quality of the choices leading to that particular outcome. For example, if a bridge collapses, we don’t blame the bridge, but those who designed, built, and/or maintained it. Depending on the precise nature of the failure, the knowable aspects of the collapse are collected and binned in order to draw reliable conclusions about the nature of the failure, and who specifically is responsible. Often there is more than a single responsible party or event, with catastrophes typically being the result of an entire series of breakdowns. My favorite example, the sinking of the Titanic, had many such breakdowns, including:

  • The lookouts didn’t have binoculars, even though the binoculars were on board. Why not? Because, prior to the beginning of the voyage, the White Star Line had removed the officer who held the key to the locker where the binoculars were stored, and nobody wanted to break into it.
  • More well-known is the fact that there weren’t enough lifeboats on board. Why not? Several reasons, but the two that jump out are (1) there were no regulations in place requiring sufficient lifeboats for all passengers, and (2) with her watertight compartments, the Titanic was believed to be unsinkable, meaning that lifeboats would be largely unnecessary.
  • There are many more, but I’ll wrap with the fact that the aforementioned watertight compartments weren’t topped by watertight decks. After five compartments had been breached by the collision with the iceberg, the ship settled far enough by the head that the water in the compromised sections simply spilled over into the ones aft, leading to her eventual sinking.

In short, very few decisions of import are made in isolation, or by a single person. They almost always impact other people’s decisions or circumstances, in ways that are impossible to foresee, much less quantify, thereby making any template- or algorithm-based solution untenable.

In the Titanic’s case, the review board found, in addition to the too-few-lifeboats problem, that the ship was travelling too fast for the icy conditions, and that its design made it more vulnerable than had been previously thought.[i] Note that, except for the ship’s speed at the time of the collision, none of the other causal factors could be attributed to a single person.

Now tell me that all of these parameters could have been identified and precisely quantified in such a way that an AI app, performing however many iterations of a simulated crossing of the Atlantic, could have suggested a usable alternative strategy. No database could have known the precise location of the iceberg, or the speed that Captain E.J. Smith would select (even with the available ice warnings), or that the fired purser had kept the keys to the locker with the binoculars, and on and on. Without the perspective of history, it’s clearly impossible.

In this respect, AI shares a flaw with the risk managers (no initial caps). It’s simply impossible to know all of the relevant parameters that go into assembling a strategy for attacking complex problems, much less quantify those parameters into an evaluating algorithm that could never fail catastrophically. Sure, AI can “learn” enough for robots to walk, run, or dance, and I’m fairly sure that, one day, they will be driving cars across a busy city at rush hour, and doing so reliably crash-free. But for discovering and executing strategies that require a very specific set of tactics to be employed in a very specific order, like Project Management, I have to believe that those decisions will remain with us humans.

For now.


[i] Retrieved from on September 9, 2023, 21:35 MDT.

Posted on: September 12, 2023 08:18 PM | Permalink | Comments (1)

The Ultimate AI Primer Came From … Reader’s Digest!

As is typical with trends in science (particularly Management Science), Artificial Intelligence, or AI, has generated a lot of attention and material, and a significant portion of it is bogus. Some of the material I’ve seen is straight-up laughable, particularly the idea that AI will end up controlling humans like some silicon-based, unavoidable tyrant. In my next blog I might explore how a PM-specific, AI-based tyranny might manifest (it may not be that different from the current guidance-generating industry), but for now I want to focus on what AI is at its fundamental level, and why I’m not in a hurry to purchase AI-generated-dystopia insurance.

The Reader’s Digest Treasury for Young Readers (Reader’s Digest Association, 1963) is truly a treasure. Published sixty years ago, it’s full of really cool pieces – it was in this book that I first read a Sherlock Holmes story (The Adventure of the Speckled Band) – including brain teasers, puzzles, games, and projects, one of which deals directly with Artificial Intelligence. It’s there on page 176, in an article entitled “How to Play ‘Hexapawn’”[i], with instructions on how to build HER, the Hexapawn Educational Robot. And make no mistake – even in 1963, before personal computers were even conceived of as a practical possibility, HER represented true Artificial Intelligence. Here’s how it works.

Hexapawn is a simplified derivative of chess, played on a three-by-three board populated by three white pawns and three black ones. Only two types of moves are allowed: a pawn may either move one square straight ahead to an unoccupied square, or capture diagonally. There are three ways to win: (1) advancing a pawn to the third row, (2) capturing all of the opponent’s pawns, or (3) placing your opponent in a position where he cannot move. To construct HER, you will need twenty-four matchboxes and some colored beads. On page 177 there’s an illustration of each of the 24 possible scenarios, with black dots representing the black pawns and circles representing the white ones on the nine-square board. Possible moves in each scenario are shown by colored arrows, and HER always moves second.

To construct HER, copy the scenarios from page 177, colored arrows and all, and paste each of them on top of one of the 24 matchboxes. Inside each box, place a black, a blue, a red, and (for just a couple of the scenarios) a green bead. Then make a move, and find the matchbox whose diagram matches the resulting position. Without looking, pull a colored bead from that box, and make the move shown by the matching arrow. Continue until either you or HER has won. If you win, go to the last move/scenario that HER played, and remove that colored bead from the corresponding matchbox, eliminating that move from the available pool. In this way HER “learns.” In the version described in the Treasury, the robot became a perfect player after losing eleven games. And this, GTIM Nation, perfectly and simply illustrates what AI is all about.
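For readers who would rather not hunt down twenty-four matchboxes, here is a sketch of the same bead-pulling scheme in code. It plays HER’s side (moving second, as Black) against a purely random opponent; the board encoding, the random White player, and the resign-on-empty-box shortcut are my own simplifications of Gardner’s design, not a transcription of it.

```python
import random

START = ('B', 'B', 'B', '.', '.', '.', 'W', 'W', 'W')  # 3x3 board, row-major

def moves(board, player):
    """All legal (from, to) squares: one step straight ahead, or a diagonal capture."""
    d = -3 if player == 'W' else 3          # W advances toward row 0, B toward row 2
    opp = 'B' if player == 'W' else 'W'
    out = []
    for i, p in enumerate(board):
        if p != player:
            continue
        f = i + d                           # the square straight ahead
        if 0 <= f < 9 and board[f] == '.':
            out.append((i, f))
        for t in (f - 1, f + 1):            # diagonal captures, same row as f
            if 0 <= t < 9 and t // 3 == f // 3 and board[t] == opp:
                out.append((i, t))
    return out

def apply(board, mv, player):
    b = list(board)
    b[mv[0]], b[mv[1]] = '.', player
    return tuple(b)

boxes = {}   # position with Black to move -> list of remaining "beads" (moves)

def play_game(rng):
    """One game: random White vs. HER (Black). Returns (winner, HER's move history)."""
    board, history = START, []
    while True:
        wm = moves(board, 'W')
        if not wm:                          # White cannot move: HER wins
            return 'B', history
        board = apply(board, rng.choice(wm), 'W')
        if 'W' in board[:3] or 'B' not in board:
            return 'W', history             # far row reached, or every pawn captured
        beads = boxes.setdefault(board, moves(board, 'B'))
        if not beads:                       # empty matchbox: HER resigns
            return 'W', history
        mv = rng.choice(beads)              # pull a bead without looking
        history.append((board, mv))
        board = apply(board, mv, 'B')
        if 'B' in board[6:] or 'W' not in board:
            return 'B', history

def train(n_games, seed=1):
    """Play n_games; after each loss, pull the bead for HER's last move."""
    rng, losses = random.Random(seed), 0
    for _ in range(n_games):
        result, history = play_game(rng)
        if result == 'W' and history:
            pos, mv = history[-1]
            boxes[pos].remove(mv)           # that move is never tried again
            losses += 1
    return losses
```

Each loss permanently removes one bead, so the total number of losses HER can ever suffer is bounded by the number of beads: exactly the mechanism by which the matchbox version becomes unbeatable.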

Consider: a machine can no more “learn” than 24 bead-containing matchboxes can, at least not in the conventional sense. Ultimately, machines can only execute prior instructions, try random actions from a previously-defined set and eliminate the choices that led directly to undesirable outcomes, or perform some combination of the two. In a YouTube video entitled “Open AI Broke Hide and Seek[ii],” the narrator describes how a simple digital version of the children’s game Hide and Seek was set up, with two bots being the “hiders” and two being the “seekers.” The environment was a square room, with a smaller room in one corner and two openings. Prior to any relevant bot behavior being observed, there were literally millions of games, with the failure of the bots to do anything “intelligent” being attributed to their “random” behavior. But that’s the whole point. Absent anything resembling real intelligence, the only way these bots could “learn” was by playing the game, initiating some random move, arriving at an undesirable outcome, and then removing the losing choice from the repertoire, just as HER does. That’s why it took millions of iterations for the OpenAI application to happen across a workable strategy that any five-year-old could have ascertained within the first few instances of the game.

Don’t misunderstand – much insight can be gleaned from setting up a digital environment and having a program execute random decisions across multiple iterations in pursuit of stipulated goals. Almost invariably some strategy will succeed that the programmers/designers never considered viable. But we are talking about trial and error here. Digital errors may have no consequences, whereas errors from this approach in the real world often have significant ones. Also consider that Hexapawn has only twenty-four possible scenarios, whereas a PM environment’s possible situations are, well, endless.

I want to close by reiterating … wait. I need to find the matchbox with “closing paragraph strategies” from my GTIM Educational Robot, and pull out a bead to see what to write next.


[i] Gardner, Martin, “How To Play Hexapawn,” Reader’s Digest Treasury for Young Readers, pp. 176-177.

[ii] Retrieved from on August 26, 2023, 20:12 MDT.

Posted on: August 30, 2023 08:48 PM | Permalink | Comments (1)

On Why PMs Get Stymied (Part II)

Last week I discussed some of the tactics that the Anti-Project Management crowd employs, and this week I’d like to review their strategies and motivations. Why do they act that way? I mean, let our friends the accountants announce a new module in the general ledger, say, payroll, and nobody even notices, even if it means the timecard entry process has just become harder than changing the password to your wireless router. But let someone introduce an Earned Value Management System (EVMS), and certain people start to scream like scalded howler monkeys. What gives?

Let me start with a caveat – if you are doing PM wrong, either in the characteristics of the system or its implementation strategy, then your opponents are on the right side of this conflict, motives notwithstanding. And while there are, no doubt, many reasons that would come into play in the execution of this thwarting-of-PM business, I believe there are three main drivers behind the PMO’s opponents. Here they are, in reverse order of severity.

  3. A bad experience stemming from a previous forced PM implementation. Since the late 1960s, various United States Government entities that rely on multiple contractors to supply goods and services have imposed several levels of requirements for the implementation of PM techniques, specifically Earned Value- and Critical Path-based systems. In their most vigorous phases, a team of auditors would evaluate the target contractor and issue findings on deviations from the guidance, which took time and energy to correct. Forced PM implementations tend to be difficult and expensive, and those involved often come away with a negative impression of PM in general.
  2. It might be a passing fad. Do an internet search on “management fads.” What sets many of them apart is the amount of time, energy, and budget that organizations spent pursuing the capabilities these fads advertised as highly desirable, if not out-and-out necessary for survival, only to have them later revealed as marginally beneficial at best. Of course, PM isn’t a fad, but the existence of business model modifications that are tends to taint any management science initiative that hasn’t been widely accepted for over a century. And now, for the Number One reason your organization is trying to stymie your advancement of the PM capability, I give you…
  1. Business Schools – Even Elite Ones – Have Been Teaching Slanted Theory For Decades. I have little empathy for accountants who complain that they have to put in excessive hours at work. Why? Because their mantra, which has made its way into every aspect of modern management theory, that the point of all management is to “maximize shareholder wealth,” has brought about this and other business model pathologies. After all, if the purpose of all management is to maximize shareholder wealth, shouldn’t you press for ever more output in the form of unpaid overtime from those very assets, as long as their variable costs do not go up? And it’s not just the accountants – every single salaried member of the staff is in the same situation. Quick organizational health indicator: if the staff is chronically overworked, with significant expectations of unpaid overtime, it means that the organization has bought into the “maximize shareholder blah blah blah” concept, and your time there is likely to be stressful.

This instance of management science reductionism stands in stark contrast to the raison d’être of PM, with its focus on delivering the customers’ scope within the customers’ cost and schedule parameters. To engage in a bit of hyperbole, we don’t care whether the assets are over- or under-worked, just that the scope is being accomplished on-time and on-budget. And, if it is, then we don’t worry if Project Team members aren’t putting in any overtime at all, or even (gasp!) taking the occasional afternoon off. As long as the client is happy, attempting to wring more (potentially excessive, in addition to unpaid) effort out of the Project Team is often counter-productive, in that it can lead to a decline in morale. Of course adherents of conventional management theory are put off by the rise of Project Management’s codex. The two are epistemologically inconsistent, and no narrative that insists on the supremacy of Asset Management over PM can remain intact once a successful PMO implementation demonstrates its worth. Such a PMO would establish that those adherents, and their nominal approach to creating and maintaining the organization’s business model, are misguided.

Yeah, I know it’s dangerous to reverse engineer motives from observed behaviors, but I’m fairly confident that, if those opposed to implementing even the most basic of PM techniques were to be hit by one of those random flying sodium pentothal-tipped darts, and asked why they are opposed, they would admit to one of these three, or a derivative. Or, I suppose, they could actually be right in their opposition.


Posted on: August 17, 2023 12:32 AM | Permalink | Comments (0)

"If you're going to do something tonight that you'll be sorry for tomorrow morning, sleep late."

- Henny Youngman