Game Theory in Management

Modelling Business Decisions and their Consequences


Yes, Absolutely, Burn Henrietta

There’s a classic scene in the 1956 movie Around the World in 80 Days, where English aristocrat Phileas Fogg (played perfectly by David Niven) is on the last leg of his epic journey, and finds himself in the mid-Atlantic on board the tramp steamer Henrietta. Due to storms and adverse winds, the Henrietta runs out of fuel well short of England, and, if Fogg can’t make it back to the starting spot of his journey on time, he will lose his wager, and can expect a future of poverty. Realizing his situation, he makes a bold decision: he purchases the steamer on the spot, and then orders all flammable materials on board that do not directly contribute to the ship’s navigation capabilities to be broken up and fed to the furnaces that drive the steam system.

As various wood planks and assorted burnable pieces of the Henrietta arrive in the ship’s engine room, the ship’s first officer, played by Andy Devine, laments their loss. But, when the carved wooden figure from the bowsprit arrives, the first officer cries aloud “No, not Henrietta!” (presumably the nickname of the figure itself).

Alas, Henrietta the Figurehead herself is fed to the furnaces, and the ship makes it to port soon enough for Fogg to complete his journey on time, and collect on his bet.

What does this have to do with Project Management? Well, I’m glad you asked.

There have been multiple efforts of late, both in private and public sector PM circles, to develop something called “Earned Value light,” or an Earned Value Management System that is not overly encumbered by superfluous techniques or processes, and delivers cost and schedule performance information with minimal cost and fuss. I think the existence of such initiatives is ipso facto evidence that those I have previously referred to as “processors” have gained ascendancy in the project management world, and have made something as basic and straightforward as a working EVMS into an extremely difficult endeavor. It may be too late to turn back their insidious encroachments into legitimate management science, but I mean to make the attempt by asking: what, exactly, is an EVMS for? What’s its purpose? And, if that question can be clearly and cleanly answered, the next one is: which current practices attached to the set-up and maintenance of Earned Value systems are superfluous, and should be discouraged, if not out-and-out abandoned?

Here's my take: Earned Value Management Systems exist for one purpose, and one purpose only: to put into the hands of the project’s decision-makers the cost and schedule performance information they need to make, well, informed decisions.

That’s it.

Michael, Do You Realize What You’re Implying?

Yes, I’m crystal clear on what that definition implies for the answer to the follow-on question. For, if my definition is right, it follows that these practices have nothing at all to do with Earned Value:

  1. The creation of a time-phased Estimate to Complete,
  2. Comparing actual costs at the line-item level to their counterparts in the cost baseline’s Basis of Estimate (BoE),
  3. Performing a “bottoms-up” Estimate at Completion,
  4. Mandating that an extremely accurate and recent Master Resource Dictionary be used in time-phasing the budget,
  5. …among many other practices and techniques that have been larded onto essential EVM, with the blessings (if not insistence) of so-called “experts.”

However, some prominent procedure or guidance-generating organizations have pushed these add-ons relentlessly, asserting outright that to avoid them is to fail to “do” EV correctly, which I hold to be so much rubbish.

It essentially boils down to this: put yourself in Phileas Fogg’s shoes, except in this case, rather than come up with the criterion needed to get the S.S. Henrietta to her destination (reminder: burnable, with no connection to the ship’s ability to navigate), you need to come up with the criterion for producing usable cost/schedule performance information with an absolute minimum of effort, time, and cost. Which “must-have” aspects of EV would you abandon? Which would you keep? From my point of view, get rid of anything – anything – that does not directly support the collection of a basic time-phased budget, actual costs at the reporting level of the Work Breakdown Structure, and a reliable estimate of the tasks’ percent complete at the end of the reporting period, again at the reporting level of the WBS. For example, from the list above,

  1. …serves to help improve resource allocation (theoretically – I’m not convinced, but that’s what its adherents claim),
  2. …is supposed to improve the quality of the original estimates,
  3. …comes from sheer ignorance of the accuracy of calculated EACs and the inaccuracy of re-estimated ones, and
  4. …is also supposed to improve the quality of original estimates.

None of which, obviously, have anything to do with delivering cost and schedule performance information to PMs and other decision-makers. Everything else – everything – is superfluous, and should be abandoned.

Even Henrietta.
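For the concrete-minded, here is a sketch in Python of what survives the furnace. The WBS element names and dollar figures are invented purely for illustration; the point is that only three inputs per element are needed: a time-phased budget, actual costs, and percent complete.

```python
# A minimal Earned Value sketch. Three inputs per WBS reporting element,
# nothing more. All element names and dollar figures are hypothetical.
wbs = {
    # element: (budget_at_completion, planned_value_to_date,
    #           actual_costs_to_date, percent_complete)
    "1.1 Hull":        (60_000,  50_000, 48_000, 0.80),
    "1.2 Engine room": (100_000, 80_000, 95_000, 0.90),
    "1.3 Rigging":     (40_000,  30_000, 12_000, 0.25),
}

for element, (bac, pv, ac, pct) in wbs.items():
    ev = pct * bac   # Earned Value (BCWP): budgeted cost of work performed
    cv = ev - ac     # Cost Variance: positive means under cost
    sv = ev - pv     # Schedule Variance: positive means ahead of plan
    print(f"{element}: EV = {ev:,.0f}, CV = {cv:+,.0f}, SV = {sv:+,.0f}")
```

Everything else (bottoms-up EACs, BoE reconciliations, resource dictionaries) adds nothing to these three data streams.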


Posted on: July 24, 2017 10:12 PM | Permalink | Comments (5)

Hey! Who Put This Venn Diagram In My Pudding?

Next to the overuse of the word “like,” the most irksome trend in modern English usage has to be, in my opinion, the phrase “the proof is in the pudding.” It may be a case of confirmation bias on my part, but this idiotic phrase appears to be everywhere. The original version, which dates back at least to 1605, has the virtue of actually making sense: the proof of the pudding is in the eating, meaning that the cooks can argue about the proper way of making a pudding all they want – the true test will come when the dish is actually consumed.

So, how did the highly irksome version of this saying come into existence? I could find no authoritative analysis, though the consensus appears to be that the original was shortened to “the proof of the pudding,” which was then mis-heard or mis-applied into its grating iteration. I mean, seriously, would those who use this phrase simply stop and think about it for just a moment? When I took a college-level course in logic, the only “proofs” the professor was interested in were depictions of the premises and conclusions of an argument in a Venn Diagram. Okay, so if the “proof is in the pudding,” did somebody put a Venn Diagram into the pudding prior to its being set? And, if so, wouldn’t it be hard to read afterwards? In my book, the use of the idiotic version of this phrase automatically renders whatever the writer has to say highly suspect, if not entirely worthless.

Virtue Signaling? No, this is Stupid-Signaling

Think I’m ranting (again)? Consider a real-life example from when I was instructing new, usually young Project Controllers in the art and science of cost and schedule performance systems. Just prior to the module on preparing the Variance Analysis Report (VAR), I would stop the presentation, walk over to the classroom’s white board, and write the word “his.” I would then call on one of the students, and ask them to read what I had written. After receiving the answer, I would ask, “Is this misspelled?”


I would then write the word “hers,” and call on another student with the same two questions. I would continue in this vein with “ours” and “theirs,” each time receiving the same answer to the question “Is this misspelled?”

I would then write “Its,” and the classroom would usually laugh a little, realizing where I was going with this.

“You laugh,” I would respond, “but I will guarantee that, if I were to perform a quality check on everybody’s VARs one year from now, I will find at least one instance of ‘it’s’ being used instead of the real impersonal possessive pronoun. For the record, ‘it’s’ is a contraction of ‘it is,’ which is something most people are supposed to learn by fifth grade, at the latest. It may seem trivial, but I will also guarantee that, if your customer sees this error in your VARs, they won’t be impressed with your analytical or reporting ability. In fact, they’re likely to think whatever you have written to be highly suspect, so, please don’t make this error.”

Back in the Project Management world…

…we have to deal with quite a few very basic errors in assumptions about management science that are similarly entrenched in the common business consciousness, and just as infuriatingly wrong. Take the very basic definition of the term “cost variance.” According to many websites (I’ll pick on[i]), a cost variance is “the differences between the planned budget and what was actually spent.”

Ummm, no, it’s not (see how easy it is to use “it’s” correctly?).

A Cost Variance, as any certified PM readily knows, is the difference between the Earned Value and the Actual Costs. A Schedule Variance is the difference between the Earned Value and the budget, or “planned budget.” In reality, comparing the budget to the actuals, as commonplace an “analysis” as that is, is virtually worthless. Consider the following payoff grid:



|                         | Real Negative Cost Variance | Real Positive Cost Variance |
| ----------------------- | --------------------------- | --------------------------- |
| Positive Spend Variance | You’re doing poorly, but think you’re doing great (does it get any more dangerous from a management point of view?) | Coincidentally accurate |
| Negative Spend Variance | Coincidentally accurate | You’re doing great, but think you’re doing poorly |


In half of the possible outcomes, comparing budgets to actual costs returns the right answer, but only coincidentally so. In the other half, the comparison returns bad information, and one of those outcomes lends you a false sense of security, or of project well-being, when the opposite perception is called for. Even if you don’t become lax in your decision-making, real problems go unaddressed, which usually leads to them becoming large (or even fatal) ones. You’re almost literally better off flipping a coin, and making decisions based on that outcome.
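To put hypothetical numbers to the grid (figures are mine, purely for illustration): suppose a task’s time-phased budget to date is $100,000, its actual costs are $70,000, and its Earned Value is only $50,000. The budget-versus-actuals comparison shows a comfortable $30,000 underrun; the real Cost Variance says the task is $20,000 in the hole.

```python
# Hypothetical figures, invented for illustration.
planned_value = 100_000   # time-phased budget to date (BCWS)
actual_cost   = 70_000    # actual costs to date (ACWP)
earned_value  = 50_000    # budget value of the work actually performed (BCWP)

# The commonplace -- and nearly worthless -- comparison: budget vs. actuals.
spend_variance = planned_value - actual_cost      # positive: "looks great"

# The real Earned Value measures.
cost_variance     = earned_value - actual_cost    # negative: over cost
schedule_variance = earned_value - planned_value  # negative: behind plan

print(f"Spend variance:    {spend_variance:+,}")
print(f"Cost variance:     {cost_variance:+,}")
print(f"Schedule variance: {schedule_variance:+,}")
```

The spend variance smiles while both real variances frown: you’re doing poorly, but think you’re doing great.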

And that, perhaps, is the greatest benefit of a certification. It transmits to potential employers and customers that its holder knows the valid versions of common management analysis techniques, and will eschew the more popular, but irksome versions.

And knows how to spell the impersonal possessive pronoun.

And never says “the proof is in …”, well, you know.


[i] Retrieved from on 16 July 2017, at 11:29 MDT.

Posted on: July 17, 2017 10:31 PM | Permalink | Comments (1)

The Electric Kool-Aid Certification Test

I’ve worked with an organization that had the habit of acting like the next manager being brought in was some kind of Tom Wolfe-esque microeconomic rock star, whose name should have been instantly recognizable to the hoi polloi. After being built up relentlessly, this person would finally arrive to take up their new position, and…

Well, nothing.

Nothing spectacular, no new ways of approaching the problems the project team faced, no novel implementation strategies to known workable tactics, nothing. They wouldn’t even bother figuring out how the current systems operated. Oh, they would crank up the energy level for a time, and get into meetings where they would strongly emphasize trite and clichéd axioms. But as far as actually advancing a capability, they were able to deliver only a whole bunch of nothing.

Not. A. Thing.

And Then, After Fleeing to Mexico…

Eventually these “superstar” managers would be re-assigned, and another one breathlessly announced, and the cycle would begin again.

After witnessing this cycle repeat a few times, I began to notice something that all these new managers had in common: none of them were certified.

A few weeks back I blogged about the difficulties inherent in working for organizations where loyalty, not talent, was considered the key attribute for the staff to demonstrate. The pathologies inherent in such an organizational approach are legion, but one of the key indicators that an organization is so situated is that it does not place a premium on professional certifications. Where apathy about attaining or holding a relevant professional certification is combined with laxness about educational requirements, the odds that the organization is even remotely based on a meritocracy are low, indeed.

Which kind of turns the whole certification impetus upside-down. I mean, when I was pursuing my certs, it was in an effort to signal to current and future employers that my level of expertise was more advanced than that of my non-certified competitors and colleagues. And that’s almost exclusively how the attaining-your-certification industry advertises its worth: by promising to make the successful cert candidate, well, more successful in the long run. But what I have observed is that it’s the winning organizations that attract certified project managers, project controllers, and cost estimators in a way that their more poorly-functioning competitors do not.

Who’s being tested here, anyway?

To be precise, I am not saying that those organizations that make it a point to send their employees to certification training and reimburse them for their certifications are going to automatically begin to out-perform their competitors. What I am saying is that an organization that is at least somewhat merit-based in its hiring and promotion practices will tend to attract more highly educated and certified employees, and that those dysfunctional organizations, where loyalty is the coin of the advancement realm, well, won’t – which makes certified staff a fairly reliable sign. Indeed, one of the best ways to prevent a given hiring manager from filling a key position (or any position, really) with an underqualified – or even inept – candidate, who just happens to be an old friend, is to strictly enforce standards for even being interviewed for the position. In the project management world, educational requirements (past the attainment of a Bachelor’s or Master’s Degree) can be tricky – they don’t call PM “the accidental profession” for nothin’. Successful PMs come from extremely varied areas, including computer science, engineering, the hard sciences, or even business, among others. But a PMI® certification – that’s something different. That communicates that, regardless of background, its holder knows a thing or two about managing projects. Because of this, an unintended consequence of including it as a prerequisite is that it can function as a brake on cronyism, at least in its most blatant manifestations.

So, all this time, when you saw a requirement for a PMP® just to apply for a project management job, you thought they were testing you. In reality, the whole time they were advertising their own suitability as an organization worthy of your talents.

Posted on: July 10, 2017 10:57 PM | Permalink | Comments (3)

The Problem With Oval Race Tracks

I was once in an independent cost evaluation (ICE) meeting when one of the customer representatives, a young woman who presented with quite the attitude, challenged the project controls coordinator about an element in one of the Earned Value reports.

“Why doesn’t the cumulative Earned Value amount equal the actual costs?” she demanded.

“Because that’s not what Earned Value is,” came the response.

The rep exploded. “Do you know PMI®?!” she stormed. “I’m a PMP®!”

As if that was supposed to trump the rules of basic cost performance management.

Which simply goes to show one of the dangers inherent in certification, the theme for July. Although obtaining a certification such as the PMP® can be quite the challenge, what with the assembling and presenting of education and professional credentials, as well as passing the test, it’s not an end-point. It’s really more like the starting line which, like an oval race track, has the starting and finishing lines at the same place.

Don’t believe me? Consider that medical doctors were first certified in the United States in the early nineteenth century. At that time it was considered perfectly acceptable for certified doctors to do the following:

  • Prescribe narcotics for very young children as a “remedy” for their acting like, well, very young children.
  • Cut off part of the tongue for stutterers.[i]
  • Inject paraffin wax in order to smooth out wrinkles[ii] (will future M.D.s view our use of Botox with similar abhorrence?).
  • Practice bloodletting as a viable treatment for a wide variety of ailments.

I could go on (though not without getting kind of gross), but you see the point. Certification, in and of itself, is no guarantor of advanced capability or expertise. It only means that the certified person is up to speed on what the current state of the technology happens to be at that time.

So, in the next century, which project management practices will lead those with PMP® numbers in the tens of millions to look back and wonder “what were they thinking?” Well, what do the now-considered-absurd medical practices have in common? They weren’t based on the scientific method (with the exception of giving narcotics to young children – I have no doubt those worked as intended, based on measurable and observable data; they just didn’t properly account for the long-term or side effects). Not only were the absurd practices not based on the scientific method, they were apparently based on a few experts’ opinions, or sheer speculation.

Now consider: which PM practices currently in vogue are not based on observable, quantifiable data and analysis, but are instead predicated on group speculation, or opinion? A quick distinction is in order: feedback data is factual in nature, made up of observations of those things that have already occurred. Conversely, feed-forward data isn’t really data at all – it’s someone’s idea about what should be expected in the future, usually based on that person’s experience. As my regular readers know, usable management information must have three characteristics:

  • It must be accurate,
  • It must be timely, and
  • It must be relevant.

Leaving aside the relevance discussion for the moment, feedback-type data is accurate (if it’s been collected properly), but often suffers from not being timely enough, due to how long it takes to collect and present. On the other hand, feed-forward data is timely, in that it looks to the future. It is, however, notoriously inaccurate, since an accurate look into the future is considered so rare as to be usually attributed to divine inspiration, if not intervention. A whole bunch of highly subjective assumptions must hold true for any feed-forward data to be of use.

So, this being the case, can we adopt the rubric that scientific-method-derived theories, based on objective data, provide the basis for the ideas we embrace, while we reject the subjective, speculative ones?

While that works for me, I doubt it will work for the PM world in general. For if we use that rubric as our litmus test, we’ve just obliterated the majority of risk management, communications management, some portions of human resources, and even a little bit of quality management. Each of these disciplines has its advocates, who can be expected to push back on any attempt to diminish or eliminate their favorite notions.

Which brings us back to our racing oval. Congratulations on getting certified! It’s a wonderful thing. You are now set to more critically examine and engage the current thinking in the project management sciences. Oh, by the way, that finish line you just crossed? It’s also a starting line.

So let’s get started.





[i] Retrieved from on July 3, 2017, at 8:30 a.m. MDT.

[ii] Retrieved from on July 3, 2017, 8:32 a.m. MDT.

Posted on: July 03, 2017 07:59 PM | Permalink | Comments (2)

Some Lessons Are Never Learned

I’ve worked with some truly inspirational and brilliant executives over my career, and have spent considerable time thinking about what that highly disparate group of people had in common. As different as they all were in age, background, education, culture, and personality, they did have the following three attributes in common (which, taken together, would become Hatfield’s Rule of Management #12):

  • They genuinely cared about their people, either on their project teams or in their line organizations (if you are reading this to glean tips on how to advance into executive management, and you don’t really care about your people, then do us all a favor and enter a profession that does not require managerial leadership to advance. You can only fake this attribute for so long).
  • They consistently figured out the optimal technical approach to the problems facing the organization, or else readily recognized when someone else had done so. This attribute is a tricky mix of expertise and willingness to accept input from others, as evidenced by the payoff matrix below:


|                            | Wrong Approach | Best Approach |
| -------------------------- | -------------- | ------------- |
| People Tell You It’s Right | You’ve surrounded yourself with sycophants | It’s good to be recognized |
| People Tell You It’s Wrong | Your people know their stuff, and should be heard | These others are incompetents or Jungle Fighters; you best be sure of the analysis that leads you to this approach |


Note that the only scenario in which the so-called communications management people, with their unending urging of “engaging all of the stakeholders,” offer an appropriate response is the bottom left-hand payoff. In one other scenario such urgings are superfluous; in the remaining two, they’re flat-out wrong.

  • Once they have arrived at the optimal technical approach to either pursuing the project’s scope or resolving a technical issue, they execute their strategies with passion. I like to use the example of American World War II General George S. Patton, who (I’m pretty sure), had he been parachuted into Northern Europe by himself in 1943, would not have waited around for the rest of the 3rd Army to arrive before he started attacking Nazis.

Of course, one does not have to be a good manager to be successful, and even really good managers will experience failures at some point. But when it comes to maximizing the odds of the successful outcome of a given endeavor, the people who demonstrate these three attributes can be expected to consistently out-perform those who do not. And it is those who do not whom I wish to discuss today.

I have noticed a consistent trend among those managers who are either weak or void in the first two attributes I listed: caring for their people, and arriving at the optimal technical approach. Recall Hatfield’s Rule of Management #11: the 80th-percentile best managers who have access to only 20% of the information needed to inform a given decision will be consistently out-performed by the 20% worst managers who have access to 80% of the information so needed. I believe that most of the managers who have a hard time discerning the optimal (or even sufficient) technical approach to a given problem know this, deep down inside, and will present as constantly hungry for information. The competent manager knows the precise nature of the information streams needed for project success, and how to set them up and keep them running. The manager to avoid will flail in a sea of data, some necessary, some irrelevant, and not know which is which.

This brings me to a list of the symptoms displayed by the managers who don’t know the optimal technical approach to their jobs, and really don’t care about their people. One sure-fire tell that you are dealing with such a manager is that they will insist on setting up and maintaining a database of the things they need to pay attention to. Such systems have a variety of names, such as “Action Item Tracking” or “Issues Management” systems, among others, but their purpose is the same: to inform the sub-optimal manager of the things that need their attention. Don’t get me wrong – institutional issues management systems can and do provide a vital information stream to execs several levels removed from the projects’ trenches. No, the systems I’m talking about always seem to be predicated on a model that includes a central repository of data, surrounded by input/output nodes. This model is, of course, an invalid management information system structure, since it has absolutely no ability to differentiate between subjective and objective data, nor between feedback-based and feed-forward information. It also has the drawback of displacing legitimate systems, such as those predicated on Earned Value or Critical Path methodologies.

Add to the existence of these invalid information systems a tendency to call multiple meetings designed simply to check on what everybody is doing (or, more insidiously, to have some confidant spy on what everyone is up to), and there can really be no doubt: the person you are dealing with can’t benefit from any legit lessons-learned analysis, since they have a poor concept of which information streams are relevant to the successful pursuit of the technical agenda at hand.

So it all comes down to this: if a given manager is incapable of discerning relevant from irrelevant information in the here-and-now, why should they suddenly be able to discern it from lessons-learned reports?

Posted on: June 26, 2017 10:56 PM | Permalink | Comments (4)
