Game Theory in Management

Modelling Business Decisions and their Consequences


Hatfield’s Project Management Maturity Model, Part III

Does It Work?

In discussing the development of management maturity models, I would like to steer the conversation towards how such things should be approached in the first place. As I discussed in Part I of this series, the traditional, academic approach to this sort of research is to launch a project, collect members who see themselves as subject matter experts, get them to agree to the least objectionable set of premises possible, assemble those premises into a document, have the document peer-reviewed, and then (finally) introduce and advance the new theory. But does that approach really work?

Meanwhile, Back In The Real World…

I think that two of the most significant advances in project management in the last half-century came from the 1989 book Managing the Software Process by Watts Humphrey[i], and the 1982 book In Search of Excellence, by Tom Peters and Robert Waterman. While their conclusions are now pretty standard fare in business schools across the country, what I think is remarkable about these two books is the way their authors conducted their research. They did not follow the traditional model, nor its close cousin of developing a hypothesis and then seeking data to support it. Instead, they (and their researchers) went out among functioning organizations that were succeeding, and noted what those organizations had in common. After a sufficiently broad data-gathering effort, they homed in on which shared characteristics of the winners were relevant, and drew their conclusions from this data set. Note how different this approach is from the typical one: Humphrey, Peters, and Waterman sought out what worked, and used their advanced grasp of causality to capture the source; conversely, the traditional approach is to collect ideas, and then get a sufficient number of so-called experts to agree to base their assertions on the collection.

A Return To The Unreal World

Managing the Software Process would lead to the now-famous Capability Maturity Model, developed by Carnegie Mellon University’s Software Engineering Institute (SEI), which I believe is a valid model. However, in 2002 a “successor” to the original CMM was released, the “Capability Maturity Model Integration,” or CMMI®. This effort was intended to “improve the usability of maturity models by integrating many different models into one framework.”[ii] Naturally, the authors appear to have returned to the traditional approach, which rendered, in my opinion, the CMMI® inferior as a management science theory to its predecessor, the original CMM. It’s actually kind of ironic, this whole effort to integrate many different models into one framework, seeing as how virtually all of those many different models were very similar to (if not out-and-out derivative of) the SEI version in the first place.

And Now, For the Actual HCMM!

Actually, ladies and gentlemen, it doesn’t exist. Devising one overarching structure that lends insight into how to influence a limitless number of project teams to perform better is pretty much beyond me. In many circumstances I like the original SEI CMM, but even it has its limitations, some of them quite stark. I believe that, if I were to be put in charge of developing a capability maturity model (by a paying customer), I would begin with these steps:

  • Narrow the scope. Software project teams use a very different PM tool set than construction teams do, and aerospace projects would hardly be recognizable to PMs from the pharmaceutical industry.
  • Remove any and all self-identified experts from the team, and retain only bright, well-educated researchers.
  • Once we know the specific industry, find out as much about their projects, both successful and unsuccessful, as possible from available records, particularly performance and change control information.
  • Categorize the projects’ outcome as “Exceeds,” “Meets,” or “Fails to Meet” original project parameters.
  • Only after these steps are complete would I attempt to interview members of the projects’ teams, the lower in rank the better. Major questions:
    • What went right?
    • Why did that succeed?
    • What went wrong?
    • Why did it go wrong?

Then I would interview the projects’ principals and consultants (if any), to see what the official story was, and contrast that with what the rank-and-file had to say.

From this data set I would begin to identify common factors among the winners and losers, and propose a structure to be published and peer-reviewed.
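The winnowing step just described can be sketched in code. A minimal illustration, in which the project records, practice names, and cutoff thresholds are all hypothetical stand-ins for whatever the researchers would actually collect:

```python
from collections import Counter

# Hypothetical project records: outcome category plus the practices each team used.
projects = [
    {"outcome": "Exceeds",       "practices": {"baseline_scope", "change_control", "ev_reporting"}},
    {"outcome": "Exceeds",       "practices": {"baseline_scope", "change_control"}},
    {"outcome": "Meets",         "practices": {"baseline_scope", "risk_register"}},
    {"outcome": "Fails to Meet", "practices": {"risk_register"}},
    {"outcome": "Fails to Meet", "practices": {"ev_reporting"}},
]

def shared_factors(projects, outcome):
    """Fraction of projects with this outcome that exhibit each practice."""
    group = [p for p in projects if p["outcome"] == outcome]
    counts = Counter(practice for p in group for practice in p["practices"])
    return {practice: n / len(group) for practice, n in counts.items()}

winners = shared_factors(projects, "Exceeds")
losers  = shared_factors(projects, "Fails to Meet")

# Candidate success factors: common among the winners, rare among the losers.
candidates = {p for p, rate in winners.items() if rate >= 0.8 and losers.get(p, 0) <= 0.2}
print(sorted(candidates))  # -> ['baseline_scope', 'change_control']
```

The point of the sketch is the ordering of the steps: the outcome categories are assigned first, and only then are the shared characteristics tallied, so the data drives the structure rather than the other way around.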

Did I mention the avoidance of any self-identified experts?

Lagniappe

My third book was released last week, and is available here. It’s entitled The Unavoidable Hierarchy; Who’s Who In Your Organization, And Why, and it’s about how people tend to form, and move up and down within, social structures, including those in the corporate world. If you are a manager in charge of a team, it might be worth a look.


[i] Capability Maturity Model. (2016, September 19). In Wikipedia, The Free Encyclopedia. Retrieved 19:28, September 24, 2016, from https://en.wikipedia.org/w/index.php?title=Capability_Maturity_Model&oldid=740240845

[ii] Capability Maturity Model Integration. (2016, August 22). In Wikipedia, The Free Encyclopedia. Retrieved 20:02, September 24, 2016, from https://en.wikipedia.org/w/index.php?title=Capability_Maturity_Model_Integration&oldid=735671761

 

Posted on: September 26, 2016 08:27 PM

Hatfield’s Project Management Maturity Model, Part II

I Wonder Where Ruth Is

In furthering my particular PM3, the casual reader may have noticed something peculiar about my approach: I’m ruthless about abandoning tradition when it comes to refining the model. In last week’s blog I made the case for jettisoning risk management, human resources, procurement, communications, and quality from consideration in my model (except for specific circumstances), leaving only scope, cost, and schedule from the original PMBOK Guide’s® table of contents. It just so happens that these three management areas, taken together, are commonly known as the “triple constraint,” the basis for all other project management information systems.

Maybe She’s Out Trying To Find Answers

I’ve often stated in this blog that the 80th-percentile best managers who have access to only 20% of the information they need to arrive at a given answer will be consistently out-performed by the 20th-percentile worst managers who consume 80% of the information so needed. So, when discussing structures or models that represent how organizations or project teams move from poor to superior performance, my particular application of the Pareto Principle becomes relevant. Management information rarely comes easy (or cheap), particularly since it must satisfy three requirements:

  • It must be accurate,
  • It must be timely, and, most of all,
  • It must be relevant

…which pretty much leaves off those areas of PM “expertise” that I’ve abandoned as part of my project management maturity model. The number of “stakeholders” engaged is irrelevant. For the project manager, the total compensation package for employees is irrelevant (a huge deal for asset managers, sure; but, for the success of a given project, not so much). The entries in a typical risk register are both irrelevant and inaccurate. The information streams created and maintained for the production of these kinds of data are, then, a waste of time, energy, and expertise.
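A toy model makes the 80/20 arithmetic above concrete. The weighting below is entirely an assumption chosen for illustration (the claim is precisely that information access dominates raw managerial skill), not a measured coefficient:

```python
def decision_quality(skill_pct, info_share, info_weight=0.8):
    """Toy score: a weighted blend in which information access dominates innate skill.

    skill_pct: manager's skill percentile (0-100); info_share: fraction (0-1) of the
    needed information actually consumed. The 0.8 weight is an assumed parameter.
    """
    return info_weight * info_share + (1 - info_weight) * (skill_pct / 100)

strong_mgr_little_info = decision_quality(skill_pct=80, info_share=0.2)  # 0.32
weak_mgr_much_info     = decision_quality(skill_pct=20, info_share=0.8)  # 0.68

assert weak_mgr_much_info > strong_mgr_little_info
```

Under any weighting where information matters more than innate ability, the poorly-informed star loses to the well-informed journeyman, which is the whole argument for investing in accurate, timely, relevant information streams.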

Conversely, the precise nature of the scope being pursued, and the cost and schedule performance of the project team cannot be undervalued. These items are hugely relevant, and the task before the project teams’ analysts is to deliver this information in an accurate, timely manner.

And Yet, Ruth Is A Person

As tempting as it may be to allow the Hatfield Project Management Maturity Model to rest on the comparative epistemological value of management information systems, the simple fact remains that project teams are made up of people, informed and otherwise. It’s a fact that some people on the project team will be more motivated than others, and, in sufficiently large teams, it’s a near-certainty that at least one member of the team will be working against the overarching goals of the project, particularly if doing so means their own personal advancement or advantage. Even the seminal work in capability maturity models, the CMM® from Carnegie Mellon University’s Software Engineering Institute, was based on the observed behaviors of the subject project teams. For SEI Level 1, the members of the team were not cooperating. For SEI Level 2, they were cooperating, at a basic level. At SEI Level 3, team cooperation no longer depended on a few individuals influencing the rest to cooperate, and so on. My take is that things like universally-used forms or the documents associated with common training are artifacts of this cooperation rather than causes or drivers.

So, if project team-wide cooperation is the name of the game, how is that attained, or expedited? Game theory can provide some insights here, which will be covered in Part III.

Lagniappe

My third book was released last week, and is available here. It’s entitled The Unavoidable Hierarchy; Who’s Who In Your Organization, And Why, and it’s about how people tend to form, and move up and down within, social structures, including those in the corporate world. If you are a manager in charge of a team, it might be worth a look.

 

Posted on: September 19, 2016 09:27 PM

The Hatfield Project Management Maturity Model, Part I

Why Is This One Special?

When it comes to generating and then perpetuating new hypotheses in the management science realm, I’m sure it will shock my readers to learn that I take a rather opposite tack from the conventional approach. Take management maturity models (please!).

The conventional approach is to issue a call for papers and/or volunteers to come together in some kind of forum, either real or virtual, and thrash around the ideas that are largely considered to be valid on the particular topic. A certain consensus forms around those ideas that are least obnoxious to the simple majority of team members, and those ideas are then arranged into some kind of structure. These structured ideas are then documented, revised, refereed, and published under the auspices of the sponsoring organization. Unless the documents are met with widespread scorn, they will probably become the basis for further analysis, and perhaps even a certification. This last is the commercialization phase, where the sponsor organization finally receives some form of return for all of their trouble.

Why Breaking With The Standard Scholastic Model Is A Good Idea

Here’s my heartburn with this process: if it advances management science, it does so only coincidentally. It mostly advances policy. Consensus isn’t science, and science does not depend on consensus, period. Science only needs one researcher who happens to be right, and can reproduce results in an experimental setting. Of course I’m aware that, in the Management Science “laboratory,” i.e. the business environment, it’s impossible to isolate and test specific causal factors, which makes the pure science aspect of this whole thing extremely difficult, but stay with me. By the time the idea generators and their reviewers are in consensus-garnering mode, it’s too late. Management science may or may not be furthered – but “optimal” management policy most certainly is. So, how would I do it better?

First off, no seeking of consensus, especially not from hundreds or even thousands of project management experts. PM types are notorious: put a whole bunch of them in a room, and they will be unable to agree on the color of an orange. I understand that this approach automatically excludes any results from being considered for ANSI Standard inclusion or approval, which is usually considered to be THE basis for legitimacy, but that’s okay. If I’m wrong, my hypotheses shouldn’t be considered legit; and, if I’m right, they eventually will be. Instead, I would identify just one person to propose a structure, conduct the research, make the arguments that either establish or overturn the structure, and publish it. If just one person is a bridge too far (too short?), then one person should head a team of researchers, not opinionated experts weighed down with the baggage of their decades-old experiences.

Reality, Or Bust

Next, I would blast to smithereens any PM concept that couldn’t be defended using hard data. Back when I received my PMP®, the PMBOK Guide® was divided into eight sections. The Hatfield Management Maturity Model (HM3) is based on those categories, to the extent each deserves consideration:

  • Scope Management is obviously central to PM, and the only hard evidence needed here is the fact that a Google search for “how much is spent in construction project claims?” returns over 136,000,000 hits. Clearly defining the scope up front is essential to all other aspects of PM. Verdict: Valid.
  • Cost Management is also essential to the whole PM process. Indeed, the organizational pathology of indicating to a customer that “it will take as long as it takes, and cost as much as it costs” was one of the key drivers in the widespread acceptance of project management as a discipline. Verdict: Valid.
  • Schedule Management – see the second bullet. Verdict: Valid.
  • Risk Management – I have yet to see an objective study that indicated that the risk management process was a central information stream in successful project completion. As I have often pointed out in this blog, the future cannot be quantified, even with Gaussian curves. Verdict: Invalid.
  • Quality – here, I have a simple question. Is your product or service getting criticized and rejected over quality issues? If so, bring in a quality expert. If not, then don’t. Verdict: It depends on the project.
  • Communications – some of this is helpful, e.g. developing a so-called zipper plan that defines who within the project team communicates with counterparts in the client’s organization. But all this stuff about “engaging stakeholders”? Again, there’s no evidence that any of it enhances the odds of successful project completion. Verdict: Invalid.
  • Human Resources almost never resides within the project team. Verdict: Invalid.
  • Ditto with procurement. Indeed, since procurement so obviously falls within the realm of asset management, I would argue it never had a specific PM role in the first place. Verdict: Invalid.

This streamlined version of an earlier PMBOK Guide® table of contents will serve as the basis for the Hatfield Management Maturity Model, with more particulars coming in next week’s post.
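The verdicts above boil down to a small lookup. Restated as a data structure (the keys and strings simply transcribe the bullets):

```python
# The eight sections of the earlier PMBOK Guide(R) edition, with the verdicts above.
HM3_VERDICTS = {
    "Scope Management":    "Valid",
    "Cost Management":     "Valid",
    "Schedule Management": "Valid",
    "Risk Management":     "Invalid",
    "Quality":             "It depends on the project",
    "Communications":      "Invalid",
    "Human Resources":     "Invalid",
    "Procurement":         "Invalid",
}

# The streamlined basis for the HM3: only the areas that survive scrutiny.
hm3_basis = [area for area, verdict in HM3_VERDICTS.items() if verdict == "Valid"]
print(hm3_basis)  # -> ['Scope Management', 'Cost Management', 'Schedule Management']
```

The surviving three areas are, of course, the “triple constraint.”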

Posted on: September 12, 2016 10:18 PM

OPM3 As A Tool Of Revenge

First, A Little History

I think the proliferation of capability maturity models (CMMs) can be traced to Carnegie Mellon University’s Software Engineering Institute (SEI), which performed a study on how software companies mature in their ability to successfully produce computer programs. The analysis became part of the book Managing the Software Process by Watts Humphrey[i], published in 1989, which divided software engineering firms into five levels:

  • Level 1 (“Initial”), where everybody is basically doing their own thing (or nothing) with respect to the sought-after capability,
  • Level 2 (“Repeatable”), which is very basic but standardized, so the entire organization is performing at the same level of expertise,
  • Level 3 (“Defined”), where, even if the heroes who got the organization to this point are hit by the proverbial beer truck, the organization’s capability level doesn’t decline, because there are sufficient procedures and training in place to perpetuate that level of expertise,
  • Level 4 (“Managed”) organizations are good enough to export their expertise to other organizations, and
  • Level 5 (“Optimized”) organizations are so good at what they do that they regularly discover solutions to long-standing problems.
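The five levels form an ordered scale, which can be captured as a simple type; a sketch, with one-line summaries paraphrasing the bullets above:

```python
from enum import IntEnum

class CMMLevel(IntEnum):
    """SEI capability maturity levels, as summarized in the bullets above."""
    INITIAL    = 1  # everybody doing their own thing (or nothing)
    REPEATABLE = 2  # basic but standardized across the whole organization
    DEFINED    = 3  # procedures and training outlive the original heroes
    MANAGED    = 4  # good enough to export expertise to other organizations
    OPTIMIZED  = 5  # routinely discovers solutions to long-standing problems

# IntEnum makes the ordering explicit: maturity only increases level by level.
assert CMMLevel.DEFINED > CMMLevel.REPEATABLE
```

Using an ordered type rather than a plain list reflects the model’s central claim: each level presupposes the ones beneath it.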

Have We Seen This Before?

I was struck by how similar these “Levels” were to Tuckman’s stages of group development[ii], published in 1965, which holds that project teams go through a four-stage progression: Forming – Storming – Norming – Performing. If the SEI model assumes the teams have already formed – they did, after all, examine existing organizations – then Level 1 equals Storming, Levels 2 and 3 represent Norming, and Levels 4 and 5 are marked by their superior performance. If we’re talking about assessing organizations by the performance of their personnel resources by categories or Levels, then the two structures, in my opinion, are remarkably similar. SEI simply adds more detail. Don’t misunderstand – I’m not accusing anybody of plagiarism, or idea-lifting. It’s just that the practice of defining categories of organizations by certain criteria (“There are two kinds of people – those who divide people into two categories, and those who don’t”) and then using those criteria as a measuring standard isn’t new.
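The correspondence can be written out directly. To be clear, this mapping is my own reading of the two structures, not anything SEI or Tuckman published:

```python
# SEI level -> Tuckman stage, per the comparison above.
# (Forming is assumed already complete, since SEI examined existing organizations.)
SEI_TO_TUCKMAN = {
    1: "Storming",    # Initial: everyone pulling in their own direction
    2: "Norming",     # Repeatable: basic shared standards emerge
    3: "Norming",     # Defined: the norms outlive the individuals
    4: "Performing",  # Managed: expertise good enough to export
    5: "Performing",  # Optimized: routinely solving long-standing problems
}

assert SEI_TO_TUCKMAN[1] == "Storming"
```

Five levels collapse onto three of Tuckman’s four stages, which is the sense in which SEI “simply adds more detail.”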

If this is the case, then consider some of the implications. Currently, most college-level business schools teach that the point of all management is to “maximize shareholder wealth.” Agree or disagree with me that this is clearly silly; either way, there can be no valid argument that this approach is anything other than at odds with the Project Management raison d’être, which is to manage work in such a way as to satisfy the customer’s expectations/parameters of scope, cost, and schedule. But that never stopped the asset management crowd from trying to tell the PM aficionados how to do their jobs, based on the analysis they derive from data in the general ledger. There are actually several books on how to calculate the return on investment (ROI), a favorite analysis technique of the general ledger geeks, of setting up a Project Management Office. For generations the asset managers’ tools have been intruding into project management space, where they most certainly do not belong.

Take That, Finance and Accounting!

But along comes the Organizational Project Management Maturity Model (OPM3). And what does it do? It essentially points out that corporations that have many projects feeding into several programs becoming part of a portfolio of work can have optimal (and, by extension, sub-optimal) organizational structures. The step-by-step ideological payback follows this path:

  • Organizations are made up of resources.
  • Resource management is not Project Management.
  • PM, however, provides the metric by which organizations succeed or fail in portfolio management space.
  • Basically, we PM types have turned the tables on the Asset Managers, by redefining (correctly) how organizations achieve success (or are viewed as failures), and on our terms.

This form of epistemological payback was a while in coming, but it’s said that revenge is a dish best served cold.

Lagniappe

My next book is coming out later this month. It’s entitled The Unavoidable Hierarchy; Who’s Who In Your Organization, And Why, and it’s available here. It’s about how people tend to fall into specific roles, or ranks, in the organizations in which they participate, and an analysis of the tactics used to advance within those hierarchies. My former PMNetwork columnist colleagues Neal Whitten and Bud Baker both gave great reviews of the manuscript, leading me to believe it might be worth its price (at least on Kindle!).


[i] Capability Maturity Model. (2016, August 28). In Wikipedia, The Free Encyclopedia. Retrieved 16:55, September 5, 2016, from https://en.wikipedia.org/w/index.php?title=Capability_Maturity_Model&oldid=736597555

[ii] Tuckman's stages of group development. (2016, July 22). In Wikipedia, The Free Encyclopedia. Retrieved 17:08, September 5, 2016, from https://en.wikipedia.org/w/index.php?title=Tuckman%27s_stages_of_group_development&oldid=731055786

Posted on: September 05, 2016 09:26 PM

Eeek! There’s A Mouse On My Cutting Board!

A German comedy show featured a skit in which a woman and her elderly father are in the kitchen when she asks him, “So, Papa, how did you like the iPad we got you?” The dialogue is in German, but it’s clear that he’s been using the iPad as a cutting board: he uses a knife to scrape sliced vegetables off of the iPad into the container in front of his daughter, then proceeds to rinse the device off and put it into the dishwasher. The look on the daughter’s face is priceless, as she reacts to a comically bad misuse of the device.

Speaking of comically bad misuses of devices, on the Wikipedia page for risk management[i], there’s a graphic of the International Space Station, with the areas most at risk from debris impact highlighted in red, and text underneath stating “Example of risk management.”  Directly beneath that is the topic classification, “Business administration.”  I have to admit, it’s a brilliant ploy, aligning an engineering function with a business administration one in the risk management aficionados’ desperate attempts at glomming on to legitimacy.

But it’s another example of comically misusing a tool. Here’s why.

To an engineer, “risk management” has nothing to do with management. It deals with the characteristics of materials, which are knowable. Specific designs using specific materials under various conditions can be analyzed, and their performance accurately predicted. Selecting steel over aluminum in building, say, those areas of the International Space Station most likely to encounter debris impact is the result of quantifiable, repeatable analysis, based on the likelihood of debris striking that part of the station, the added costs of lifting heavier components into orbit, the ability of certain designs and materials to absorb impact, etc., etc.
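A toy calculation shows why the engineering version is tractable: every input is measurable, so competing designs reduce to arithmetic. All of the numbers below are hypothetical, chosen purely to illustrate the shape of the trade-off:

```python
def expected_cost(impact_prob, failure_cost, shield_mass_kg, launch_cost_per_kg):
    """Expected mission cost of one shielding option (every input is measurable)."""
    return impact_prob * failure_cost + shield_mass_kg * launch_cost_per_kg

# Hypothetical options: heavier shielding lowers the chance of a penetrating hit
# but raises the cost of lifting mass into orbit.
aluminum = expected_cost(impact_prob=0.02,  failure_cost=500e6,
                         shield_mass_kg=200, launch_cost_per_kg=20e3)
steel    = expected_cost(impact_prob=0.005, failure_cost=500e6,
                         shield_mass_kg=400, launch_cost_per_kg=20e3)

print(f"aluminum: ${aluminum/1e6:.1f}M, steel: ${steel/1e6:.1f}M")
# -> aluminum: $14.0M, steel: $10.5M
```

The comparison is quantifiable and repeatable precisely because impact probabilities, material performance, and launch costs are properties of things, not of people.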

Not so managing a project. Project teams are composed of people, whose characteristics most certainly cannot be precisely quantified, much less predicted under specific circumstances. As I pointed out in last week’s blog, even in iterations of the Ultimatum Game, where the choices available to the participants were limited to one decision each, the analysts completely missed the strategies the players actually employed, instead predicting the tactic that almost never worked. With that being the case, what are the chances (get it, risk managers?) that, with the number of available decisions open to members of the project team, customers, and other stakeholders, the likely outcomes can be captured through statistical analysis?
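A minimal simulation of the Ultimatum Game makes the point. The rejection thresholds below are an assumption standing in for the behavioral findings alluded to above (people reject offers they consider insultingly low), not actual experimental data:

```python
import random

def proposer_payoff(offer_share, responders, pot=100):
    """Average payoff to a proposer offering this share, against the given responders."""
    payoffs = []
    for threshold in responders:
        accepted = offer_share >= threshold  # responder rejects insultingly low offers
        payoffs.append((1 - offer_share) * pot if accepted else 0)
    return sum(payoffs) / len(payoffs)

random.seed(1)
# Assumed behavioral trait: responders reject offers below roughly 20-40% of the pot.
responders = [random.uniform(0.2, 0.4) for _ in range(1000)]

rational_offer = proposer_payoff(0.01, responders)  # the textbook minimal offer
fair_offer     = proposer_payoff(0.50, responders)  # the split that actually works

assert fair_offer > rational_offer
```

The “rational” minimal offer, the one classical analysis predicted, earns nothing against these responders, while the even split pays reliably: with one decision per player the analysts got it wrong, so the prospects for statistically modeling a project team’s thousands of decisions are dim indeed.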

Oh, risk management isn’t about calculating likely outcomes? Then what is it about, exactly? That’s one of the most frustrating things about debating this topic with its true believers. They start with the whole business about how risk management is about “anything that impacts the project, positive or negative,” but then fall suddenly silent when asked to produce any report, based on their analysis techniques, that actually helps inform project managers’ decision-making. “The odds are X that something bad will happen to your project” does not help your typical PM, and modifying that to “the odds are X.N% that occurrence Z will happen to your project” isn’t any better.

What’s clear to me is that the risk managers took a concept that was perfectly legitimate in the engineering realm, and transplanted it into the project management arena, where it most certainly does not apply. When challenged on this, the risk managers I know point to RM’s legitimate uses, which are not in Project Management. But – again – people are neither structures nor materials! Need more evidence? Consider that crucible of PM, the Agile/Scrum Project Team. One of the many blessings that Agile/Scrum brought to the management science table is that it burned away some of the trappings of traditional PM that tended to be so overdone they bordered on being superfluous (e.g., highly formal change control techniques). So, you Agile/Scrum Project Team members – do you even have a risk register? No? Could it be because such teams are based on the decisions and choices made by the team members themselves? They behave in ways that cannot be predicted, nor quantified, and those behaviors are the key determinants of project success. The sleight-of-hand involved in conflating engineering uses of RM with its alleged role in project management, as intellectually vacuous as it is, has taken in so many in the business realm that risk management is a multi-billion-dollar industry worldwide.

But it’s completely invalid. And, if you believe to the contrary, my only advice is to closely inspect any cutting boards your children gift you.

 


[i] Risk management. (2016, August 24). In Wikipedia, The Free Encyclopedia. Retrieved 01:23, August 28, 2016, from https://en.wikipedia.org/w/index.php?title=Risk_management&oldid=736035884

Posted on: August 29, 2016 10:20 PM