No self-respecting management science blog with the term “game theory” in its title can go long without using that favorite analytic tool, the Payoff Grid. What can a Payoff Grid add to the Sustainability discussion (ProjectManagement.com’s theme for April)? Plenty, as I will demonstrate.
GTIM Nation is familiar with a couple of themes I would like to revisit, one of which is my personal implementation of the Pareto Principle: the 80th-percentile best managers who have access to only 20% of the information needed to inform a given decision will be consistently out-performed by the 20th-percentile worst managers who have 80% of the information so needed. If we add in another theme I’m in the habit of addressing – that of so-called PM experts joining guidance-generating organizations that churn out a whole bunch of, well, nonsense – we have the two main ingredients needed to set up our Payoff Grid.
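The 80/20 claim above can be illustrated with a toy model. To be clear, every number and weighting here is my own illustrative assumption, not from any study: the sketch simply treats a decision’s outcome as driven partly by the available information (which makes the right call apparent to almost anyone) and partly by discounted skill on the portion of the problem the manager can’t see.

```python
def decision_quality(skill, info_fraction, skill_discount=0.5):
    """Toy model (illustrative assumptions only): where information
    exists, the right call is apparent to any manager; where it is
    missing, even a skilled manager is partly guessing, so skill
    only counts at a discount."""
    informed = info_fraction                           # the data makes the call
    guessed = (1 - info_fraction) * skill * skill_discount
    return informed + guessed

# 80th-percentile manager with 20% of the needed information...
good_mgr_poor_info = decision_quality(skill=0.8, info_fraction=0.2)
# ...versus a 20th-percentile manager with 80% of the information.
poor_mgr_good_info = decision_quality(skill=0.2, info_fraction=0.8)

print(good_mgr_poor_info)  # 0.2 + 0.8*0.8*0.5 = 0.52
print(poor_mgr_good_info)  # 0.8 + 0.2*0.2*0.5 = 0.82
```

Under these (admittedly loaded) assumptions, the weak manager with good information wins comfortably, which is the whole point: missing information cannot be fully compensated by talent.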
On one axis, let’s put managerial expertise, with one extreme represented by managers who are either inept or possess personality traits that make them extremely poor leaders (or, sadly, both), and the other by extremely capable managers, blessed with both an advanced technical understanding of PM and of the scope being pursued by their team(s). On the second axis we’ll use as the gradient the nature of the management information systems feeding them what they need to know in order to make the optimal decisions for the attainment of their program/project/portfolio goals. Such a Payoff Grid would look like this:
Table 1. Sustainability Payoff Grid

                   | Poor PM Information | Advanced PM Information
Poor Manager       | Scenario C          | Scenario A
Advanced Manager   | Scenario D          | Scenario B
Before reviewing the Scenarios themselves, here is my short list of the attributes of an advanced Project Management Information System: the information it generates must be relevant to the decisions the PM actually faces, accurate, and timely.
Based on this standard, GTIM Nation can see why I have so much heartburn with modern risk management theory, as practiced. Having someone – anyone – tell you, say, that there’s a 67% chance that you will experience a weather delay will almost never be accurate or relevant.
So, on to the individual scenarios.
Scenario C exists far more often than it ought and, as I pointed out in my two previous blogs, some of the blame rests with us practitioners. If the people setting up the baselines are far more interested in the risk management plan than in the critical path schedule network or the Earned Value Management system, then the PM information systems created will be virtually worthless, feeding a poor PM to boot. It will be next to impossible for this Scenario to result in a sustainable system.
Scenario D will also not result in a sustainable system, but for very different reasons from those underpinning Scenario C. An advanced PM who is successful not because of, but in spite of the deficiencies of her PM Information will lose patience with those who designed and set up the baselines, and will select very different personnel/systems in the future. So, no sustainability here.
Scenario A improves the Sustainability odds considerably, due to Hatfield’s application of the Pareto Principle. Oh, sure, there’s a chance that the poor manager will not recognize that his success rate is predicated far more on his ability to make informed decisions than on any innate ability; but, for the most part, he will not only recognize the essential nature of the Earned Value and Critical Path information he has consumed, but will insist that these relevant information streams be set up in all future projects he works. Sustainability achieved!
Finally, Scenario B holds the best chance that a given PM maturity is sustainable. Successful PMs tend to attract more work, and are additionally better attuned to the relative merits of the various management information systems available for any given budget. The odds of a consistently successful PM basing cost performance decisions solely on, say, the information coming out of the General Ledger are remote indeed. These PMs will be able to discern which information system characteristics are vital to implementing their strategies, and which have nothing to do with an enhanced ability to advance them. The vital ones will remain, or become “sustained,” while the others fall by the wayside.
Not to put too fine a point on it, but, as the Payoff Grid shows, a relevant, accurate, and timely PM information system is far more likely to become Sustainable, even in the event of being paired with a poor manager, whereas a poor PMIS is pretty much doomed, no matter the level of PM expertise it’s supporting.
Is your particular system going to be Sustainable? Check Table 1, and I’ll get back to you.
In my previous blog posts on Sustainability (ProjectManagement.com’s theme for April), I addressed how to overcome barriers to achieving some level of sustained PM capability, from pressures both internal and external to the team. This week I thought I’d take a step back, and ask: why are there pressures working against advancement of Project Management capability in the first place, not to mention energy working against its sustainment in the event your organization even gets to an advanced PM state? I mean, think about it: are there pressures against more advanced engineering precision? More advanced talent recruitment? Better communications? When such pressures are discovered, they’re almost always immediately recognized as clear symptoms of some sort of business model pathology. But not Project Management, no siree. Engineers, HR recruiters, and even communications specialists can (and do) rail against certain aspects of an advanced PM capability, and can do so with relative impunity. Why is that?
Could one reason be that they have a point, that there are several aspects of what is considered to be a more advanced PM capability that are truly wastes of time and money? I sort-of touched on this last week, but laid much of the we’re-doing-this-to-ourselves blame on nefarious but unnamed guidance-generating organizations, and much of the blame does, indeed, belong with them. But in many cases our own organizations aren’t being forced to take this guidance at face value. We PM-types could, for example, perform an actual management science-based evaluation of the management information streams within our PMOs, and discontinue those that don’t truly add value.
The problem with performing an experiment on the efficacy of competing management information streams is that, even if the results point to a clear winner, the losing MIS stream’s advocates can always dispute the findings (usually with some sort of word salad), or ignore them altogether. However, bets are a little harder to ignore. So, I’m proposing a series of bets, starting with Earned Value versus risk management. Here’s how it could work.
For you PMO Directors out there, pick a medium-to-large project within your organization’s portfolio. Call a meeting with your Project Controls Specialists and your risk managers, and give them the following task: given a list of the Control Accounts (or even Work Packages) that are at least 20% complete within the project, forecast which tasks will overrun their original budgets or go past their original baseline dates, and which will not. Take the lists, compare them to each other, and archive the areas of agreement (for computing overall accuracy rates). You will be left with a list of tasks the EV analysts say will overrun/come in late, and a different set of tasks that the risk analysts predict will do the same. Then, when those tasks are actually complete, compare the actual late/overrun list with each specialist’s predictions.
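Scoring the wager is straightforward. Here is a minimal sketch (the task names, prediction sets, and outcomes are all hypothetical example data) of comparing each specialist’s overrun/late predictions against the eventual actuals:

```python
# Hypothetical data: which Control Accounts each specialist predicted
# would overrun or finish late, and which ones actually did.
ev_predicted    = {"CA-101", "CA-104", "CA-107"}
risk_predicted  = {"CA-101", "CA-102", "CA-105", "CA-107"}
actual_overruns = {"CA-101", "CA-104", "CA-108"}
all_tasks = {f"CA-10{i}" for i in range(10)}  # CA-100 .. CA-109

def accuracy(predicted, actual, universe):
    """Fraction of tasks classified correctly: a call is right if the
    task was flagged and overran, or was not flagged and did not."""
    correct = sum((task in predicted) == (task in actual) for task in universe)
    return correct / len(universe)

print(f"EV analyst:   {accuracy(ev_predicted, actual_overruns, all_tasks):.0%}")
print(f"Risk analyst: {accuracy(risk_predicted, actual_overruns, all_tasks):.0%}")
```

With this made-up data the EV analyst scores 80% and the risk analyst 50%; the real-world numbers, of course, are exactly what the bet is meant to discover.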
But, before we get all the way to comparing the predictive capabilities of risk management systems with basic Earned Value, I’d like to point out the data set each specialist will need to make their predictions. The EV specialist will need each task’s cumulative Budgeted Cost of Work Performed (its Earned Value), its cumulative actual costs, and its baseline budget and finish dates…
…and that’s it. Conversely, the risk management specialist will need a list of each task’s scope alternatives, an estimate of each alternative’s odds of occurring, and an estimate of each alternative’s cost and duration impacts.
The comparative ease with which the EV specialist can collect the objective data needed to perform her analysis compared to the difficulty involved in gathering the risk manager’s almost completely subjective data should, all by itself, be a strong indicator as to which PM analysis technique is going to win this little wager. I mean, seriously, no SME in the world is going to be able to reliably provide a definitive list of the tasks’ scopes’ alternatives, along with anything resembling an accurate estimate of those alternatives’ duration and costs, let alone odds of occurring. And feeding all that subjective data into either a decision tree analysis structure, or a Monte Carlo simulation, will not overcome those problems.
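To be concrete about that data burden: a Monte Carlo cost simulation needs, for every single task, a subjective distribution over the alternatives. A minimal sketch (task names, odds, and cost figures are all hypothetical) of what the risk analyst’s pipeline entails:

```python
import random

# Hypothetical subjective inputs the risk analyst must supply for
# EVERY task: each scope alternative's odds and its cost impact.
tasks = {
    "Sitework":   [(0.6, 100_000), (0.3, 130_000), (0.1, 180_000)],
    "Foundation": [(0.7, 250_000), (0.2, 300_000), (0.1, 400_000)],
}

def simulate_total_cost(task_alternatives, trials=10_000, seed=1):
    """Monte Carlo: each trial draws one alternative per task according
    to its subjective odds, then sums the resulting costs."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0
        for alternatives in task_alternatives.values():
            odds = [p for p, _ in alternatives]
            costs = [c for _, c in alternatives]
            total += rng.choices(costs, weights=odds)[0]
        totals.append(total)
    return totals

totals = sorted(simulate_total_cost(tasks))
print(f"P50 total cost: {totals[len(totals) // 2]:,}")
```

The simulation machinery itself is trivial; the hard (and, per the argument above, unreliable) part is every one of those subjective odds and impact estimates that must be fed into it.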
However, since I know beforehand who is going to win this bet, I’ll be magnanimous. We’ll choose not to compare the data collection minutes needed by the EV specialist with the hours needed by the risk manager, and instead base the wager entirely on performance results. I’m extremely confident of the outcome; however, if a member of GTIM Nation is in a position to actually perform the experiment, I’d love to hear the real-world results.
So, I’ll return to the question in the title. If your PMO includes a significant risk management component, and that particular information stream is easily out-performed by a far simpler, cheaper method, it certainly raises the question: Does any PMO that includes a significant but irrelevant management component deserve to be sustained?
Official GTIM Nation Members are aware of an axiom that I take as valid, which is Affordability, Availability, Quality: pick any two. Specifically, a team that is affordable and available won’t be high-quality; a team that is affordable and high-quality won’t be available when you need it; and a team that is available and high-quality won’t be affordable.
I’m reminded of this axiom due to a recognition of a systemic barrier to sustaining an advanced (or even adequate) level of PM capability maturity, a barrier which stems from the very nature of organizations that let contracts and those that write proposals for them: the winner tends to be the lowest bidder. Circling back to the axiom that opened this blog, this tendency will pretty much automatically settle the Affordability aspect of the triple constraint. Organizations that maintain a highly mature, or quality Project/Program Management Office, with personnel who can be made available within a couple of weeks of the contract winner announcement, are not likely to be cheap, putting them at a distinct disadvantage in such competitions.
This leaves two alternatives: the Quality PM teams are likely to be late for the start of the project, while the Available PM teams might not be aware that a Cost Variance is definitely NOT a comparison of budgets to actual costs (keep this example in mind – it’ll come up a little later). Now, I’m sure there are lots of times when a winning Project Team has some latitude in hitting the ground running, and can have a robust Performance Measurement Baseline up and reviewable very soon after the project has officially started. But I’m also pretty sure this isn’t the norm. Customers tend to expect the level of quality they paid for to be displayed fairly early in the project, otherwise they wouldn’t put all those infernal requirements for the resumes of “key personnel” in the Request for Proposal.
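Since that Cost Variance example will matter again, it is worth spelling out: in Earned Value terms, CV compares the budgeted cost of the work actually performed (the Earned Value) against actual costs, not budgets against actuals. A quick sketch with made-up numbers:

```python
def cost_variance(bcwp, acwp):
    """Earned Value Cost Variance: CV = BCWP - ACWP.
    Negative means the work performed cost more than it was
    budgeted to cost -- a true overrun."""
    return bcwp - acwp

# Hypothetical task status at a given point in time:
bcws = 60_000   # budgeted cost of work SCHEDULED to date
bcwp = 50_000   # budgeted cost of work actually PERFORMED (Earned Value)
acwp = 65_000   # actual cost of that performed work

print(cost_variance(bcwp, acwp))  # -15000: a genuine cost overrun
# The naive "budget vs. actuals" comparison conflates cost and
# schedule, and here understates the problem:
print(bcws - acwp)                # -5000
```

A PM team that reports the second number as its “Cost Variance” is exactly the Available-but-not-Quality team the axiom warns about.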
So, where does all of this leave us? If we assume as true the aforementioned axiom, and fold in the fact that most major projects are let to the lowest bidder, we have a systemic force that’s set up against the very thing that PM-aware organizations are trying to establish and sustain: an advanced capability in Project Management. The deck, as they say, is stacked against us. That being the case, we PM-types can certainly expect that those who claim to be within our ranks are not actually making things worse, right? Right?
Well, no, actually. Consider that we’re talking here about a capacity to sustain some level of PM maturity, and the Affordability card has already been played. The winning contractor is attempting to assign the talented PM peeps to the new project as soon as possible, where they can set up the necessary project management information systems (PMISs) that will provide the data the actual PM needs to make informed decisions. Let’s say, for the sake of argument, that the talent is actually available, and on-site at the beginning of the project. They set up the baselines in record time, and are ready to pull status. What’s standing in their way? The list includes:
What we have here is a situation where, even in those instances where the winning bidder (remember – lowest cost!) has a quality team available, there are forces internal to the PM community that seek to add irrelevant standards of what they assert to be “quality” project management, making the third aspect of service/product delivery that much more unattainable, and, therefore, unsustainable.
So, to answer the question in the title, yes, the sustainability game is rigged against us. The kicker is that, through those nefarious guidance-generating orgs, we’re kinda doing it to ourselves.
When we’re talking Sustainability (ProjectManagement.com’s theme for April), my first reaction is to recall Carnegie Mellon University’s Software Engineering Institute’s (SEI’s) original Capability Maturity Model®. This model stipulated[i] five “Levels,” with my interpretation of them as follows:
Okay, let’s get back to Level 3. Since almost every organization pursuing a more advanced PM capability is mired somewhere in Levels 1-3, Level 3 becomes something of the brass ring in the Sustainability Sweepstakes. In fact, whenever you hear some new executive assert that he or she will take the organization from Level 1 to Level 4 within a certain span of time (why does eighteen months keep coming up?), GTIM Nation should take it as an automatic sign of cluelessness.
Ennyhoo, we’re trying to get to Level 3, and stay there. Don’t get me wrong – you can get some wunderkinds in the fold, and when they start to export your brilliance everybody has a good day. But that’s kind of rare, even among reputable PM consulting firms. Most orgs would be thrilled just to get to the point where everybody’s pretty much doing the same thing in PM expertise space, and top execs get to host Project Reviews where half the room is speaking intelligently about To-Complete Performance Indices without the other half of the room looking like German Shepherds at a physics conference. So, what’s keeping you from being the champion that gets your organization to Level 3? Some pretty common organizational behavior pathologies, particularly ones that tend to specifically blow up PM initiatives, that’s what. Here’s a partial list.
But sustainability barriers don’t have to be valid in order for them to be effective.
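As an aside, for the half of the room that does speak To-Complete Performance Index: TCPI is the cost efficiency the team must achieve on the remaining work to hit a target, conventionally computed against the Budget At Completion as (BAC − EV) / (BAC − AC). A quick sketch with hypothetical numbers:

```python
def tcpi(bac, ev, ac):
    """To-Complete Performance Index against the Budget At Completion:
    budgeted work remaining divided by budget remaining. Values above
    1.0 mean the team must perform MORE efficiently than planned from
    here on to finish within budget."""
    return (bac - ev) / (bac - ac)

# Hypothetical project: $1M budget, $400k earned, $500k spent.
print(round(tcpi(bac=1_000_000, ev=400_000, ac=500_000), 2))  # 1.2
```

A TCPI of 1.2 is the kind of number that should make the whole room, German Shepherds included, sit up.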
[i] The Capability Maturity Model: Guidelines for Improving the Software Process, Carnegie Mellon University Software Engineering Institute, Addison-Wesley, 1995, p. 16.
I get it. I really do. Even the best Project Managers can get to a point where they exhaust their capacity to adapt to new circumstances, organizations, locations, technology advances … the list of things that force us to modify our strategies and tactics if we are to succeed goes on and on. It’s easy to look back on previous successes, and desire to repeat that success by, well, repeating those managerial approaches, confident in a repeat of the outcome.
But that’s the catch, isn’t it? By definition, every project is unique, sometimes dramatically so. And there’s really no way of quantifying which PMs were successful due to re-loading tried-and-true strategies, and which were successful by significantly deviating from those very same strategies. Actually, it’s fairly difficult even to differentiate the consistently successful PMs from the repeat failures, given the wide variety of cover-the-backside techniques available, so coming up with a litmus test for telling the successful ones clinging to traditional strategies from the radically innovative ones is probably impossible.
I know this from personal experience. I had been assigned to set up the cost/schedule performance systems for this one major project where the PM insisted, at the very first project team meeting no less, that he wanted a “swim chart” as his primary method for tracking the project.
“You know, a swim chart!”
“Umm, do you have an example?”
With a sigh and a roll of the eyes, he went to the white board, where he drew a comically crude PERT Chart, with the tasks arranged by organization down the X axis.
“Oh, okay, I see what you want. We’ll need to begin with a Work Breakdown Structure.”
“A what?”
“A Work Breakdown Structure. It’s the way we decompose the scope into the tasks and activities that we can then sort by performing organization.”
“Look, I don’t want to hear about any of that. I just want a swim chart! Can’t you just give me that?” (Point of fact: when a manager starts repeating the word “just,” it’s a sure sign that they don’t have a clue about the difficulty associated with the fulfillment of their demands.)
“Not without some groundwork first,” I replied, as meekly as possible.
He called my boss and had me removed from the project.
In retrospect, I understand his frustration. It was clear that, on some previous project where he was involved, this report had been generated on a regular basis, and much of the decision-making had been predicated on it. This PM probably didn’t realize that the org-sorted PERT Chart wasn’t simply a cartoon graphic, or the output of a singularly immature system. I just happened to be the unfortunate project controller who broke the news to him that he couldn’t “just” have one quick and easy.
This why-can’t-it-be-just-like-my-last-project phenomenon occurred again when my boss’s boss, a fairly seasoned executive, wanted to initiate a new action item tracking system, and wanted me to head up the implementation effort. With the software vendor’s reps on the conference call speaker phone, this exec was very ready to spend some serious coin to make it happen. Like an idiot, I had to ask a question.
“I see you have categories for ‘Action Items,’ ‘Issues,’ ‘Concerns,’ ‘Trends,’ etcetera. Do you have a precise definition for each of those categories? I mean, what’s the exact difference between an ‘action item’ and a ‘concern’?”
“We let the individual users define those.”
“What difference does that make?” stormed the exec.
“Well, it’s just that a given ‘issue’ could show up in a wide variety of systems already in place, or even in multiple categories within the same system. If the same action item shows up in multiple places, and the particulars aren’t in agreement, how do you know which one is correct?” I then addressed another question to the software vendors. “Does this system reach into other tracking systems?”
“No, it doesn’t.”
“Then, if one item does show up in multiple iterations, and everybody’s not in complete agreement, how will we know which system has the more reliable information?”
I could see the exec getting visibly upset.
“Look, if you have your heart set on this system, I don’t want to be the barrier. I’ll work whatever implementation effort you choose.”
The exec called my boss and had me removed from the project.
Along about this time my capacity for sympathy for managers desperate to recreate previous, comfortable environs was drying up. This exec clearly had no concept of how valid Management Information Systems are set up, or function, and had fallen prey to a slick and shiny software package that had made some inchoate promises along the lines of how it could provide an easily-understandable report that told him what he needed to pay attention to that day, or that week, and this pesky PMP® was throwing the cold water of reality on the object of his fascination.
Again, I get it. Having to respond constantly to situations that are only marginally analogous to ones where we were confident we had the optimal solution gets old really quickly. But, that having been said, how many parties can a manager attend, insisting each time that the hostess change everything to match the parties the manager had previously attended, before the invitations stop coming?