PMBOK® Guide for the Trenches, Part 4: Risk

From the Voices on Project Management Blog
Voices on Project Management offers insights, tips, advice and personal stories from project managers in different regions and industries. The goal is to get you thinking, and spark a discussion. So, if you read something that you agree with--or even disagree with--leave a comment.


Categories: Risk Management

If PMI ever invites me to rewrite the risk section of A Guide to the Project Management Body of Knowledge (PMBOK® Guide), I think there are two things I would change.
The first deals with the inclusion of "upside risk," or opportunity, as part and parcel of risk management. I don't think it belongs. As my exhibit A, I cite the Oxford English Dictionary definition of risk: (1) a situation involving exposure to danger; (2) the possibility that something unpleasant will happen; (3) a person or thing causing a risk or regarded in relation to risk: a fire risk.

As author Mark Twain said, "Beware the man who would win an argument at the expense of language." Beyond the semantics, though, let's consider the three most prevalent ways of analyzing risk and see if they apply in managing a proposal backlog (a listing of an organization's outstanding and upcoming job bids -- or opportunities).

The simplest (and crudest) risk analysis technique is classification, in which you basically go through your work breakdown structure at whatever level and assign high-, medium- and low-risk classifications to the tasks. Associate each classification with a percentage, e.g., high may mean 50 percent, medium 25 percent and low 5 percent.

Multiply the percentages by the original budget/time estimate, and you've done a risk analysis (of sorts). Try this with the proposal backlog, and you'll inevitably look astonishingly inept.
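The classification arithmetic described above can be sketched in a few lines of Python. The task names, budgets, and percentage mapping below are illustrative assumptions, not figures from the post:

```python
# Crude "classification" risk analysis: assign each WBS task a high/medium/low
# class, map the class to a percentage, and multiply by the task's budget.
# Task names, budgets, and the percentage mapping are illustrative assumptions.

RISK_FACTORS = {"high": 0.50, "medium": 0.25, "low": 0.05}

tasks = [
    {"name": "Design", "budget": 100_000, "risk": "high"},
    {"name": "Build",  "budget": 250_000, "risk": "medium"},
    {"name": "Test",   "budget": 80_000,  "risk": "low"},
]

def contingency(tasks):
    """Total contingency: sum of budget x risk factor across tasks."""
    return sum(t["budget"] * RISK_FACTORS[t["risk"]] for t in tasks)

print(f"contingency: {contingency(tasks):,.0f}")  # 50,000 + 62,500 + 4,000
```

Applied to a proposal backlog, this multiplies each bid's value by a win-probability bucket, which is exactly the mismatch the post is pointing at.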
Then there's decision tree analysis. For each activity, assign alternative endings, with their impacts and odds of occurrence. Unfortunately for "opportunity" management, the only two possible outcomes of a submitted proposal are that you either win the work or you don't. Data on the gray middle is pretty useless when there's no gray middle.
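For illustration, a minimal expected-monetary-value calculation over decision-tree branches shows the contrast: a proposal has exactly two branches, while a project activity can carry a graded range of outcomes. All probabilities and impacts below are invented for the sketch:

```python
# Expected monetary value (EMV) across a decision tree's outcome branches.
# A submitted proposal has only two branches (win / lose); a project
# activity can have several graded outcomes. All numbers are illustrative.

def emv(branches):
    """branches: list of (probability, monetary_impact) pairs summing to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * impact for p, impact in branches)

proposal = [(0.3, 500_000), (0.7, 0)]                      # binary: no gray middle
activity = [(0.2, -40_000), (0.5, -10_000), (0.3, 5_000)]  # graded outcomes

print(f"proposal EMV: {emv(proposal):,.0f}")
print(f"activity EMV: {emv(activity):,.0f}")
```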

Finally, Monte Carlo analysis is essentially a decision tree on steroids, with lots of statistical chicanery thrown in.
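A minimal sketch of what a Monte Carlo schedule analysis does, assuming three serial tasks with triangular (min, most likely, max) duration estimates; every number here is invented for the example:

```python
# Minimal Monte Carlo schedule sketch: sample each task's duration from a
# triangular (min, most likely, max) distribution and sum along a serial
# path. The three tasks and their day estimates are invented for the example.
import random

tasks = [  # (min, most_likely, max) duration in days
    (8, 10, 15),
    (4, 5, 9),
    (10, 12, 20),
]

def simulate(tasks, trials=10_000, seed=42):
    rng = random.Random(seed)
    return [
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    ]

totals = sorted(simulate(tasks))
p80 = totals[int(0.8 * len(totals))]  # 80th-percentile completion time
print(f"80th-percentile finish: {p80:.1f} days")
```

Real tools (Risk+, @RISK and the like) run this over a full schedule network rather than a serial path, but the sampling idea is the same.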

My second objection has to do with the use of risk management after the cost and schedule baselines have been set. I agree that prior to the finalization of the baselines, risk analysis is crucial to identifying and quantifying cost and schedule contingency amounts. The risk analysis can lead to informed decisions on how much and what type of insurance to buy, and what sort of alternative plans should be in place if a contingency event occurs.

But once the baselines are final, persisting in risk management strikes me as institutional worrying expressed in mind-numbing statistical jargon. To what end? Unless the response to a contingency event (in-scope, uncosted) would differ significantly from how the project team would have reacted anyway, what difference does it make whether it was anticipated?

I'm looking forward to the responses to this, and not necessarily from just the risk management aficionados.

Update: Risk management experts and enthusiasts are encouraged to join PMI's new Project Risk Management Community of Practice.

Posted by MICHAEL HATFIELD on: April 20, 2010 03:15 PM | Permalink

Comments (14)

Patrick Weaver
There is a difference between simple and simplistic. The approach to risk suggested in ‘PMBOK® Guide for the Trenches, Part 4: Risk’ is simplistic!

Firstly, the same risk – a future uncertainty – can have both an upside and a downside. Failing to manage the upside equates to guaranteeing failure. Future weather conditions are a risk; they could be good or bad. A major motorway near my home town was finished months early and under budget because they were lucky enough to build the project at the tail end of a 10-year drought. The last few months have had above-average rain. If the people building the road had only worried about the 'downside' risk, the road would only just be finishing now. A similar example is the management action taken to accelerate work on the Panama Canal through the GFC to take advantage of the upside risk of lower construction costs.

Second, the environment around projects does not stop changing just because someone has signed off a cost performance baseline. Ongoing risk assessments are critical to avoid surprises, good or bad! The more warning of changed circumstances the project team has, the more likely they are to manage the situation effectively.

Where we do agree is on the mumbo jumbo of statistical paralysis many so called risk management systems bog down in. The purpose of risk management is to identify opportunities and threats and then actually do something about them. Recording risks in a risk register and then qualitatively and quantitatively analysing them is a complete and total waste of time unless someone actually takes action.

The biggest weakness in the PMBOK® Guide is the total omission of a process for treating risks. The idea of risk treatment is implied but not overtly set out as a process, which allows people to think identification and analysis are the end game. Managers need to make decisions based on the risk assessment and then take action if risk management is going to deliver any benefits at all.


Boris Keylwerth
Thanks for kicking off this discussion.

I agree on your first item. In practice, we might identify opportunities during risk analysis, but usually we don't do anything with them during the project. For example, if the unlikely event "X" occurs, we can make additional profit; should it occur after all, we will most likely gain some nice profit. Should the event be likely, we would consider changing our priorities.

Still, I think a conscious process of thinking about what positive and negative events can affect a project is essential. Maybe we need an opportunity management process for "positive" risk. This can be as simple as adding opportunities to a portfolio. Your argument on how we assess opportunities is very valid; it does not make sense to apply common risk management processes.

I have a different view on your second point. I think it is quite important to regularly re-assess risk, primarily as a conscious exercise to understand whether there are new risks, or how existing risks have changed. This is especially important if you are working in an environment where you do not get a budget to address risk, or where you can take pro-active actions. You might argue that this could be part of schedule management; that is also an option. The regular and conscious process is what helps me keep control.

Dr David Hillson - The Risk Doctor
This post is so wrong that it's hard to know where to begin. Michael and I have debated this before and agreed to disagree. But his discussion is surprisingly weak.

The first problem is that Michael compares apples with orangutans. On one hand he has negative project risks ("threats"), and he compares those with a proposal register ("opportunities"). But they are different. He should be comparing project threats (an uncertainty that hinders achievement of a project objective) with project opportunities (an uncertainty that helps achievement of a project objective).

Second, his risk "techniques" are so simplistic that they wouldn't work even for project risks. No one evaluates a risk by multiplying probability and impact (or at least they shouldn't). Of course the outcome of a proposal decision tree is win or not, but this is not the case for a project decision tree. And he clearly has no idea how Monte Carlo works – where does the min estimate come from if not from the operation of opportunities on the base estimate?

It is easy to win an argument if you set up a non-credible position and then knock it down. Michael's post doesn't do justice to the real debate over including upside risk in an integrated risk process. I've been doing this for over 10 years and it really works! Thousands of project teams and organisations gain huge benefit from proactively managing opportunity through the risk process; that's why they do it. And the PMBOK® Guide is right to describe it that way.

Sorry Michael, your diatribe isn't persuasive at all.
from David Hillson (The Risk Doctor)

Glen B Alleman

I'm puzzled how you've come to these conclusions. Monte Carlo simulation has nothing to do with decision "trees," since the topology of a network of work activities (a schedule) is not a tree but a stochastic network. The probabilistic attributes of task durations and their associated costs are modeled in tools available "over the counter" – Risk+ and @Risk for Project are two; there are many others.

This approach to modeling the “credibility” of a schedule is mandated in DoD and NASA programs through DID 81650. It is widely used in other domains as well. To suggest it is statistical chicanery seems to imply you don’t understand the principles of schedule risk modeling.

Next you suggest that risk modeling has little value after the schedule is on baseline. This is completely wrong. Or as Wolfgang Pauli was fond of saying to his graduate students, "This is not right; it's not even wrong," or my new favorite, "That theory is worthless; it's not even wrong."

The period after the Performance Measurement Baseline is established is the most important time to model programmatic and technical risk. All schedules have probabilistic cost and duration behavior, driven by technical performance, productivity, and capacity for work. Past performance in these areas drives future performance of the project. Modeling the impacts of past performance on future performance is core to Earned Value Management and Monte Carlo simulation of programmatic risks.

Performing the risk analysis on a periodic basis – we do this weekly after the CAMs complete their weekly EV status – is an imperative for program success. Without this process performed weekly or at least monthly, the program’s future performance cannot be understood.

To not do this would mean the planning and execution processes are disconnected and the work done to identify, mitigate and retire the risk is “launch and leave.” This is a serious misunderstanding of how programs are managed. This is not institutional worrying, it is credible program management.

I’d suggest some homework is needed here to understand how EV and Monte Carlo simulation are connected on actual programs. Start with “Performing Statistical Analysis on Earned Value Data,” by Eric Druker and Dan Demangos of Booz Allen Hamilton and Richard Coleman of Northrop Grumman Information Systems. Then move to the literature on Bayesian management of project schedules that guides our efforts in aerospace and defense as well as large construction.

As to your conjectures around opportunity management, start with Edmund Conrow’s AT&L article (you’ll need to answer yes to the certification error to gain access to the .mil site).

I have the sense you’re speaking about these topics without the benefit of performing this role, or engaging with those who perform it on large, complex and mission-critical programs.

Glen B. Alleman
Program Planning and Controls
Aerospace and Defense

Glen B Alleman

Michael's conjectures are just that. Agreeing to disagree is not the proper term. The current US DoD, DOE, DHS, and NASA programmatic and technical risk management processes are clear and concise. If my memory serves me right, Michael works in a DOE context and should know from the 413 series how risk management is performed. Maybe not?

Michael Hatfield
Wow, great discussion!

When I began this “PMBOK Guide® for the Trenches” series, I knew that the post on risk would run contrary to the prevailing wisdom.

David: Great line about comparing apples to orangutans, but that’s my point! Using the same techniques or approaches in managing threats AND opportunities can only be discussed seriously if the commonly held definitions of the principal terms (risk, threat, opportunity, upside, etc.) are thrown out, and fuzzier, more trendy definitions take their place. Also, I’ve noticed that hard-core risk management types have a predilection to assert that anyone who disagrees with them does so from ignorance; but, without getting into a résumé comparison, that’s not the case here.

Both Mr. Keylwerth’s and Mr. Weaver’s points are well made, but I would ask that we take a step back and look at the whole risk-management-within-project-management paradigm. A PM is sitting at his desk, receiving bits of data and information, some of it accurate and actionable, some of it less so. He makes decisions to take advantage of opportunities and avoid threats. Is he engaged in risk management per se, or is he just managing in general? Is the acid test the use of probability and statistics, versus acting on a hunch or some other less-thoroughly-documented analysis?

I have worked with "project managers" who took such simplistic views of basic project management practices. "Risk management? We don't have time for that; we're six months behind schedule and working fourteen-hour days as it is." There are many treatments of risk management that are both more complete and lead to greater success than Michael's views. In addition to the commenters above, I suggest the works of De Meyer, Pich and Loch, not to mention the PMBOK® Guide and various (freely available) DoD publications.

L Collins
I must agree wholeheartedly with Dr. Hillson's response. Michael's assertions are so ludicrous I can only imagine they were provided to goad a red-herring discussion into the obscene. Shame on PMI for allowing this to be posted in the first place.

Marcus Wilkins
I find the comments in response to the original blog to be unfair and bordering on censorship. The stated purpose of the blog is to inspire discussion, not to proclaim one’s own beliefs to be beyond repudiation. One has only to read the PMI disclaimer to understand the goal.

Michael’s post has merit if only to challenge the status quo. Perhaps the pundits have never experienced the paralysis associated with risk management, but there are plenty of project managers who employ crude techniques in order to satisfy the requirement. The result is indeed unsatisfactory, largely owing to a lack of understanding of the tools and the goals.

Let’s not get caught up in the abstract, folks. Snarkiness is not next to godliness.

Glen B Alleman

Challenging the status quo has merit if the status quo is flawed in principle and practice. Challenging the status quo to elicit comments is less noble.

Michael has misunderstood the applicability of Monte Carlo – a VERY mature process for both probabilistic and technical risk management.

I'd strongly suggest that David and I are not speaking in the abstract, but as practitioners of probabilistic risk management on programs and projects daily. This topic along with Michael's approaches to Earned Value are outside the status quo of mature principles and practices.

When Michael uses words like "mumbo jumbo," it demonstrates either a lack of understanding and experience – which I doubt, given his EVP and other certifications – OR a provocateur's approach to a complex and difficult subject.

Julian Pearson
Well Michael certainly ruffled a few peacock feathers, which makes for a lively debate.

I have to disagree with Michael's indication that 'upside risk' does not belong and I challenge his Exhibit A as misdirection.

"RISK" is not whether an event will have a negative effect, but that an event will happen that could impact the deliverable to which it relates, in any way.

Contingent action planning need not be all about the reduction of risk, but also about the exploitation of opportunity that presents itself as a result – e.g., having suppliers in countries where a collapse in their economy could suddenly make sourcing materials significantly cheaper.

RISKS need continual assessment against 'current' events, so that the likelihood can be updated. It could be that events occur that fire contingency triggers, requiring a funded and built contingency to be in place by a certain date, where originally only plans were drawn up.

The scoring assessment should always be done the same way, to ensure continuity in expectation. If you have the time and understanding to drill into the plethora of different techniques – which might themselves impact the scoring of particular elements (impact, likelihood, contingent cost) – then so be it. But one would need the time to run those deeper assessments at the same frequency as the event monitoring if they are to be used at all; otherwise one is using old data for a 'current' score.

Time is always money, as we know... and part of the project/programme management remit is to apply common sense and pragmatism to available resources.

One "Risk" of over-complicating the analysis of risk, is that it generates such copious amounts of documentation, that the target audience fails to read it thoroughly, (if at all, other than the summary) and therefore fails to understand or challenge the reasoning behind the assessment, on which they may be required to make a strategic decision... making the very assessment detailed, pointless.

Michael Hatfield
"Challenging the status quo has merit if the status quo is flawed in principle and practice. Challenging the status quo to elicit comments is less noble."

I think Marcus Wilkins was spot-on, and here's why: after the assertion that I didn't properly understand the subject of risk management fell by the wayside, the next assertion presented was that, while I (perhaps) knew of what I wrote, I was deliberately making contrarian assertions in order to fill the role of "provocateur." This is incorrect. I write only that which I genuinely believe.

If Mr. Alleman truly believes the sentences I quote from him, then I would like to return the favor he extended in recommending reading material. "The Failure of Risk Management: Why It's Broken and How to Fix It" by Doug Hubbard effectively dismantles each of the underpinnings of risk management as it is currently practiced. Mr. Hubbard has also written several articles to this effect, and they are devastating.

But if Mr. Hubbard merely dismantles popular risk management tactics, then Nassim Taleb annihilates them in "The Black Swan." The brilliant and wildly successful Mr. Taleb argues powerfully that attempts to anticipate future events by trying to draw conclusions from historical trends or statistically-processed data are futile and, ultimately, misguided, and I very much agree with him. I do not believe these gentlemen are either (a) ignorant of the current risk management practices, or (b) challenging them in order to play the role of provocateur.

And I do hope I can influence more folks to agree with me.

Glen B. Alleman
Michael, thanks for the references. I see now your frame of reference, and will excuse myself from the conversation.

Glen B. Alleman

Regarding your conjecture about applying risk management after the baseline is set, I'd hope you'd apply your EVP and read Section 2.4 of EIA-748-B to see that continuous assessment of the PMB is needed. This includes applying DID 81650 continuously as well.

This is standard practice in every EVMS-SD I've ever seen, and likely the same in the SD you apply.



"You're talking to someone who really understands rock music."

- Tipper Gore