Risk & Reward

I'm a risk enthusiast who likes to discuss techniques, tools and models—and use risk as a practical means to make better decisions in project environments. Join me on the ride!

The Curse of the Moving Mountain

When we undertake risk analyses, we are subject to our own curses and nightmares. I would like to highlight one of them: the steep, moving mountain! In general, our projects (at least the ones I’ve been working on) share a few characteristics:

  • They are challenging on cost and schedule (and that’s okay!)
  • They are planned backwards, starting with the end date (and that’s okay, too!)
  • They don’t fit the time available from start to finish (Not okay!)
  • In general, you only find out about the previous bullet when you are already working on the project (Not okay!)
  • Considering all the previous bullets, you have an unfeasible schedule (Sorry about that!)
  • Upper management is not willing to delay or add cost (Not okay, again!)

In light of everything I listed, what do you (usually) do? You compress the schedule! You start crashing and fast tracking like crazy. And if you do a schedule risk analysis, you’ll see that the probability of meeting the dates tends to be very low.

In addition, you are a victim (by your own doing!) of merge bias! This was detailed by Mr. Hulett in his book “Practical Schedule Risk Analysis” (Gower, 2009). It happens when many parallel paths merge into a single task of your schedule.

Suppose you have three tasks that take 5 days each in series and you “fast track” them into three tasks (of six days each) in parallel. Suppose uncertainty is a triangular distribution with the lower point at 70% of the base value and the upper point at 150% of the same value. The most likely value is the base value. When you simulate both cases, you end up with something like this:

This simulation was done using @RISK. We can see that the probability of finishing at or below the planned duration is over 25 percent for the original (series) arrangement, whereas for the “fast tracked” (parallel) one it is only a little over 5 percent. The parallel paths carry a larger chance of failure because the series (“waterfall”) arrangement can absorb a longer task with a shorter one further along the sequence, while the merge point in the parallel arrangement has to wait for the slowest path.
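For readers who want to play with the numbers, here is a minimal sketch of the same comparison in Python with NumPy, rather than the original @RISK model. The task counts, base durations and triangular bounds come straight from the example above; the random seed and iteration count are arbitrary.

```python
# Minimal Monte Carlo sketch of the series vs. parallel ("fast tracked") example.
# Numbers follow the text: triangular uncertainty with min = 70% and max = 150%
# of the base duration, most likely value at the base duration.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # iterations (arbitrary)

def sample_tasks(base, n_tasks):
    """Sample durations from Triangular(0.7*base, base, 1.5*base)."""
    return rng.triangular(0.7 * base, base, 1.5 * base, size=(N, n_tasks))

# Original plan: three 5-day tasks in series -> planned finish at 15 days.
series_finish = sample_tasks(5, 3).sum(axis=1)

# "Fast tracked": three 6-day tasks in parallel -> planned finish at 6 days,
# but the merge point has to wait for the slowest path (merge bias).
parallel_finish = sample_tasks(6, 3).max(axis=1)

print(f"P(series meets 15-day plan):  {(series_finish <= 15).mean():.1%}")   # roughly 25%
print(f"P(parallel meets 6-day plan): {(parallel_finish <= 6).mean():.1%}")  # roughly 5%
```

Running it should land close to the figures quoted above: roughly one chance in four for the series arrangement and about one in twenty for the parallel one.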

When we don’t include risk events in the simulation and we use small ranges for the variation of tasks, we end up with a very steep and very unlikely distribution. That’s when the unfeasible schedule takes its toll: reality inevitably arrives, the risks start occurring, milestones are missed, and our plan becomes untenable.

But never fear! Management has a solution for that as well! You shift the schedule and move the mountain a little to the right. The steep, narrow distribution remains, but it becomes less and less credible. Eventually the project will be completed, but what is risk management doing to bring value to the table? The answer is… NOTHING!

It would be much better to have a wide distribution that considers risk events and broader dispersions, one we could slice into different regions and analyze for the factors that drive each of them. See below the comparison between the “moving mountains” and the “big hill”.

 

Let us go for the big hill, then! Let us embed the events in our analyses. Let us shed some light and free ourselves from the curse of the moving mountain and the habits that make management look like zombies.
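For those who like to see the “big hill” take shape, below is a hedged sketch of what embedding a risk event might look like. It reuses the parallel toy model from earlier, and the event itself (a 30% chance of a 4-day delay on one path) is entirely made up for illustration; the point is only that discrete events widen the distribution and give us regions we can slice and interrogate.

```python
# Sketch: same three parallel 6-day tasks, plus one illustrative discrete risk
# event (30% probability, 4-day impact on path 1). Probability and impact are
# invented purely to show how events broaden the distribution into a "big hill".
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

durations = rng.triangular(0.7 * 6, 6, 1.5 * 6, size=(N, 3))
event_hits = rng.random(N) < 0.30              # does the risk event occur?
durations[:, 0] += np.where(event_hits, 4.0, 0.0)

finish = durations.max(axis=1)

# Slice the distribution into regions and ask what drives each of them.
regions = [("on plan (<= 6 days)", finish <= 6),
           ("late (6 to 10 days)", (finish > 6) & (finish <= 10)),
           ("very late (> 10 days)", finish > 10)]
for label, mask in regions:
    print(f"{label:22s}: {mask.mean():5.1%} of runs, "
          f"risk event present in {event_hits[mask].mean():5.1%} of them")
```

With a region-by-region view like this, the conversation shifts from “the date moved again” to “what, specifically, puts us in the very-late region”, which is where risk management starts adding value.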

PS: This post was inspired, of course, by Halloween but, ironically enough, came to life a bit too late! Thank you for reading! Looking forward to your feedback!

Posted on: November 05, 2018 12:37 PM | Permalink | Comments (6)

The GIGO Factor

Hello again! Today I am covering what I think is a top-five threat in a schedule risk analysis, or in any simulation or numerical exercise: the destructive power of GIGO. It can send the whole team on a wild goose chase, or calm things down when your foot is already halfway into the abyss.

For those of you who do not know or cannot remember, GIGO stands for Garbage In, Garbage Out. Computer models, especially those that rely heavily on assumptions and constraints to run, such as our simulation models, are prone to this factor. The term dates back to 1957, as far as we can trust Wikipedia, with a citation of a weapons specialist saying that “sloppily programmed” inputs inevitably lead to incorrect outputs. We can observe two main GIGO possibilities when we are simulating our schedules.

The first one relates to the model itself. That is, if you have a schedule that is faulty, incomplete and lacking the proper detail, no good can come of doing anything but... fixing it! This may be a structural problem and require a review of the WBS, and even that very dangerous but necessary question: “What is really the purpose of this project?” There are tools for detecting a bad schedule, but no straightforward tool for detecting a bad scope. I can easily detect tasks without successors or with hard date constraints, but I cannot, without a real understanding of the project, state that the scope is poorly detailed or that the breakdown does not really make sense. It can be a tricky thing.
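To make the “tools for detecting a bad schedule” point concrete, here is a minimal sketch of that kind of mechanical check in Python. The task structure and field names are hypothetical; real scheduling tools expose this information through their own interfaces, and none of this replaces actually understanding the scope.

```python
# Sketch of simple schedule hygiene checks: open ends (no successors) and
# hard date constraints. Task fields here are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import List

HARD_CONSTRAINTS = {"Must Start On", "Must Finish On",
                    "Start No Later Than", "Finish No Later Than"}

@dataclass
class Task:
    task_id: str
    successors: List[str] = field(default_factory=list)
    constraint: str = "As Soon As Possible"

def check_schedule(tasks):
    """Flag tasks with no successors and tasks carrying hard date constraints."""
    open_ends = [t.task_id for t in tasks if not t.successors]
    hard_dates = [t.task_id for t in tasks if t.constraint in HARD_CONSTRAINTS]
    return open_ends, hard_dates

# Toy example
tasks = [Task("A", successors=["B"]),
         Task("B", successors=["C"], constraint="Must Finish On"),
         Task("C")]  # an open end is fine only if it is the project finish
open_ends, hard_dates = check_schedule(tasks)
print("Open ends:", open_ends)           # ['C']
print("Hard constraints:", hard_dates)   # ['B']
```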

The second GIGO factor is the modeling of the simulation itself. Some people are mesmerized by the mere presence of a histogram or a tornado chart. They go: “Wow, so there is a 10 percent chance that we will meet the promised date. I’d better find another job”, or “No way in hell this is right, the modeling is all crooked”. This is why, in my opinion, we must not show any simulation results until the modeling is complete. Until we have discussed the distributions, their upper and lower values (or other ways to describe them) and the risk events, we must resist the instinct to show off those beautiful features of the Monte Carlo simulation. What can be worse than a black-box model that shoots numbers left and right? I will tell you: a guided simulation, that is, someone saying, “The modeling doesn’t matter, but the figures for P10, P50 and P90 should be this and this and that”. This is the worst GIGO of all: a confirmation of what is expected just because it is... well, expected!

Sometimes we model things incorrectly simply because we lack the training; that is an honest mistake. Even so, we must be careful using a tool this powerful, especially in a company with low risk maturity.

I am including a small GIGO-avoiding checklist; feel free to add more items!

  1. Check the schedule before you do anything else;
  2. Have all assumptions and constraints formalized;
  3. Have some quick documentation on the distributions you are using and why, who gave that input, etc.;
  4. Do the same for the events you are modeling;
  5. Make sure whoever models and/or runs the simulation has experience with the software and the technique;
  6. Make sure someone else does a check on the simulation, especially looking for errors and strange results;
  7. Look into the tornado chart and make sure the correlations, regressions or whatever index you use make sense (see the sketch after this list);
  8. If you are using the criticality index, evaluate how its values relate to the ones you observed in step 7;
  9. Prepare a “risk story” for this project: if you were to present it to someone, how would you go about it? Does it make sense for you?
  10. Double-check and validate with external sources, if possible, to avoid the unavoidable biases.
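As a small companion to steps 7 and 8, here is a hedged sketch of how tornado-style sensitivities and a criticality index could be computed from raw simulation samples, using Python with NumPy and SciPy rather than any particular commercial package. The three-parallel-path toy model from the previous post is reused purely for illustration.

```python
# Sketch: Spearman rank correlation of each path's duration against the project
# finish (a common tornado-chart metric), plus a criticality index (how often
# each path is the one that drives the finish date).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
N = 50_000
durations = rng.triangular(0.7 * 6, 6, 1.5 * 6, size=(N, 3))  # three parallel paths
finish = durations.max(axis=1)

for i in range(durations.shape[1]):
    rho, _ = spearmanr(durations[:, i], finish)
    criticality = (durations[:, i] == finish).mean()  # fraction of runs this path is critical
    print(f"Path {i + 1}: Spearman rho = {rho:+.2f}, criticality index = {criticality:.1%}")
```

If the correlations and the criticality indices tell conflicting stories, that in itself is a prompt to revisit the model before anyone sees a histogram.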

I hope those ten steps help reduce the GIGO issue. Do you have any more tips? Let me know! Thanks for reading!

Posted on: August 22, 2018 04:11 PM | Permalink | Comments (12)