Early this month I posted a poll here asking how you determine contingency and how much is generally considered.
In my line of work, I usually map the risks and uncertainties as well as possible and run a simulation to understand the variation around my goal, be it cost or schedule. I then determine a contingency level for the project and add extra days or dollars to reach that percentile.
Reading references on risk analysis, I see that authors usually recommend a reference level for safety. David Hulett, in his "Schedule Risk Analysis" (Gower, 2004), generally highlights P80. So I started wondering: how safe should your schedule be? How safe should your budget be?
My answer is: it depends. It depends on whether this project is one in a set of 40 or 50, or the only project of your firm, or the first project you are undertaking.
If you have a single project, you may be quite worried and establish a high level of contingency, so as not to be caught off guard. I did some numerical experiments assuming a project would have an estimated P50 cost of 100 thousand dollars with a standard deviation of 20 thousand dollars, normally distributed.
If you want a safety level of 50%, you would simply budget the 100 thousand dollars for each project. However, as you add more projects to your portfolio, the contingency needed per project shrinks, provided the results are statistically independent.
The question is how much you should add to each project in order to secure a safety level for the whole portfolio. If you have a single project, you need to add an extra US$ 32,900 to reach a 95% safety level. That is an increase of almost 33% of your budget! At the other end, if you have 30 projects in your portfolio, adding US$ 6,000 to each project will get you to the same level of safety. A 6% increase for a good night's sleep! If you want to see the details, there is a graph and a table at the end of this entry with the data.
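The numbers above can be reproduced with a few lines of code. This is a minimal sketch of the experiment, assuming each project's cost is Normal(mean = 100 thousand, sd = 20 thousand), the projects are statistically independent, and the target safety level for the portfolio total is 95%:

```python
# Portfolio contingency sketch: how much to add to EACH project so the
# WHOLE portfolio stays under budget with 95% probability.
from statistics import NormalDist

MEAN, SD = 100_000, 20_000
Z95 = NormalDist().inv_cdf(0.95)  # ~1.645 for a 95% safety level

def extra_per_project(n_projects: int) -> float:
    """The total of n independent Normal(MEAN, SD) costs is
    Normal(n*MEAN, SD*sqrt(n)), so the 95th percentile exceeds the
    expected total by Z95 * SD * sqrt(n); divide that buffer by n."""
    return Z95 * SD * n_projects ** 0.5 / n_projects

for n in (1, 5, 10, 30):
    print(f"{n:2d} projects: US$ {extra_per_project(n):,.0f} extra per project")
```

For one project this gives roughly US$ 32,900 (about 33% of the P50 budget), and for 30 projects roughly US$ 6,000 each (about 6%), because the per-project buffer shrinks with the square root of the portfolio size.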
We should consider portfolio size in our decisions regarding contingency. That is why some references tell you to aim for the P50 or the average value in the simulation. If you have plenty of projects, the overall value will converge to the average in most cases.
Please let me know your opinion on this one. Do you run Monte Carlo to determine contingency or safety levels? Do you impose those values directly into each project? Do you have a line in your budget for the whole portfolio?
Looking forward to hearing from you!
A graphical depiction of the example.
A table with the values calculated for the example.
Hello, and happy new year, everybody! It’s time for new beginnings and, of course, wishes! I made a list for 2019, with what I wish for you this year, regarding project risk management:
I hope you had an amazing year in 2018 and I look forward to having an even richer experience in 2019! I’ve learned a lot, and that is one of the great things about being alive and sharing this sense of community with all of you. Thank you all so much! Have a wonderful 2019!
When we undertake risk analyses, we are subject to our curses and nightmares. I would like to highlight one of them: the moving steep mountain! In general, our projects (at least the ones I’ve been working on) have some characteristics:
In light of everything I listed, what do you (usually) do? You compress the schedule! You start crashing and fast tracking like crazy. And if you do a schedule risk analysis, you’ll see that the probability of meeting the dates tends to be very low.
In addition, you are a victim (by your own doing!) of the merge bias! This was detailed by Mr. Hulett in his book “Practical Schedule Risk Analysis” (Gower, 2009). It happens when you have a lot of parallel paths that merge at a given task of your schedule.
Suppose you have three tasks that take 5 days each in series and you “fast track” them into three tasks (of six days each) in parallel. Suppose uncertainty is a triangular distribution with the lower point at 70% of the base value and the upper point at 150% of the same value. The most likely value is the base value. When you simulate both cases, you end up with something like this:
This simulation was done using @RISK. We can see that the probability of finishing at or under the planned duration is over 25 percent for the original (series) project, whereas the “fast tracked” one (parallel) has a little over 5 percent for the same situation. The parallel paths carry a larger chance of failure, because the merge point waits for the longest path, while the series arrangement can offset a longer task with a shorter one later in the sequence.
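If you don’t have @RISK at hand, the same comparison can be sketched with plain numpy. This reproduces the example under the stated assumptions: each task is triangular with minimum at 70% and maximum at 150% of its base duration, with the base value as the mode:

```python
# Merge bias sketch: three 5-day tasks in series (planned = 15 days)
# versus three 6-day tasks in parallel (planned = 6 days, since the
# merge point waits for the LONGEST of the three paths).
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def task(base, size):
    # Triangular(70% of base, base, 150% of base), as in the example.
    return rng.triangular(0.7 * base, base, 1.5 * base, size)

series = task(5, (N, 3)).sum(axis=1)     # durations add up in series
parallel = task(6, (N, 3)).max(axis=1)   # the longest path governs

# For one triangular task, P(duration <= mode) = 0.3/0.8 = 37.5%;
# for three parallel tasks, ALL must be on time: 0.375**3 ≈ 5.3%.
print(f"P(series on time):   {np.mean(series <= 15):.1%}")
print(f"P(parallel on time): {np.mean(parallel <= 6):.1%}")
```

The parallel probability can even be checked by hand (0.375 cubed), which is why merge bias punishes fast tracking so hard: every extra parallel path multiplies in another chance of being late.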
When we don’t consider the risk events in the simulation and we use narrow ranges for the variation of tasks, we end up with a very unlikely, steep distribution. That is when the unfeasible schedule takes its toll: when the inevitable reality happens, the risks start occurring, the milestones are missed, and our planning becomes impossible.
But never fear! Management has a solution for that as well! You shift the schedule and move the mountain a little to the right. That small probability still remains, but it becomes less and less credible. Eventually the project will be completed, but what is risk management doing to bring value to the table? And the answer is… NOTHING!
It would be much better to have a wide distribution considering events and broader dispersions, which we could slice into different regions and analyze for determinant factors. See below the comparison between the “moving mountains” and the “big hill”.
Let us go for the big hill, then! Let us embed the events in our analyses. Let us shed some light and free ourselves from the curse of the moving mountain and the habits that make management look like zombies.
PS: This post was inspired, of course, by Halloween but, ironically enough, came to life a bit too late! Thank you for reading! Looking forward to your feedback!
Recently I posted a poll right here in projectmanagement.com (here) concerning how you prepare a schedule to undertake a schedule risk analysis.
My idea was to understand how you out there see the question of getting a schedule ready for the simulation exercise. I gave you five possible answers:
My answer was number four, “I reduce tasks AND check the links.” 17% of the respondents (139 in total) were with me on this. Now let me explain why.
In my case, I usually start with the schedule we use to monitor progress. It tends to be quite detailed and to carry a lot of information that we use to control progress, like issuing reports, preparing for meetings, doing governance, etc. All these tasks go away. Some procurement packages, for instance, have forty steps and others have ten. I try to harmonize this, so the tasks have some similarity. This cuts down the work considerably.
And of course I check the links between tasks: broken logic is a simulation killer, one of the favorite GIGO drivers, and a strong sponsor of terrible decision making. With this done, I move on to stress analysis and other tests to see which tasks are worth modeling with a distribution. Going further, I start consulting experts and crunching data to determine exactly how I am going to model them. Last, but surely not least, I add events and their mitigation. You can check my series of articles on Qualitative and Quantitative Risk Analyses Integration, starting with this one.
Moving back to our poll, I always thought my answer would win by a landslide, but I understand all the other answers and I will develop a rationale for them, if you allow me:
Anyway, thank you for responding to my poll, thank you for reading, and please post comments whether you agree or not with what I said. See you all next time!
Hello again! Today I am covering what I think is a top-five threat in a Schedule Risk Analysis or any simulation / numerical exercise: the destructive power of GIGO. It can send the whole team on a wild goose chase, or calm things down when your foot is halfway into the abyss.
For those of you who do not know or cannot remember, GIGO stands for Garbage-In, Garbage-Out. Computer models, especially those that rely heavily on assumptions and constraints to run, such as our simulation models, are prone to suffer from it. The term dates back to 1957, as far as we can trust Wikipedia, with a citation of a weapons specialist saying that “‘sloppily programmed’ inputs inevitably lead to incorrect outputs”. We can observe two main GIGO possibilities when we are simulating our schedules.
The first one relates to the model itself. That is, if you have a schedule that is faulty, incomplete, or lacking the proper detail, no good can come of doing anything but... fixing it! This may be a structural problem and require a review of the WBS, and even that very dangerous but necessary question: “What is really the purpose of this Project?” There are tools for detecting a bad schedule, but no straightforward tool for detecting a bad scope. I can easily detect tasks without successors or with hard date constraints, but I cannot, without a real understanding of the Project, state that the scope is poorly detailed or that the breakdown does not really make sense. It can be a tricky thing.
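The mechanical checks really are the easy half. As a hypothetical illustration (the `Task` fields below are my own assumptions, not any particular scheduling tool's data model), flagging open ends and hard date constraints takes a handful of lines, while judging the scope still takes a human:

```python
# Sketch of mechanical schedule checks: open ends and hard constraints
# are easy to detect automatically; a bad scope breakdown is not.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    successors: list = field(default_factory=list)  # ids of following tasks
    constraint: str = "ASAP"  # e.g. "ASAP", "MUST_START_ON", "MUST_FINISH_ON"

def schedule_warnings(tasks, final_id):
    """Flag tasks without successors (other than the final task) and
    tasks with hard date constraints. Says nothing about scope quality."""
    warnings = []
    for t in tasks:
        if not t.successors and t.id != final_id:
            warnings.append(f"{t.id}: no successor (open end)")
        if t.constraint.startswith("MUST_"):
            warnings.append(f"{t.id}: hard date constraint ({t.constraint})")
    return warnings

tasks = [
    Task("A", successors=["C"]),
    Task("B"),                               # open end: breaks the logic
    Task("C", constraint="MUST_FINISH_ON"),  # hard constraint: fights the simulation
]
for w in schedule_warnings(tasks, final_id="C"):
    print(w)
```

A real tool would check far more (dangling predecessors, negative lags, excessive leads), but the point stands: none of these checks can tell you whether the WBS answers the purpose of the Project.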
The second GIGO factor is the modeling of the simulation itself. Some people are mesmerized by the mere presence of a histogram or a tornado chart. They go: “Wow, so there is a 10 percent chance that we will meet the promised date. I’d better find another job”, or “No way in hell this is right, the modeling is all crooked”. This is why, in my opinion, we must not show any simulation results until the modeling is complete. Until we have discussed the distributions (their upper and lower values, or other ways to describe them) and the risk events, we must resist the instinct to show those beautiful features of the Monte Carlo simulation. What can be worse than having a black box model that shoots numbers left and right? I will tell you: it is having a guided simulation, that is, someone saying, “The modeling doesn’t matter, but the figures for P10, P50 and P90 should be this and this and that”. This is the worst GIGO ever: a confirmation of what is expected just because it is... well, expected!
We may model things wrongly simply because we lack the training; that is an honest mistake. Still, we must be careful using a tool that powerful, especially in a company with low maturity regarding risk.
I am including a small GIGO-avoiding checklist; feel free to add more items!
I hope those ten steps help reduce the GIGO issue. Do you have any more tips? Let me know! Thanks for reading!