(mis)Understanding Contingency
Early this month I posted a poll here asking how you determine contingency and how much is generally considered. In my venues, I usually map the risks and uncertainties as well as possible and run a simulation to understand the variation around my goal, be it cost or schedule. I then determine a level of contingency for the project and add extra days or dollars to arrive at the chosen percentile level. Reading some references on risk analysis, people usually pick a percentile for safety; David Hulett, in his "Schedule Risk Analysis" (Gower, 2004), generally highlights P80. So I started wondering: how safe should your schedule be? How safe should your budget be? My answer is that it depends. It depends on whether this project is just another one in a set of 40 or 50, or whether it is the only project of your firm, or the first project you are undertaking. If you have a single project, you may be quite worried and establish a high level of contingency, so as not to be caught off guard.

I did some numerical experiments assuming a project with an estimated P50 cost of 100 thousand dollars and a standard deviation of 20 thousand dollars, normally distributed. If you want a safety level of 50%, you would always budget the 100 thousand dollars for each project. However, as you add more projects to your portfolio (assuming the results are statistically independent), the question becomes how much you should add to each project in order to secure a safety level for the whole portfolio. If you have a single project, you need to add an extra US$ 32,900 to reach a 95% safety level. That's an increase of almost 33% of your budget! On the other end, if you have 30 projects in your portfolio, adding US$ 6,000 to each project will get you to the same level of safety. A 6% increase for a good night's sleep! If you want to see the details, there is a graph and a table at the end of this entry with the data.
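The portfolio arithmetic behind those numbers fits in a few lines of Python, using the normal quantile directly instead of a full simulation. This is just a sketch of the example above (mean 100k, sigma 20k, independent projects, 95% safety), not the spreadsheet I actually used:

```python
from math import sqrt
from statistics import NormalDist

MEAN = 100_000   # P50 cost per project (US$)
SIGMA = 20_000   # standard deviation per project (US$)

def contingency_per_project(n_projects, safety=0.95):
    """Extra budget per project so the whole portfolio hits `safety`."""
    z = NormalDist().inv_cdf(safety)            # ≈ 1.645 for 95%
    portfolio_sigma = SIGMA * sqrt(n_projects)  # independent projects
    return z * portfolio_sigma / n_projects

print(round(contingency_per_project(1)))   # ≈ 32897
print(round(contingency_per_project(30)))  # ≈ 6006
```

The per-project contingency shrinks with the square root of the portfolio size, which is exactly why 30 projects only need about 6% each.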
We should consider portfolio size in our decisions regarding contingency. That is why some references tell you to aim for the P50 or the average value of the simulation. If you have plenty of projects, the overall value will converge to the average in most cases. Please let me know your opinion on this one. Do you run Monte Carlo to determine contingency or safety levels? Do you impose those values directly on each project? Do you have a line in your budget for the whole portfolio? Looking forward to hearing from you! A graphical depiction of the example.
A table with the values calculated for the example

Monty Hall for Projects: To Switch or Not to Switch?
I was wandering around the internet and I found yet another discussion about the Monty Hall problem. For those who are not following, a quick recap. There is a game show where you can choose between three doors. One has a prize, the others do not. After you make your choice, the host opens a door (without the prize, of course) and says that you have the opportunity to make a move, or maintain your position. Although most people think that changing does not improve your odds, a probabilistic proof shows that changing actually doubles your chances! I won’t post a link, but you can search for texts and videos online if you wish to do so. It is completely counterintuitive, but it is true! So I started considering how we would deal with a Monty Hall Problem in the project management environment. Let us suppose, as an exercise, that you have five equally adequate companies that could carry out the work you need to procure. You choose one, but your reasons are not really strong ones. All of them have the same credit risk, status in the industry and other indicators you might have looked at to make your decision. Now consider that, just before you take your decision to the executive board for a final recommendation, three of those companies have a financial setback, removing them from your potential list. You are now faced with two companies, the one you chose, and the other one. You have the power to switch decisions, and you still evaluate both companies the same way. Should you switch? Would you switch? You should switch, but probably you won’t. There are some strong assumptions behind the Monty Hall problem: the host knows where the prize is, he always opens a door without the prize, and he always offers you the chance to switch.
Put those things on hold now, and let us examine the question in the pristine light of the numbers. Let us boringly call the companies A, B, C, D and E. Suppose you chose company A. The odds that you made the right choice are 1 out of 5, that is, 20%. Consider now that financial distress took place, and companies B, C and D got swallowed whole by the ruthless economic situation. All you have left are companies A and E. Seems like you have a fifty-fifty shot, one might say. However, since there is a single best answer, and the three that were voided are among the ones you did not choose a priori, the chance that you win is still 20% if you do not change your decision. Switching multiplies your chances of winning this game by four (reaching a massive 80%).

Most people do not contemplate this switch, and keep their position. If you consider the mindset of the average person, it is completely understandable that no one would change their decision because of this turn of events. The psychological, it seems, gets in the way of the logical and mathematical reasoning. A takeaway for me, looking into this, is that you have to look at each situation individually and remember why you made that choice. If you are confident, go ahead with your decision. The numbers game is amusing, but it doesn’t apply, in my opinion, to the discrete and singular situations of a project. What about you? Ever been in this kind of situation? Did you switch or sustain your position? Let me know! Thank you!
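Under the Monty Hall assumptions (the "host" knowingly eliminates only losing options), the 20% versus 80% claim is easy to check with a quick simulation. A minimal sketch with five doors, purely illustrative:

```python
import random

def play(switch, doors=5, trials=100_000, seed=42):
    """Fraction of wins over many games, with or without switching."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(doors)
        pick = rng.randrange(doors)
        # the host eliminates doors-2 losing doors you did not pick,
        # always leaving exactly one other door closed
        if prize == pick:
            other = rng.choice([d for d in range(doors) if d != pick])
        else:
            other = prize  # the prize door survives the eliminations
        final = other if switch else pick
        wins += (final == prize)
    return wins / trials

print(play(switch=False))  # ≈ 0.20
print(play(switch=True))   # ≈ 0.80
```

Note that the simulation encodes the crucial assumption: the eliminations are never random. If the three companies had failed at random (as in the procurement story), the surviving pair really would be fifty-fifty, which is part of why the analogy breaks down for projects.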
10 Wishes for your Project Risk Management adventures in 2019!
Hello, and happy new year, everybody! It’s time for new beginnings and, of course, wishes! I made a list of what I wish for you in 2019, regarding project risk management:
I hope you had an amazing year in 2018 and I look forward to having an even richer experience in 2019! I’ve learned a lot, and that is one of the great things about being alive and sharing this sense of community with all of you. Thank you all so much! Have a wonderful 2019!
The Curse of the Moving Mountain
Categories:
Bias,
Modeling,
Monte Carlo,
project risk management,
Quantitative Risk Analysis,
Schedule
When we undertake risk analyses, we are subject to our curses and nightmares. I would like to highlight one of them: the moving steep mountain! In general, our projects (at least the ones I’ve been working on) have some characteristics:
In the light of everything I listed, what do you (usually) do? You compress the schedule! You start crashing and fast tracking like crazy. And if you do a schedule risk analysis, you’ll see that the probability of meeting the dates tends to be very low. In addition, you are a victim (by your own doing!) of the merge bias! This was detailed by Mr. Hulett in his book “Practical Schedule Risk Analysis” (Gower, 2009). It happens when you have a lot of parallel paths that meet in a given task of your schedule.

Suppose you have three tasks that take 5 days each in series and you “fast track” them into three tasks (of six days each) in parallel. Suppose uncertainty is a triangular distribution with the lower point at 70% of the base value and the upper point at 150% of the same value; the most likely value is the base value. When you simulate both cases, you end up with something like this: This simulation was done using @RISK. We can see that the probability of finishing at or below the planned duration is over 25 percent for the original (series) project, whereas the “fast tracked” one (parallel) has a little over 5 percent for the same situation. The parallel paths hold a larger chance of failure, while the series arrangement can accommodate a longer task with a shorter one in sequence.

When we don’t consider the risk events in the simulation and we use small ranges on the variation of tasks, we end up with a very unlikely and steep distribution. That’s when the unfeasible schedule takes its toll: when the inevitable reality happens, the risks start occurring, the milestones are missed, and our planning becomes impossible. But never fear! Management has a solution for that as well! You shift the schedule and move the mountain a little to the right. That small probability still remains, but it is less and less credible. Eventually the project will be completed, but what is risk management doing to bring value to the table? And the answer is… NOTHING!
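The series-versus-parallel comparison does not need @RISK; a plain Monte Carlo sketch reproduces it. This is only a minimal reconstruction of the example using the triangular ranges given above (70% / base / 150%), not the original model:

```python
import random

rng = random.Random(1)
TRIALS = 200_000

def dur(base):
    # triangular duration: low = 70% of base, mode = base, high = 150% of base
    return rng.triangular(0.7 * base, 1.5 * base, base)

# series: three 5-day tasks one after another, plan = 15 days
p_series = sum(dur(5) + dur(5) + dur(5) <= 15
               for _ in range(TRIALS)) / TRIALS

# "fast tracked": three 6-day tasks in parallel, plan = 6 days,
# so the finish date is the maximum of the three (merge bias)
p_parallel = sum(max(dur(6), dur(6), dur(6)) <= 6
                 for _ in range(TRIALS)) / TRIALS

print(p_series)    # roughly 0.25
print(p_parallel)  # roughly 0.05
```

The parallel case is punishing because all three paths must come in on time at the merge point: each one meets its 6-day plan only about 37.5% of the time, and 0.375 cubed is just over 5%.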
It would be much better to have a wide distribution considering events and broader dispersions, which we could slice into different regions and analyze for determinant factors. See below the comparison between the “moving mountains” and the “big hill”.
Let us go for the big hill, then! Let us embed the events in our analyses. Let us shed some light and free ourselves from the curse of the moving mountain and the habits that make management look like zombies. PS: This post was inspired, of course, by Halloween but, ironically enough, came to life a bit too late! Thank you for reading! Looking forward to your feedback!
A Template for Setting Up Your Qualitative Risk Analyses
Hey everybody! Just posting to let you know my first template upload is available on Projectmanagement.com. It is located here, and it is a tool intended to help us conceive a risk register and structure a probability scale, an impact scale, and an overall qualitative mapping. Begin by establishing your probability scale. Very straightforward, and you can have as few as 5 and as many as 7 levels of probability to be assigned. Moving on to impact, I added some room for using different scales, so you can have a schedule dimension with four levels (say very low, low, medium and high) and a cost dimension with three (low, medium and high), and you can work these things together in the spreadsheet. In a severity sheet, you can see the probability x impact severities resulting from the several dimensions of impact you used on the impact scale. As I mentioned in my article on the risk ruler, I am using an additive way of obtaining the scores: I multiply the probability of the event by the impact on each dimension, and add the results across dimensions. There is a sheet called register, with some columns to fill in the description and assess the probability and impacts, as well as plan for mitigation and its impact on the aforementioned components of the severity. There is also a possibility to plot the results in a graph. I hope you guys like it; your feedback is most welcome. And, again, thank you for reading and joining the discussion!
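The additive scoring described above (probability times each dimension's impact, summed over dimensions) can be sketched in a few lines. All the scale values and level names here are hypothetical placeholders, not the numbers in the actual template:

```python
# Hypothetical scales, just to illustrate the additive scoring.
PROB_SCALE = {"low": 0.1, "medium": 0.3, "high": 0.6}
IMPACT_SCALES = {
    "schedule": {"very low": 1, "low": 2, "medium": 3, "high": 4},
    "cost": {"low": 1, "medium": 2, "high": 3},
}

def severity(prob_level, impact_levels):
    """Sum of probability x impact over the chosen dimensions."""
    p = PROB_SCALE[prob_level]
    return sum(p * IMPACT_SCALES[dim][lvl]
               for dim, lvl in impact_levels.items())

score = severity("high", {"schedule": "medium", "cost": "high"})
print(round(score, 2))  # 0.6*3 + 0.6*3 -> 3.6
```

The nice property of summing per-dimension severities is that dimensions with different numbers of levels (four for schedule, three for cost) combine without any extra normalization step.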