I was wandering around the internet and found yet another discussion about the Monty Hall problem. For those who are not familiar with it, a quick recap. There is a game show where you choose between three doors. One hides a prize, the others do not. After you make your choice, the host opens one of the other doors (without the prize, of course) and says that you have the opportunity to switch, or keep your original pick. Although most people think that switching does not improve your odds, there is a probability proof that switching actually doubles your chances! I won’t post a link, but you can search for texts and videos online if you wish to do so.
It is completely counterintuitive, but it is true! So I started considering how we would deal with a Monty Hall problem in the project management environment. Let us suppose, as an exercise, that you have five equally adequate companies that could carry out the work you need to procure. You choose one, but your reasons are not really strong ones. All of them have the same credit risk, standing in the industry, and other indicators you might have looked at to make your decision.
Now consider that, just before you take your decision to the executive board for a final recommendation, three of those companies suffer a financial setback, removing them from your potential list. You are now faced with two companies: the one you chose and the other one. You have the power to switch decisions, and you still evaluate both companies the same way. Should you switch? Would you switch?
You should switch, but you probably won’t. There are some strong assumptions behind the Monty Hall problem:
Put those considerations on hold for now, and let us examine the question in the pristine light of the numbers. Let us boringly call the companies A, B, C, D and E, and suppose you chose company A. The odds that you made the right choice are 1 out of 5, that is, 20%.
Consider now that financial distress took place, and companies B, C and D got swallowed whole by the ruthless economic situation. All you have left are companies A and E. Seems like a fifty-fifty shot, one might say. However, since there is a single best answer, and the three companies that vanished were ones you had not chosen a priori, the chance that you win without changing your decision is still 20%. Switching multiplies your chances of winning this game by four, reaching a massive 80%. Most people do not contemplate this, and keep their position.
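The numbers above are easy to check with a quick Monte Carlo sketch. This toy simulation mirrors the classic Monty Hall assumption that the eliminated options never include the winner (which, as noted, is a strong assumption for the business scenario); the company names and counts follow the example above.

```python
import random

random.seed(7)

def play(switch):
    """One round of the five-company game; returns True if you win."""
    companies = ["A", "B", "C", "D", "E"]
    winner = random.choice(companies)   # the one "right" company
    choice = random.choice(companies)   # your initial pick
    # Three companies drop out, and (as in Monty Hall) neither your
    # pick nor the winner is ever among them.
    removable = [c for c in companies if c != choice and c != winner]
    for c in random.sample(removable, 3):
        companies.remove(c)
    if switch:
        choice = next(c for c in companies if c != choice)
    return choice == winner

N = 100_000
stay = sum(play(switch=False) for _ in range(N)) / N
swap = sum(play(switch=True) for _ in range(N)) / N
print(f"stay: {stay:.1%}  switch: {swap:.1%}")  # roughly 20% vs 80%
```

Staying wins only when the initial 1-in-5 pick was right; switching wins in every other case, which is where the 80% comes from.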
If you consider the mindset of the average person, it is completely understandable that no one would change their decision because of this turn of events. The psychological, it seems, gets in the way of the logical and mathematical reasoning.
A takeaway for me, looking into this, is that you have to look at each situation individually and remember why you made that choice. If you are confident, go ahead with your decision. The numbers game is amusing, but it doesn’t apply, in my opinion, to the discrete and singular situations of a project.
What about you? Ever been in this kind of situation? Did you switch or sustain your position? Let me know! Thank you!
When we undertake risk analyses, we are subject to our curses and nightmares. I would like to highlight one of them: the moving steep mountain! In general, our projects (at least the ones I’ve been working on) have some characteristics:
In the light of everything I listed, what do you (usually) do? You compress the schedule! You start crashing and fast tracking like crazy. And if you run a schedule risk analysis, you’ll see that the probability of meeting the dates tends to be very low.
In addition, you become a victim (by your own doing!) of the merge bias! This was detailed by Mr. Hulett in his book “Practical Schedule Risk Analysis” (Gower, 2009). It happens when you have many parallel paths that meet in a given task of your schedule.
Suppose you have three tasks that take 5 days each in series and you “fast track” them into three tasks (of six days each) in parallel. Suppose uncertainty is a triangular distribution with the lower point at 70% of the base value and the upper point at 150% of the same value. The most likely value is the base value. When you simulate both cases, you end up with something like this:
This simulation was done using @RISK. We can see that the probability of finishing at or below the planned value is over 25 percent for the original (series) project, whereas the “fast tracked” (parallel) one has a little over 5 percent for the same situation. The parallel arrangement carries a larger chance of failure: in series, a longer-than-planned task can be compensated by a shorter one further down the sequence, while in parallel the finish date is driven by the slowest path.
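For readers without @RISK at hand, the same experiment can be reproduced with a few lines of Python, using the exact setup described above: triangular uncertainty from 70% to 150% of the base value, with the base value as the most likely point.

```python
import random

random.seed(42)

def triangular_task(base):
    # Triangular uncertainty: low = 70% of base, high = 150% of base,
    # mode = the base (planned) value itself.
    return random.triangular(0.7 * base, 1.5 * base, base)

N = 100_000

# Original plan: three 5-day tasks in series (planned total = 15 days).
series_hits = sum(
    1 for _ in range(N)
    if sum(triangular_task(5) for _ in range(3)) <= 15
)

# "Fast tracked" plan: three 6-day tasks in parallel (planned = 6 days);
# the merge point waits for the slowest of the three paths.
parallel_hits = sum(
    1 for _ in range(N)
    if max(triangular_task(6) for _ in range(3)) <= 6
)

print(f"series meets plan:   {series_hits / N:.1%}")   # roughly 25%
print(f"parallel meets plan: {parallel_hits / N:.1%}") # roughly 5%
```

The key difference is the `sum` versus the `max`: the merged parallel paths only succeed if every one of them comes in at or under its planned duration, which is why the merge bias punishes fast tracking so hard.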
When we don’t consider risk events in the simulation and we use narrow ranges for the variation of tasks, we end up with a very unrealistic, steep distribution. That’s when the unfeasible schedule takes its toll: when reality inevitably happens, the risks start occurring, the milestones are missed, and our plan becomes impossible.
But never fear! Management has a solution for that as well! You shift the schedule and move the mountain a little to the right. That small probability still remains, but it becomes less and less credible. Eventually the project will be completed, but what is risk management doing to bring value to the table? And the answer is… NOTHING!
It would be much better to have a wide distribution considering events and broader dispersions, which we could slice into different regions and analyze for determinant factors. See below the comparison between the “moving mountains” and the “big hill”.
Let us go for the big hill, then! Let us embed the events in our analyses. Let us shed some light and free ourselves from the curse of the moving mountain and the habits that make management look like zombies.
PS: This post was inspired, of course, by Halloween but, ironically enough, came to life a bit too late! Thank you for reading! Looking forward to your feedback!
Hello again! Today I am covering what I think is a top-five threat in a schedule risk analysis, or in any simulation or numerical exercise: the destructive power of GIGO. It can send the whole team on a wild goose chase, or calm things down when your foot is halfway into the abyss.
For those of you who do not know or cannot remember, GIGO stands for Garbage In, Garbage Out. Computer models, especially those that rely heavily on assumptions and constraints to run, such as our simulation models, are prone to suffer from that factor. The term dates back to 1957, as far as we can trust Wikipedia, with a citation of a weapons specialist saying that “‘sloppily programmed’ inputs inevitably lead to incorrect outputs”. We can observe two main GIGO possibilities when we are simulating our schedules.
The first one relates to the model itself. That is, if you have a schedule that is faulty, incomplete, and lacking proper detail, no good can come of doing anything but... fixing it! This may be a structural problem that requires a review of the WBS, and even that very dangerous but necessary question: “What is really the purpose of this project?” There are tools for detecting a bad schedule, but no straightforward tool for detecting a bad scope. I can easily detect tasks without successors or with hard date constraints, but I cannot, without a real understanding of the project, state that the scope is ill detailed or that the breakdown does not really make sense. It can be a tricky thing.
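The mechanical part of those checks is simple enough to sketch. This is a hypothetical, bare-bones schedule model (real tools like Primavera or MS Project expose far richer data, and the `Task` fields here are my own invention for illustration), flagging exactly the two red flags mentioned above: open ends and hard date constraints.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    # A minimal, made-up task record for illustration only.
    task_id: str
    successors: list = field(default_factory=list)
    hard_constraint: bool = False  # e.g. a "must finish on" date

def schedule_red_flags(tasks):
    """Return mechanical warnings; scope problems still need human judgment."""
    flags = []
    for t in tasks:
        if not t.successors and t.task_id != "FINISH":
            flags.append(f"{t.task_id}: no successor (open end)")
        if t.hard_constraint:
            flags.append(f"{t.task_id}: hard date constraint")
    return flags

tasks = [
    Task("A", successors=["B"]),
    Task("B", hard_constraint=True),  # dangling AND date-constrained
    Task("FINISH"),
]
for flag in schedule_red_flags(tasks):
    print(flag)
```

Checks like these catch the structural garbage; the scope-level garbage, as I said, has no such shortcut.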
The second GIGO factor is the modeling of the simulation itself. Some people are mesmerized by the mere presence of a histogram or a tornado chart. They go: “Wow, so there is a 10 percent chance that we will meet the promised date. I’d better find another job”, or “No way in hell this is right, the modeling is all crooked”. This is why, in my opinion, we must not show any simulation results until the modeling is complete. Until we have discussed the distributions, their upper and lower values (or other ways to describe them) and the risk events, we must hold the instinct to show those beautiful features of the Monte Carlo simulation. What can be worse than a black box model that shoots numbers left and right? I will tell you: a guided simulation, that is, someone saying, “The modeling doesn’t matter, but the figures for P10, P50 and P90 should be this and this and that”. This is the worst GIGO ever: a confirmation of what is expected just because it is... well, expected!
We may also model things wrongly simply because we lack the training; that is an honest mistake. However, we must be careful using a tool this powerful, especially in a company with low maturity regarding risk.
I am including a small GIGO-avoiding checklist; feel free to add more items!
I hope those ten steps help reduce the GIGO issue. Do you have any more tips? Let me know! Thanks for reading!
Is that expression common for you? Have you ever heard it put like that? I have not, and I guess you haven’t either. Risk is seen as a negative matter, as a downside, as a hurdle, as a problem to be dealt with. Let us look at this question a little closer in this post.
We are constantly warned of the risks in the world. It is something to be careful about. It is something to fear, to work around, to avoid.
Whenever I participate in a risk workshop, somebody says, “Please, don’t forget the opportunities!” In fact, the PMBoK states that “project risk management aims to exploit or enhance positive risks (opportunities) while avoiding or mitigating negative risks (threats)”. A critical success factor for the Identify Risks process is the “explicit identification of opportunities”, and the Practice Standard for Risk Management (page 38) also states that “the identify risks process should ensure opportunities are properly considered”. Nobody needs to tell us to “explicitly identify threats”. It is engraved deep into our minds...
Therefore, we need to take a chance, go forward and, well, live a little! I go back to portfolio optimization concepts, where we establish the efficient frontier for a set of investments, combining risk and return. Of course we should take risks with caution, and we should assume risks only if they provide further return for us. Taking on risk is what brings return, as I said before.
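To make the risk-return trade-off concrete, here is a tiny two-asset sketch of the idea behind the efficient frontier. All figures (returns, volatilities, correlation) are made up for illustration; this is classic mean-variance arithmetic, not a recommendation.

```python
import math

mu = (0.06, 0.12)     # illustrative expected returns of assets A and B
sigma = (0.10, 0.25)  # illustrative volatilities
rho = 0.2             # illustrative correlation between the two

def portfolio(w):
    """(risk, return) for weight w in asset A and 1 - w in asset B."""
    ret = w * mu[0] + (1 - w) * mu[1]
    var = ((w * sigma[0]) ** 2
           + ((1 - w) * sigma[1]) ** 2
           + 2 * w * (1 - w) * rho * sigma[0] * sigma[1])
    return math.sqrt(var), ret

# Sweep the weight to trace the frontier: more risk buys more return.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    risk, ret = portfolio(w)
    print(f"w={w:.2f}  risk={risk:.3f}  return={ret:.3f}")
```

The sweep shows the point of the whole section: the only way to reach the higher returns on the frontier is to admit more risk, deliberately and with eyes open.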
So why can't we see the bright side of life when we are identifying risks? I got one wild guess here.
Could it be that we embellish our business case so much when trying to get the green light from the board that we start out planning a semi-impossible project, having already absorbed the opportunities into the base case? Or are we under so much pressure that we do not allow for things to be even slightly better? It is something to be considered.
Whenever possible, we try to monetize things and reduce the decision to a single indicator, such as the Net Present Value of the cash flows or the Internal Rate of Return, but we should always consider the other dimensions: safety, social implications, governance, health, environment, and so on.
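As a reminder of what that collapsing into one number looks like, here is the NPV calculation in a few lines. The cash flows are purely hypothetical; the point is how much information (including all those other dimensions) gets squeezed out on the way to a single figure.

```python
def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the initial outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: 1000 out today, four years of inflows.
flows = [-1000, 300, 400, 500, 200]
print(f"NPV at a 10% discount rate: {npv(0.10, flows):.2f}")
```

The IRR is just the discount rate that drives this same sum to zero; either way, the single indicator hides the safety, social, and environmental dimensions entirely.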
What I take from my quantitative risk analyses over the last decade or so is that when we consider risk events, uncertainties, imprecisions and the like, we end up so far from the baseline and the agreed-upon plan that it is almost a lost battle before it begins! Distributions are always skewed to the downside, reflecting the tendency of things to cost more, take more time, use more resources, etc.
Preparing a more feasible business case and having it thoroughly analyzed by a third party with no links to the project area seems like a good idea, but is it actually done by most companies, or people? Or are we just “hoping for the best but expecting the worst”, as Alphaville would say?
No matter what, we should always try to have a realistic point of view of our project, and adjust our plan to match the risky side of life. There is nothing wrong with having a challenging date, but not an impossible one. And make some room for opportunities, for Pete’s sake!
What do you think? Join the discussion! Leave your comment below and I’ll reply. Thank you for reading!