Hello again! Today I am covering what I think is a top-five threat in a Schedule Risk Analysis, or in any simulation or numerical exercise: the destructive power of GIGO. It can send the whole team on a wild goose chase, or calm things down when your foot is halfway into the abyss.
For those of you who do not know, or cannot remember, GIGO stands for Garbage In, Garbage Out. Computer models, especially those that rely heavily on assumptions and constraints to run, such as our simulation models, are prone to this problem. The term dates back to 1957, as far as we can trust Wikipedia on that, with a citation of a weapons specialist saying that "'sloppily programmed' inputs inevitably lead to incorrect outputs". We can observe two main GIGO possibilities when we are simulating our schedules.
The first one relates to the model itself. That is, if you have a schedule that is faulty, incomplete, and lacking proper detail, no good can come of doing anything but... fixing it! This may be a structural problem and require a review of the WBS, and even that very dangerous but necessary question: "What is really the purpose of this Project?" There are tools for detecting a bad schedule, but no straightforward tool for detecting a bad scope. I can easily detect tasks without successors or with hard date constraints, but I cannot, without a real understanding of the Project, state that the scope is poorly detailed or that the breakdown does not really make sense. It can be a tricky thing.
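Those mechanical checks are easy to automate. Here is a minimal sketch of the idea, assuming a toy task representation I made up for illustration (a real tool would parse the scheduling software's own file format):

```python
# Flag two common schedule problems: open-ended tasks (no successors)
# and hard date constraints. The 'tasks' structure is hypothetical.

def find_schedule_issues(tasks):
    """Return (task_id, issue) pairs for tasks with no successors
    or with a hard date constraint set."""
    issues = []
    for t in tasks:
        if not t.get("successors"):
            issues.append((t["id"], "no successors (open end)"))
        if t.get("constraint"):
            issues.append((t["id"], f"hard constraint: {t['constraint']}"))
    return issues

tasks = [
    {"id": "A", "successors": ["B"], "constraint": None},
    {"id": "B", "successors": [], "constraint": "MUST_FINISH_ON"},
]
print(find_schedule_issues(tasks))
# Task B is flagged twice: open end and a hard constraint.
```

Note that this cannot tell you the scope is wrong; the project finish milestone, for instance, legitimately has no successors. It only surfaces things a human should look at.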
The second GIGO factor is the modeling of the simulation itself. Some people are mesmerized by the mere presence of a histogram or a tornado chart. They go: "Wow, so there is a 10 percent chance that we will meet the promised date. I'd better find another job", or "No way in hell this is right; the modeling is all crooked". This is why, in my opinion, we must not show any simulation results until the modeling is complete. Until we have discussed the distributions, their upper and lower values (or other ways to describe them) and the risk events, we must resist the instinct to show off those beautiful features of the Monte Carlo simulation. What can be worse than a black-box model that shoots numbers left and right? I will tell you: a guided simulation, that is, someone saying, "The modeling doesn't matter, but the figures for P10, P50 and P90 should be this and this and that". This is the worst GIGO ever: a confirmation of what is expected, just because it is... well, expected!
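To make the P10/P50/P90 vocabulary concrete, here is an illustrative Monte Carlo sketch using only the Python standard library. The three activities and their triangular (min, most likely, max) estimates are entirely made up; the point is only that the percentiles fall out of the sampled distribution, not out of anyone's wishes:

```python
# Toy Monte Carlo: three sequential activities, each with a triangular
# duration estimate (min, most likely, max). All numbers are invented.
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

activities = [(8, 10, 15), (4, 5, 9), (12, 15, 25)]

def simulate(n=10_000):
    """Sample the total duration of the chain n times."""
    totals = []
    for _ in range(n):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in activities))
    return totals

totals = simulate()
# statistics.quantiles with n=10 returns the 9 decile cut points;
# indices 0, 4 and 8 are the 10th, 50th and 90th percentiles.
p10, p50, p90 = (statistics.quantiles(totals, n=10)[i] for i in (0, 4, 8))
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

Note how every output traces back to the input estimates: change the distributions and the percentiles move. That is exactly why the inputs, not the pretty histogram, deserve the discussion time.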
Maybe we model things wrongly simply because we lack the training; that is an honest mistake. Even so, we must be careful when using a tool this powerful, especially in a company with low maturity regarding risk.
I am including a small GIGO-avoiding checklist; feel free to add more items!
- Check the schedule before you do anything else;
- Have all assumptions and constraints formalized;
- Have some quick documentation on the distributions you are using and why, who gave that input, etc.;
- Do the same for the events you are modeling;
- Make sure whoever models and/or runs the simulation has experience with the software and the technique;
- Make sure someone else checks the simulation, looking especially for errors and strange results;
- Look into the Tornado chart and make sure the correlations, regressions, or whatever index you use make sense;
- If you are using the criticality index, evaluate the connection between its values and what you observed in step 7;
- Prepare a “risk story” for this project: if you were to present it to someone, how would you go about it? Does it make sense for you?
- Double-check and validate with external sources, if possible, to avoid the unavoidable biases.
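For the tornado-chart check in the list above, one simple sanity test is to correlate each activity's sampled duration against the sampled total and confirm the biggest driver is the one you would expect. A rough sketch, with invented activity names and estimates (a commercial tool computes this for you; the point is that you can reproduce and question it):

```python
# Rank tornado drivers by correlating each activity's samples with
# the total. Activities and estimates are hypothetical.
import random

random.seed(1)

activities = {"Design": (8, 10, 15), "Build": (20, 25, 40), "Test": (4, 5, 9)}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

samples = {name: [] for name in activities}
totals = []
for _ in range(5000):
    total = 0.0
    for name, (lo, mode, hi) in activities.items():
        d = random.triangular(lo, hi, mode)
        samples[name].append(d)
        total += d
    totals.append(total)

for name in activities:
    print(f"{name:>6}: r = {pearson(samples[name], totals):+.2f}")
```

Here "Build" has by far the widest estimate range, so it should top the ranking; if your tool's tornado chart said otherwise, that would be exactly the kind of strange result worth chasing before presenting anything.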
I hope those ten steps help reduce the GIGO issue. Do you have any more tips? Let me know! Thanks for reading!