When the first scenes of the planet Coruscant appeared in the Star Wars movie The Phantom Menace (to you purists out there: yes, the clip of Coruscant that appears near the end of The Return of the Jedi was added later), seasoned science-fiction consumers probably had a flashback to Isaac Asimov’s Foundation Trilogy and its governing planet, Trantor. Like Coruscant, Trantor has no open spaces – every acre has either been paved over or has a building on it. However, after the galaxy-wide government of the Galactic Empire collapses and almost everyone has abandoned the place, a remnant tears down some of the infrastructure and begins farming the surface – something that Hari Seldon’s “psychohistory” algorithms failed to predict.
I’ve mentioned The Foundation Trilogy and psychohistory previously, primarily because of their similarity to the philosophical underpinnings of modern risk management. Set thousands of years in the future, The Foundation Trilogy takes place in a galaxy populated by trillions of humans, scattered across myriad star systems. Psychohistory is predicated on the idea that, while it is impossible to predict the movement of a single molecule of, say, oxygen, if you get billions and billions of O2 molecules into a container, they start to behave in highly predictable ways. The Hari Seldon character has devised a method of calculating the most likely outcome of the Galactic Empire’s collective future, since there are so many people whose behavior, while individually unforeseeable, has become somewhat predictable in the aggregate. While we never actually see the algorithms that do all this amazingly accurate predicting, one gets the sense that it’s a combination of statistics, Game Theory, and Behaviorism, with turbocharged derivatives of Bayes’ Theorem thrown in for good measure.
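The statistical intuition behind psychohistory is, at bottom, the law of large numbers. A toy sketch (not psychohistory, obviously, and with entirely made-up "coin flip" outcomes standing in for individual behavior) shows how the average of many unpredictable individual outcomes settles toward a predictable value as the population grows:

```python
# Toy illustration of the law of large numbers: any single "flip" is
# unpredictable, but the average of many flips converges toward 0.5.
# The coin-flip model and the seed are hypothetical choices for this demo.
import random

random.seed(42)  # fixed seed so the demo is reproducible

def mean_of_flips(n: int) -> float:
    """Average outcome of n fair coin flips (1 = heads, 0 = tails)."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    # The larger n gets, the closer the sample mean hugs 0.5
    print(f"n={n:>7}: mean = {mean_of_flips(n):.4f}")
```

The same logic underwrites real actuarial and risk work: the individual event stays opaque, while the aggregate becomes tractable.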
Meanwhile, Back In Project Management…
I recently attended a conference on predictive analytics. As with most conferences, it included keynote speeches, presentations, and workshops, with the standard exhibitors’ area set aside for vendors and their booths. One element of the conference that I did note, over and over, was a heavy emphasis on analyses based on Gaussian curves, all of which (well, all the ones I saw) presented ideas for better predicting the behavior of large populations by analyzing data from samples. For readers who are recent additions to Game Theory In Management Nation: I regularly assert that all valid Management Information Systems’ architectures can be reduced to three sequential steps:
- Data is gathered based on some discipline or planned sampling program.
- The data is processed into information, based on some methodology or model (for PM types, this is usually Earned Value or Critical Path methodologies).
- The information is transmitted to decision-makers in a way that they can readily understand and use to make decisions.
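For PM types, the all-important Step 2 usually means the standard Earned Value calculations. A minimal sketch of that processing step, using the textbook Earned Value formulas (the input figures here are hypothetical, not from any real project):

```python
# Step 2 of the MIS architecture, sketched for an Earned Value model:
# raw status data in, decision-support indicators out.

def earned_value_metrics(pv: float, ev: float, ac: float) -> dict:
    """Compute the standard Earned Value indicators.

    pv: Planned Value (budgeted cost of work scheduled)
    ev: Earned Value  (budgeted cost of work performed)
    ac: Actual Cost   (actual cost of work performed)
    """
    return {
        "cost_variance": ev - ac,       # CV  > 0 means under budget
        "schedule_variance": ev - pv,   # SV  > 0 means ahead of schedule
        "cpi": ev / ac,                 # Cost Performance Index
        "spi": ev / pv,                 # Schedule Performance Index
    }

# Hypothetical project status: $100k of work planned, $90k earned,
# $95k actually spent.
metrics = earned_value_metrics(pv=100_000, ev=90_000, ac=95_000)
print(metrics)  # CPI and SPI both below 1.0: over budget and behind schedule
```

Step 3 would then be presenting those indices to the decision-maker in digestible form – which is precisely the part the vendors I spoke with kept punting on.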
I noticed an interesting consistency among those presenting papers and the software vendors. Those presenting the papers largely addressed the issues of Step 1, collecting the data, with only an occasional mention of the problems inherent in the actual analyses. Similarly, the vendors seemed to be offering up solutions to data collection and standardization issues (“harmonizing” the data), but would usually punt when asked how they would propose actually processing the data into usable information, the all-important Step 2. One fellow told me that his software would simply use “whatever the customer’s preferred model” happened to be at the time. To be fair, Hari Seldon won’t introduce Psychohistory for another 10,000 years.
Until then, the whole business of reliably predicting the future is pretty much stuck in reverse. Analysts can observe what has happened in the past, overlay some sort of structure onto it, and then recommend a specific strategy if the present appears to be unfolding in an analogous fashion, but that’s about it. And even that series of steps can be accurately, if colloquially, chalked up to “experience.”
But perhaps the main intellectual fault of the whole predictive analytics/risk management codex has to do with the extrapolation of large populations’ behavior(s) from a relatively small sample size. As marginal as that practice is, current risk management seeks to turn it on its head: risk managers attempt to predict the cost or schedule behavior of a specific project based on patterns gleaned from many other (hopefully similar) projects, something that not even Hari Seldon proposed (will propose?) as plausible. By definition, projects tend to be one-time endeavors. Even identical buildings are constructed on different sites, after all. Paradoxically, the more standardized or routine the work to be done happens to be, the less the demand for professional Project Managers. Stamping out identical widgets from a production line doesn’t need much PM support, while facilities or software applications that do something genuinely unique usually won’t succeed without it.
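Basic statistics makes the same point. Under the toy assumption that historical projects' cost overruns are drawn from a normal distribution, piling up more historical projects shrinks the uncertainty about the *average* overrun, but the uncertainty about any one *new* project never drops below the population's own scatter:

```python
# Confidence interval for the population mean vs. prediction interval
# for a single new project. The sigma value is a hypothetical standard
# deviation of cost overrun (% of budget), chosen purely for illustration.
import math

sigma = 20.0  # hypothetical std. dev. of historical overruns, in % of budget

for n in (10, 100, 10_000):
    se_mean = sigma / math.sqrt(n)                    # shrinks as n grows
    se_single = math.sqrt(sigma**2 + sigma**2 / n)    # never drops below sigma
    print(f"n={n:>6}: ±{1.96 * se_mean:6.2f}% for the average project, "
          f"±{1.96 * se_single:6.2f}% for one specific new project")
```

With 10,000 historical projects, the average overrun is pinned down to a fraction of a percent, yet the forecast for the one project you are actually managing remains roughly ±40% – which is the author's complaint in a formula.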
Meanwhile, Back On Trantor (Warning! Spoilers Ahead)…
The fate of the First Galactic Empire turned out to be very different from its calculated version due to the emergence of a character named The Mule, who had a mutant ability to influence people with whom he interacted without actually speaking to them. In other words, the prediction of the population’s future unraveled due to the introduction of a single, unpredictable person. But predictive analytics/risk managers don’t seek to forecast the future of projects in general – they attempt to quantify what is going to happen to singular, specific projects.
My prediction? It won’t work 10,000 years from now, and it certainly won’t work in your next project.