I love reading Nassim Taleb’s books. I’m currently devouring Skin In The Game (Random House, 2018), where Taleb passes along this gem (among many): on the basalt slab that contains the Code of Hammurabi, it is written that if a builder builds a house for someone, and that house later collapses and kills its owner, then the builder shall be put to death. This brings a couple of things to mind:
- Although it’s next to impossible to imagine what life was like 3769 years ago, I think it’s reasonable to infer that the presence of this law on the basalt tablet that essentially contained the entire codex of civilization’s guidelines indicates that catastrophically poor house construction was a real problem. Seriously – consider the last time you perused a major university’s law library. Now think about the reduction of the entire collection to 282 laws, and the significance of this kill-the-poor-quality-house-builder law making that final cut.
- Whaddaya wanna bet house construction quality improved significantly after 1754 B.C.?
- Since we don’t go around killing the principals involved in failed residential construction projects, what has taken that practice’s place? That would be municipal building codes. But this implies that, in exchange for demonstrably attaining the exacting quality standards set forth in local law, builders can’t be (legally) killed for subsequent fatal failures. So, the next time you GTIM Nation members have a contractor make, say, an addition to your house, and the local code inspectors appear to rubber-stamp approvals without rigorous inspections, it doesn’t mean your construction is of high quality. It just means you have limited legal recourse if the new structure collapses.
Meanwhile, Back In The Project Management World…
According to zdnet.com[i], 68% of Information Technology (IT) projects fail. And before the non-IT PM members of GTIM Nation roll their eyes and think “Yeah, well, that’s why they invented Agile/Scrum,” consider: virtually all of the management information you use to make the decisions that allow your projects to succeed is the product of one of the residual 32% of IT projects. Critical Path and Earned Value Management Systems don’t simply fall off the management information system tree into their consumers’ outstretched hands. Even if the existing systems appear to be sustainable, it doesn’t necessarily mean that they were successful. There must have been quite a few Babylonians who moved into seemingly sound houses and were ultimately disappointed (or dead) in order for the kill-the-builder law to have entered the Code of Hammurabi, after all.
So, how do we know if our Project Management Information Systems (PMISs) are of reliable quality? There are a bunch of software programs out there that purport to conduct quality control checks of Critical Path networks or EV systems’ output, and in some cases they do a pretty good job. I can’t say “most cases,” because I am not convinced that in most cases they perform a relevant function. Some of the systems that I find particularly disappointing perform functions such as counting the number of start-to-start logical links among activities in a Critical Path network and expressing the result as a percentage. For some reason, the conventional wisdom has averred that a percentage of such logic ties above a certain (rather low) threshold should be considered evidence of poor quality. Why? Because of the narrative that a high percentage of start-to-start logic links carries with it an enhanced possibility of returning artificial schedule variances.
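To make concrete the kind of check being criticized here, the sketch below counts start-to-start links in a schedule network and flags the project when the percentage exceeds a threshold. The data structure and the 10% threshold are illustrative assumptions of mine, not drawn from any specific tool or standard.

```python
# Illustrative sketch of the SS-link-percentage QC check described above.
# A schedule network is represented here as a list of logic ties:
# (predecessor, successor, link_type), where link_type is one of
# 'FS', 'SS', 'FF', or 'SF'. This representation is an assumption.

def ss_link_percentage(links):
    """Return the percentage of logic ties that are start-to-start."""
    if not links:
        return 0.0
    ss_count = sum(1 for _pred, _succ, link_type in links if link_type == "SS")
    return 100.0 * ss_count / len(links)

# A toy four-tie network: one SS tie out of four.
links = [
    ("A", "B", "FS"),
    ("A", "C", "SS"),
    ("B", "D", "FS"),
    ("C", "D", "FF"),
]

pct = ss_link_percentage(links)
flagged = pct > 10.0  # the "rather low" threshold the column questions
```

Note that nothing in this computation touches whether the PMIS ever gave early warning of a slip or overrun; it only measures the shape of the network, which is exactly the objection raised here.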
Do I have to say it? This is just plain wrong, an attempt to slather on yet another layer of complexity in the name of PM maturity or quality. Clearly the primary quality test for any PM information system is whether or not the information system provided prior warning of a task at the reporting level coming in late or overrunning its budget. Now, had some researcher conducted an analysis of hundreds of projects that had actually come in late or overrun, and had traced a common pathology of their PMISs to artificially generated positive schedule variances masking real negative ones until it was too late to efficiently correct[ii], then I would have no objections. But if that’s what happened, this researcher has not shown his work. Here’s the kicker: if this speculated fear were true, it would mean that those projects with “excessive” start-to-start logic ties would tend to report artificial negative variances, as those activities that could start with others actually didn’t. It’s analogous to the ancient Babylonian house appearing to be on the verge of collapse without ever actually losing structural integrity, and the law changing to “Whosoever builds a house that the owner perceives is about to collapse, causing death, but never actually falls down or injures anyone, well, that builder should be put to death anyway.”
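The kicker above can be shown with the standard earned value arithmetic, SV = EV − PV. The dollar figures here are invented for illustration; the point is only the sign of the variance.

```python
# Illustrative only: how a start-to-start tie produces a NEGATIVE
# (not positive) schedule variance when the successor doesn't start.
# Task B has an SS link to Task A, so its baseline assumes it begins
# alongside A. If B hasn't actually started, its earned value is zero
# against a nonzero planned value.

pv_b = 5000.0        # planned value: budgeted cost of work scheduled for B to date
ev_b = 0.0           # earned value: B hasn't started, so no work performed
sv_b = ev_b - pv_b   # schedule variance comes out negative

behind_schedule = sv_b < 0  # the PMIS reports B as behind, not ahead
```

In other words, if the feared pathology were real, an SS-heavy network would tend to cry wolf in the pessimistic direction, which is the opposite of the masking-of-slips failure mode the threshold rule is supposed to guard against.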
Well, enough of this “put to death” business. In its place, though, can we have the start-to-start-logic-ties Ringwraiths replaced with proponents of a real QC standard, like “did the PMIS accurately predict the overrun or delay?”
[i] Retrieved from https://www.zdnet.com/article/study-68-percent-of-it-projects-fail/ , which itself cited an article entitled “The Impact of Business Requirements on the Success of Technology Projects”; however, that link no longer appeared to work as of 9:39 MDT, 27 May 2019.
[ii] Similar to the approach that the excellent David Christensen used when researching the stability of projects’ Cost Performance Index in his now-famous study.