Much as holiday get-togethers with family and friends bring people with disparate ideas into close proximity, so, too, does the creation of a new project team bring together people with different views on how that project’s work should be managed. Since PMI® came into existence in 1969, project management has become a rather big deal (there are over 500,000 PMP®-certified people around the world). As in any field of endeavor that attracts a lot of adherents, many enthusiastic practitioners have embraced some PM-related ideas that are, shall we say, of marginal usefulness in actual management science, much as Keynesians, while useful to politicians, really have little to add to the realm of macroeconomics.
Take Work Breakdown Structures, or WBSs. Even those with a passing knowledge of PM basics know that developing a WBS is a critical early step in setting up the information systems needed to manage the project, and therein lies a problem: to admit to not knowing how a WBS works, or how one is properly set up, is essentially to reveal to the team that you don’t know the first thing about project management. So, when the genuine practitioner is assigned to a project team that has an obviously flawed WBS in place, how should that very difficult conversation take place?
Well, don’t do what I did. I told the head of the project’s Project Controls Team that his WBS was invalid and would need to be reworked before any workable Critical Path or Earned Value system could be put in place, much less a usable Basis of Estimate. He listened politely, thanked me for my input, and then had me taken off the project team. In my defense, I was in my 20s and didn’t suffer fools gladly. From a management point of view, the project would end up being a fiasco: cost overruns and delays were rampant, and the team showed all the symptoms of a management style immersed in reactive mode, flailing at every unexpected event. But I did try to have the conversation.
Then there are those discussions within organizations whose business model was set up by, and is dominated by, its accountants. Such organizations are readily identifiable by a reluctance to set up the project’s accounting system based on the aforementioned WBS. Accountants, being the lead soldiers of the asset managers’ army, are more likely to insist on setting up the chart of accounts to track charges by organization, or OBS. Of course, the inability to collect actual costs at the reporting level of the WBS essentially disables an Earned Value Management System, the very core of the PM’s ability to manage cost performance. A military officer encountering this very problem once e-mailed me, asking for advice on how to overcome it. When I responded that there was no way he could set up a valid EVMS without actual costs collected by WBS element, he actually became angry with me, as if I had some kind of obligation to remedy his project team’s accounting problem.
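To make the point concrete, here is a minimal sketch (in Python, with made-up figures and WBS codes of my own invention, not from any real project) of why earned value metrics depend on actual costs being collected by WBS element:

```python
# Cost Performance Index (CPI) = Earned Value (BCWP) / Actual Cost (ACWP).
# CPI can only be computed per WBS element if actual costs are keyed to the
# same WBS codes as the earned value. All figures below are illustrative.

earned_value = {"1.1": 100_000, "1.2": 50_000}   # BCWP by WBS element
actual_cost  = {"1.1": 125_000, "1.2": 40_000}   # ACWP by WBS element

def cost_performance_index(wbs_code: str) -> float:
    """CPI < 1.0 means the element is over budget; CPI > 1.0, under budget."""
    return earned_value[wbs_code] / actual_cost[wbs_code]

# If actuals were collected only by organization (OBS), there would be no
# actual_cost entry per WBS element, and CPI could not be computed at all.
```

Element 1.1 here comes out at a CPI of 0.8 (over budget), element 1.2 at 1.25 (under budget); with actuals charged only to organizational codes, neither figure exists.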
Then there are those who believe that all the tasks on the critical path of a CPM network are innately important, apart from schedule logic. Activities on a schedule network’s critical path are important because, if any of them is delayed, the whole project is likely impacted. However, I can’t count the number of times I’ve heard a Cost Account Manager say something along the lines of “this activity is too important to NOT be on the critical path!” That means someone is going to have to have the difficult discussion about how activities land on the critical path due to schedule logic, not priority or visibility, and that asserting the contrary reveals a profound lack of basic PM understanding.
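A toy example (hypothetical activities and durations, sketched in Python) makes the point that path membership is a function of durations and dependency logic alone:

```python
from functools import lru_cache

# Hypothetical network: "safety" may be the most important activity on the
# project, but it is short, so it carries float and is NOT on the critical path.
activities = {
    "design":    {"duration": 10, "predecessors": []},
    "build":     {"duration": 20, "predecessors": ["design"]},
    "safety":    {"duration": 5,  "predecessors": ["design"]},
    "integrate": {"duration": 7,  "predecessors": ["build", "safety"]},
}

@lru_cache(maxsize=None)
def early_finish(name: str) -> int:
    """Forward pass: an activity finishes its duration after all predecessors."""
    act = activities[name]
    start = max((early_finish(p) for p in act["predecessors"]), default=0)
    return start + act["duration"]

project_duration = early_finish("integrate")  # 37 days
# "safety" finishes on day 15, but "integrate" can't start until "build"
# finishes on day 30, so "safety" carries 15 days of float.
safety_float = early_finish("build") - early_finish("safety")
```

The critical path is design, then build, then integrate, purely because that chain is the longest; no amount of priority or visibility assigned to “safety” changes its 15 days of float.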
Since everyone at a PMI® Congress is expected to act politely toward one another, perhaps these topics should be avoided in casual settings. However, if you are on a project team and one (or more) of these PM pathologies makes an appearance, it might be time for a difficult conversation.
In my seemingly endless attempts at stamping out poor management science scholarship, I have, on many occasions, encountered ideas that leave me with a case of epistemological heartburn. Many of these ideas have to do with commonly-accepted management axioms, both valid and not. For example, the adage “if it can’t be measured, it can’t be managed,” variously attributed to Deming or Drucker, came under fire in an article I read recently on a prominent business website. In this article by Liz Ryan, entitled ‘If You Can’t Measure It, You Can’t Manage It’: Not True (sic), dated February 10 of last year, one of the pull quotes is:
A typical ridiculous, unquestioned business adage is “If you can’t measure it, you can’t manage it.” That’s BS on the face of it, because the vast majority of important things we manage at work aren’t measurable, from the quality of our new hires to the confidence we instill in a fledgling manager.[i]
The rest of the article is similar in tone and content, meaning that it’s something of a Banjo Minnow[ii] to me. Besides the suspect management science assertions, one would think that someone in the publishing chain of command, from the author through the editorial staff, would have known that single quote marks are only used when quoting someone within an existing quotation, not to mention the inelegance of using the particular acronym she chose.
Hyperbolic writing styles and punctuation difficulties aside, let’s take a look at the central assertion, shall we? As I have oft asserted in this blog, my version of the Pareto Rule as it applies to Management Information Systems is that the worst 20% of managers who have access to 80% of the information they need to inform a given decision will consistently out-perform the 80th-percentile best managers who have access to only 20% of the necessary information. Okay, so how does one obtain this information? It has to have three characteristics:
The second bullet pertains to the measurement and management axiom. For “the vast majority” of management decisions, some kind of accurate measurement is needed to avoid a bad, or even cataclysmic, call. A short list includes:
I could go on (and often do), but you see my point. To categorically dismiss entire information streams as male bovine droppings based on whether or not they can be accurately quantified is to plunge into management science alchemy with complete abandon. By attempting to conflate the quantifiable with the irrelevant, the article ends up drawing several invalid conclusions, and attempts to buttress them with bombast (and several amateurish cartoon drawings).
And therein lies my heartburn. Management science, as a field of scholarship, is already considered suspect by adherents of the hard sciences. To further a hypothesis with little more than primitive cartoons and hyperventilated prose just feeds into the notion that MIS theory is so much bloviating, made all the worse by a prominent brand name having published it. If high-profile management publications are going to allow their authors to traffic in shoddy business theories, we should probably expect multiple series on risk management soon.
[i] Ryan, Liz, “‘If You Can’t Measure It, You Can’t Manage It’: Not True,” Forbes, February 10, 2014, retrieved from http://www.forbes.com/sites/lizryan/2014/02/10/if-you-cant-measure-it-you-cant-manage-it-is-bs/, November 21, 2015, 19:29 MST.
[ii] A “banjo minnow” is a fishing lure that was advertised as being irresistible to fish. In a demonstration, a banjo minnow was dangled in front of a bass in an aquarium, with a voice-over assuring viewers that the bass was not hungry. Nevertheless, the fish snapped up the lure.
I once worked with a large program office that had several far-flung subordinate project offices. To keep up with the satellite offices’ schedule performance, the home office had a software package that enabled the sites to send in updates pertaining to their attainment of key milestones. The package wasn’t based on Critical Path Methodology – instead, this “status” was in the form of assigning a category to each milestone: (1) completed, (2) expected on-time or early, (3) anticipated delay, or (4) anticipated major delay or out-and-out miss. Of course, this setup wasn’t a schedule performance measurement system at all, despite the way it was advertised. As my regular readers will readily recognize, this system’s design was essentially that of a poll: the performing organizations weren’t sending in performance data. They were, rather, transmitting their opinions on how they would perform, which is a very different animal.
An entirely predictable pattern emerged: at the fiscal year switchover, a new set of milestones (with their due dates) would be negotiated, and all of the milestones in the report would show an anticipated on-time delivery (2). Then, as these milestones’ due dates drew within a few months, their status would tend to change to anticipated delay (3), and, only within one or two months of the due date, the activities that were genuinely in trouble would reveal an anticipated major delay, or a complete miss (4). A legitimate schedule performance measurement system could have alerted the home office to the problems in time for it to help deal with them; instead, the satellite offices were in a position to obscure any issue that might make them look bad in comparison to the other sites, and the home office remained in the dark until its intervention couldn’t help anything. But, hey, the software package itself performed as expected!
It was pretty obvious to me what the problem with this system was, and so I, along with two associates from my organization, visited the site that had developed the software in the first place. There’s actually an old scheduler’s trick for assessing performance in environments where routinely collecting the data needed to keep a critical path method package going isn’t an option. All you need to do is divide an activity’s cumulative duration by its estimated percent complete, and you have a fairly accurate estimate of its total duration. Compare this figure to the activity’s original duration, and you know within about ten points whether the activity will finish early, on-time, or (gasp!) late, and by how much. Since this system already had the activities’ start dates, all it needed was the percent complete estimate (instead of the milestone categorization it used), and it would instantly have become an effective (if basic) schedule performance analysis tool. The programmers we met with thanked us for the insight, but flatly turned down our entreaties to update their software. The reason? “Because before this package was introduced, there was nothing, and, even if it is flawed, this is better than nothing.”
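For the curious, the trick described above can be sketched in a few lines of Python (the function names and figures are mine, not from the package in question):

```python
# The old scheduler's trick: divide an activity's cumulative (elapsed)
# duration by its estimated percent complete to project its total duration,
# then compare that projection to the originally planned duration.

def projected_total_duration(elapsed_days: float, percent_complete: float) -> float:
    """Project total duration from elapsed time and fraction complete."""
    if not 0 < percent_complete <= 1:
        raise ValueError("percent_complete must be a fraction between 0 and 1")
    return elapsed_days / percent_complete

def projected_slip_days(elapsed_days: float, percent_complete: float,
                        original_duration_days: float) -> float:
    """Positive means a projected late finish; negative, an early one."""
    projection = projected_total_duration(elapsed_days, percent_complete)
    return projection - original_duration_days

# An activity planned at 60 days, now 30 days in and estimated 40% complete,
# projects to 75 days total, i.e. roughly 15 days late.
```

That is the entire upgrade the package needed: one percent-complete field per activity in place of the four-category opinion poll.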
So, what we had was a piece of software that had been adequately tested and installed, but was still failing at its purported function: monitoring schedule performance across multiple project sites within a program. Why was it failing, despite passing its tests? Because it was predicated on a flawed business model. Polls are not performance measurement systems, period, even though they often masquerade as them. However, once in place, the system would prove very difficult (if not impossible) to replace with anything superior, based on, essentially, the flawed sunk-cost argument.
So, before you test the software – indeed, before you code the software – do yourself and your customers a favor, and test the validity of the underlying business model. How can you do this? Well, first you…
Oops! Look at that, I’m out of space. If you can’t wait, check out the book that this blog is named after, available at http://www.ashgate.com/isbn/9781409442424.
Pretty simple question, right? What color is an orange? Since it’s such a simple question, and since I’m the one asking, my regular readers are perfectly justified in suspecting a trap. Can I, in 700 words, convince you of an answer different from the obvious one? We’ll see.
Due to minute variations in the way humans are made up chemically, biologists have discovered that virtually everybody experiences sensory inputs slightly differently. Salt probably tastes pretty much the same for all of us, but it’s a rare palate that can discern a 1926 Dom Perignon from a 1959 vintage. And even some experts in music can have a difficult time telling the difference between a Primavera (the violin maker) and a Stradivarius in the hands of a sufficiently talented player.
This phenomenon extends to our perception of color as well. Technically, what we perceive as “color” is actually the different wavelengths of radiation in what humans know as the visual spectrum (as opposed to whose perception? Stay with me.), situated between near-infrared (on the low side) and near-ultraviolet. Humans have three types of color-receptive cones: green, blue, and red, the last of which enables us to see all the colors derived from red, such as violet and, yes, orange. By contrast, butterflies have five types of receptor cones, which means that they see at least two more colors than we humans even have names for. Mantis shrimp have 16 different types of cones.(1)
Meanwhile, back in the Project Management world, roving bands of PM-themed writers, consultants, and bloggers prowl about the land, seeking to uncover project management practices that don’t meet their ideas of sufficiency. Where do these ideas of sufficiency come from? I would argue that, with few exceptions, they come predominantly from one source: experience. For those readers who would object by saying that education also comes into play, I would argue that “education” is rarely more than others’ experience, communicated to and adapted by the writer/consultant/blogger. In the management sciences, theories that would otherwise overturn commonly shared experiences are almost never provable in an experimental setting. When we talk about project management best practices, it’s virtually always based on experience – our own, or others’ (whom we know about).
Okay, so if it’s a common experience that, just as the orange fruit is orange in color, any major project would be doing Project Management wrong if there were, say, no recurring “bottoms-up” estimate being performed, why is it problematic to point that out? Because it’s subjective, that’s why.
If the question of whether or not an object is orange is mission-critical, then the appropriate response would be something like “Its wavelength is between 590 and 635 nanometers, which most people perceive as the color orange.” Similarly, if a writer, consultant, or auditor wants to level severe criticism against a project team for not executing the occasional “bottoms-up” estimate, the natural response should be “Why? Why is re-re-estimating the remaining work considered a valid analysis technique? Which projects have had success doing it that way, as opposed to the normal, calculated version?” But, since there is no valid research establishing that performing a “bottoms-up” estimate yields vital performance information that often changes the project team’s technical approach for the better, the one making the criticism remains mired in subjectivity. At that point, the argument turns on the differences in the participants’ experience. But, as we established earlier, our experiences are almost certainly subjective and unique: even those from virtually identical backgrounds can and do have significantly different takes on their shared experiences, up to and including causality.
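The objective version of that response reduces to a trivially checkable rule. This Python sketch uses the commonly cited wavelength band for orange light; the names and exact boundaries are illustrative:

```python
# Orange light is commonly given as roughly 590-635 nm. Whether a measured
# wavelength falls in that band is an objective question; whether a given
# viewer *perceives* it as orange is not.

ORANGE_BAND_NM = (590.0, 635.0)

def measures_as_orange(wavelength_nm: float) -> bool:
    """Return True if the measured wavelength falls within the orange band."""
    low, high = ORANGE_BAND_NM
    return low <= wavelength_nm <= high

# 600 nm measures as orange; 450 nm (blue) does not.
```

The criticism of the project team has no equivalent spectrometer, which is precisely the problem.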
So, what color is an orange? Well, I’ll concede it’s orange – if and only if disagreement doesn’t land me or my project team in the non-compliance penalty box. Otherwise, I’m going to have to insist on a spectrometer analysis…
(1)The Oatmeal, “Why the Mantis Shrimp is my New Favorite Animal,” retrieved from http://theoatmeal.com/comics/mantis_shrimp on November 5, 2015, 18:06 MST.
With the upcoming release of the movie Spectre, I thought I would, once again, employ my own Bond, James Bond-like skills, and infiltrate PMI®’s top-secret archive vault to review some of their most closely-guarded documents. The safe is behind the velvet Elvis painting in the President’s office, and the combination is still the factory setting. What I found inside was both amazing, and highly appropriate for dissemination among my readers.
In a red folder marked “Top Secret: The Evolution of Change Control,” I found some time-yellowed papers that appeared to be antique Baseline Change Proposals, or BCPs. Holding my little flashlight in my mouth, I snapped photos of them before putting them back. Here are some of the jaw-dropping highlights.
Contractor: H. Potter & Associates
So, I am now going to play the part of Wikileaks, and release these Baseline Change Proposal documents I photographed out of PMI®’s vaults as they become relevant. But I’d rather not move to Russia…