In 1962, Thomas Kuhn wrote The Structure of Scientific Revolutions, a truly insightful book on the philosophy of science, and the source of the term “paradigm shift.” Kuhn theorized that, while scientific advances appear to follow a steady upward trajectory, the truth is that science advances in fits and starts, and tends to follow a certain pattern that reflects somewhat poorly on human nature.
The pattern runs roughly as follows:
· New data is discovered that challenges the currently accepted theory.
· Cycles and epicycles – essentially, addenda – are added to the current theory in an attempt to accommodate the new, challenging data.
· Someone comes up with a new theory, which tends to explain the new data better than the old theory.
· This someone is subjected to criticism, often very aggressive (if not out-and-out abusive).
· As more and more data is made available, the newer theory begins to attract advocates, and defenders of the old theory become less vocal, or switch sides altogether.
· Once a preponderance of self-proclaimed experts has accepted the new theory, the notorious paradigm shift has occurred, and the new theory becomes the most widely accepted one – until new discoveries uncover facts that challenge it, and the cycle begins again.
Of course, Kuhn was writing about the hard sciences – his primary examples included cosmology – but I believe that his insights have bearing in the world of the management sciences. Yes, I know that real scientists scoff or cringe at the term “management science,” and they probably have good reason to do so. After all, when one of them comes up with a theory, say, of the best solvent to extract aminobenzoate, she can verify and repeat her theory’s application in an experimental setting. While project management aficionados can point to one project disaster after another that occurred when leaders eschewed basic PM techniques, such as the creation of scope, cost, and schedule baselines, there are far too many other factors in play. The conditions necessary to test for baseline utilization alone are far too difficult to isolate in our laboratory, since our laboratory is the world of business, which is to say the free enterprise marketplace. And, just to make the experimental verification of management science theories even more difficult, elements of topics ranging from organizational behavior and performance to management information system architecture come into play, resulting in a hopelessly complex, if not out-and-out chaotic, experimental environment.
So, what happens when someone has an idea, a hypothesis that they genuinely believe is a useful managerial insight, but one that runs counter to existing, widely accepted practice? Well, academics and real scientists publish their ideas in peer-reviewed journals, hoping to persuade their colleagues to at least evaluate the efficacy of the new theory, while documenting the experimental data that supports the new idea. So-called management scientists tend to be either primarily managers themselves, or business owners (academics belong to a separate category). These people’s insights are often the very thing that keeps their organizations competitive, which means that, if they do have a genuinely paradigm-shifting insight on how to manage much more effectively or efficiently, they actually have a disincentive to have that idea widely disseminated. It’s far better for them to try out their idea in the business world, to see if it attracts profits or accomplishes scope objectives at a greater rate than others’ ideas on how a company or project ought to be run. If it works, they will stick with it. If it doesn’t, they will abandon it. In either instance, shouting the idea from the published management sciences rooftops is contraindicated (to borrow a real scientific term).
In fact, if a large project were to be concluded successfully while indulging some management pathologies, its principals would be rewarded if they could make the case that that success was because of those suspect tactics. Kind of makes me wonder about those who are eager to “transfer” their “knowledge.”
Ultimately, as a project management practitioner, if you are privy to truly usable knowledge, knowledge that makes your organization or project team better, you might want to think twice before you transfer it, and to whom.
In evaluating the whole concept of knowledge transfer, it’s sometimes easy to become frustrated with the amount of energy devoted to analyzing the topic. I mean, seriously, isn’t “knowledge transfer” what happens when people engage in conversation? How hard can it be? Well, recall the classic scene from Monty Python and the Holy Grail, where Michael Palin’s King character is trying to tell two guards to “stay here and make sure he” (meaning the prince standing four feet away) “doesn’t leave.” The guards, who appear otherwise normal, can’t understand the simple command and, in the movie at least, hilarity ensues. It can be that tough.
So, in the analysis phase communications experts will usually invoke some truisms, such as that it’s the responsibility of the communicator to make the message clear to the receivers, and not the responsibility of the receivers to divine the transmitter’s intended meaning. Generally speaking, that’s true, but the exceptions to the rule(s) are the dangerous elements here.
For example, back when I was writing the Variance Threshold column for PMNetwork magazine, a fellow wrote to my e-mail address taking me to task for a piece I had written on critical path scheduling, in which I discussed variables to consider when deciding how to handle schedule float. This person wrote in a rather condescending tone, challenging my analysis of schedule float based on his assertion that float did not exist. I swear I am not making this up. At first I assumed that nobody who described themselves as a “senior scheduler,” as this person did, could be that ignorant, and therefore took his meaning as being akin to some of Eliyahu Goldratt’s work in the novel Critical Chain. In this – again – novel, the project manager protagonist transfers personnel from non-critical path activities to critical ones, thereby shortening the project’s overall duration. Of course, since the idea was being presented in a fictional venue, the tactic succeeded fantastically, the protagonists lived happily ever after, no character ever stepped up and said “Hey! Isn’t that tactic identical to the old ‘crashing the schedule’ trick?”, and the body of project management theory was altered (and not necessarily for the better).
Graciously assuming my provocateur was referring to this bit of shoddy management science, I responded with the reasons I thought “critical chain” was not valid. The e-mail writer wrote again, stating flatly that that wasn’t what he was talking about. Rather, he clarified that schedule float did not exist because (I’m paraphrasing), when a manager told a project team member to do something, well, then, by golly, that thing had to be done then. Clearly, this fellow was resistant to knowledge transfer, at least when it came to CPM or came from me. I was tempted to respond with something along the lines of “Stay here, and make sure he doesn’t leave,” but thought the better of it. I’ve had similar e-mail interactions with those who took exception to my criticizing risk management techniques – some of those exchanges were positively hysterical.
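For what it’s worth, schedule float isn’t a matter of opinion – it falls directly out of the forward and backward passes of the Critical Path Method. Here is a minimal sketch; the four-activity network, its durations, and its dependencies are purely illustrative, not taken from any real project:

```python
# Hypothetical network: activity -> (duration, list of predecessors).
# Listed in topological order (each activity appears after its predecessors).
activities = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for act, (dur, preds) in activities.items():
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_end = max(ef.values())

# Backward pass: latest start (ls) and latest finish (lf).
ls, lf = {}, {}
for act in reversed(list(activities)):
    dur, preds = activities[act]
    succs = [s for s, (_, ps) in activities.items() if act in ps]
    lf[act] = min((ls[s] for s in succs), default=project_end)
    ls[act] = lf[act] - dur

# Total float: how far an activity can slip without delaying the project.
total_float = {act: ls[act] - es[act] for act in activities}
print(total_float)  # {'A': 0, 'B': 0, 'C': 3, 'D': 0}
```

Activity C can slip three time units without moving the project’s end date; A, B, and D cannot. That difference is all “float” has ever meant – telling a team member “do it now” doesn’t make the arithmetic go away.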
I think the common thread here is that, once a person forms an emotional attachment to something otherwise as innocuous as a project management theory, hypothesis, idea, technique or tactic, facts and insight – knowledge – suddenly have a much harder time being transferred into that person’s thinking. This may be due to (ironically enough) a derivative of the sunk-costs argument, where, say, a professional risk analyst assigned to a non-pharmaceutical project looks back on those semesters of statistics classes suffered through (strike that) taken and passed, and can’t understand why the Earned Value and Critical Path information streams are so much more valuable than his Monte Carlo simulations. Frustration sets in, and the next thing you know mildly unhinged missives start showing up in some columnist’s in-box.
In any event, evaluations of knowledge transfer are usually predicated on the notion that there exist persons willing and able to receive the knowledge, once proffered. I’m not sure that’s always the case.
One of my business school professors – I can’t recall if it was in Organizational Behavior and Performance, or Information Technology – once offered up an axiom: knowledge and information transfer is a two-way street, and if attempts are made to squelch it, such communications will manifest in unexpected ways. I accepted his assertion on its face, without really understanding its implications. I didn’t have long to wait to find out.
I was working at the time for a medium-sized contractor, whose executive leadership indulged in a certain elitism that insulated them from any criticism from the lower-ranking members of the company, no matter how accurate or sincerely intended. This elitism was a source of frustration for the company’s workers, who largely felt that they were in a position to help the organization achieve its (stated) corporate goals, if they could only be heard. The first part of my professor’s axiom was in place – the executives’ arrogance had a tamping-down effect on upward communications.
This company printed a little (around 8 pages) monthly newsletter, entitled the XYZ News (“XYZ” being the stand-in for the company’s actual acronym). This newsletter represented the downward-flowing communications, the bits of knowledge and information that these executives wanted the rest of the organization to know. It contained stories about project successes, proposal wins, upcoming events, and some vignettes about the upper-level managers or owner. Information about project failures, proposal losses, stories about non-execs, or any other item that might put the company in a bad light was banned from the official newsletter, and left to rumor or hearsay.
Along about this time the knowledge transfer function dramatically broke free of the executives’ attempts at managing it. An underground newsletter sprang up that mimicked the company’s official newsletter – font, banner, and all. The entries in the faux newsletter, however, focused on the significant events going on in the organization that the executives did not want the rank-and-file to know about, and the writing style threw in enough snark to amuse the workers and absolutely infuriate the leaders. An effort was immediately launched to discover the producer of the imposter. People’s computers, hard drives, printers, and desks were searched, but the phantom publisher was never discovered.
Several issues of the fake newsletter were published prior to the time I left this company, and at least a couple were printed after my departure, or at least that’s what I heard from my friends and former co-workers. The company ended up being sold to a competitor within the year, at least in part due to the out-of-touch decisions being made by the executives.
What I found fascinating about the whole affair was the way that my professor’s assertion had come so dramatically true. The sense of superiority that these executives had indulged was largely detached from any true meritocracy, and was based instead on a pervasive cronyism. However, since these people were the bosses, it was really quite impossible to challenge any of their decisions, no matter how ill-advised.
It has been my experience that whenever the topic of knowledge transfer is broached, an automatic assumption is in place: that such transfers necessarily involve communicating the precious knowledge from the more august, learned, and, well, knowledgeable people – invariably highly placed within the organization – to those in need of such knowledge, usually the lower-ranked ones. But, at least in this case, the necessary knowledge needed to go in the exact opposite direction, and did so, albeit not in a form those who needed it were willing to accept. They were only okay with sending “knowledge” down the hierarchy.
So, in this case, the knowledge transfer went down, when it needed to come up.
In the classic Star Trek episode “The Paradise Syndrome,” Captain Kirk finds himself on a planet inhabited by people who practice the customs of North American native tribes but which, unfortunately, is about to be struck by a huge asteroid. A nearby obelisk atop a structure (one that clearly could not have been constructed by the natives) is more than just an anomaly – it’s a device that an advanced civilization (the “Preservers”) had put there untold years prior to deflect just such asteroids. However, the tribe’s medicine man, Salish, has no idea how to get inside the structure and invoke its protective capabilities. A stranded Kirk asks Miramanee, his love interest and future wife (a lot of people forget that, for much of the third season, James T. Kirk was actually a widower), how it came about that Salish never received instruction on how to access the “temple,” and she replies that Salish’s father did not want to share the information too early, and died unexpectedly, leaving them in their current precarious situation.
Which brings us to ProjectManagement.com’s March theme, knowledge transfer and management. It’s axiomatic that information is the life-blood of any organization, particularly corporations and companies, and especially within project teams. It’s simply human nature to preserve that which makes a project team member valuable to the project team, and, in most instances, that’s some unique knowledge or capability. When everybody knows how to, say, operate Critical Path Methodology software effectively, then the price of CPM specialists drops precipitously. This is true of virtually any job that entails specialized knowledge, or advanced capability (which usually comes about because of specialized knowledge).
So, once again we have an instance of the best theoretical approach to a project management problem running smack-dab into legacy elements of human nature. The more heavy-handed amongst us will tend to demand policies that appear to combat such natural tendencies, while the more passive consultants will engage in (yet another) round of eat-your-peas-style hectoring, about how people ought to mentor, or engage stakeholders, or document their insights, blah blah blah.
My take, predictably enough, is a bit different. It’s based on the now-clichéd observations of how project teams become high-performers, specifically the Forming-Storming-Norming-Performing stages attributed to Bruce Tuckman in 1965. Newly-formed project teams that move into the Storming phase of this pattern can be expected to be composed of individuals who are disinclined to do any knowledge sharing or transfer. Why? Because they don’t know where they fall within the team’s hierarchy. Offering up any insights that may be unique to them would make those insights suddenly un-unique, and the team member possessing them less valuable in a comparative sense. This reluctance is compounded if members of the team believe that others are competing with them for status within the group – the competitive dynamic essentially destroys opportunities for cooperation, since the team’s members are attempting to establish their relative value to team leadership.
So, how does the project manager help reduce or eliminate inter-team rivalries, so as to shorten the Storming phase as much as possible? A few tactics are clearly indicated:
· Prohibit ex parte discussions about team members. This is a common but destructive tactic used by those attempting to curry favor with management at the expense of other team members.
· Make clear the precise ranking of each member of the project team, and how that ranking can be advanced. The answer to that second part is always “by helping the project team attain its objectives,” and never about the individual moving up with respect to peers.
· Finally, try to transfer the competitive instinct away from issues interior to the team, and on to the way things are unfolding exterior to the group, i.e., the team’s competitors.
I’ll discuss more advanced and nuanced methods in other March blogs. For now, though, if you find yourself on a planet inhabited by North American natives, about to be smashed by an asteroid, make sure Salish knows that slashing your hand open with a knife doesn’t help deflect the planet-crushing asteroid one little bit, and he needs to knock off the whole extending-the-Storming cycle gig.
As February enters into its final week, so, too, does my blog enter into its last opportunity to gratuitously disagree with my fellow ProjectManagement.com columnists and bloggers about the whole quality management thing. But it’s this very realization, that I’m coming up against a limit to my contrarianism (well, about this topic, anyway), that helps highlight my overall objection to the quality management guys: they don’t know when to quit!
Consider this definition of quality management from Investopedia:
The act of overseeing all activities and tasks needed to maintain a desired level of excellence.[i]
For those who are not themselves quality consultants and did not blanch upon reading the three-word term “overseeing all activities,” go back and read it again, please. The added “…needed to maintain a desired level of excellence” is somewhat perfunctory. Seriously, what aspect of the management world isn’t part of “maintaining a desired level of excellence”? This definition of quality management, all by itself, has outed the quality aficionados as either (a) being unable or unwilling to state the epistemological limits of their discipline, or (b) really intending to take over the management science world. In these two respects, the QC guys are eerily similar to one of my other favorite management science overreaching targets, the risk managers. Also like the risk managers, the QC guys rely heavily on statistical analysis to support their conclusions and recommendations. However, unlike the risk management-types, the quality gurus not only attempt to capture the impacts of their recommended changes, they often insist on it – which brings me to another disagreement I have with them. It’s really impossible to completely quantify the overall economic impact of altering the quality of a given product or service. Again as with the risk managers, there are simply too many factors to capture, much less evaluate.
But let’s return to the overreaching part. Suppose you, dear reader, hire me as a quality control consultant for, say, your yacht company. My first day there, I notice that the machine that your company is using to mold ship parts is set slightly below the recommended temperature for the type of plastic resin being supplied. I ask you about it, and you tell me that, due to the price of energy in California, your accountant (“Melvin”) performed an analysis that showed that lowering the molding furnace temperature by 10 degrees Fahrenheit would save $21,258.52 per month. However, a cursory reading of the supplier’s material data sheet indicates that, if the molding machine is not kept above the recommended temperatures, the resulting parts will experience a cohesion degradation of 24%, and, when the $21,258.52 per month “savings” are spread out over the number of vessels being constructed, it ends up being a mere 1.2% savings per ship. Because I’m aware Melvin was opposed to your hiring a quality consultant in the first place, I’ll put the question to you this way: Are you sure you want to sacrifice hull integrity by 24% for a savings of 1.2% of the cost?
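The arithmetic behind that question is worth making explicit. The monthly savings figure comes from the anecdote above; the production volume and per-hull cost are assumptions I’ve chosen so that the numbers work out to roughly the stated 1.2%:

```python
# Back-of-the-envelope check of the yacht example.
monthly_energy_savings = 21_258.52   # from Melvin's analysis (per the anecdote)
ships_per_month = 4                  # assumed production volume
cost_per_ship = 443_000.00           # assumed per-hull cost

# Spread the monthly savings across the month's production value.
savings_per_ship_pct = 100 * monthly_energy_savings / (ships_per_month * cost_per_ship)
cohesion_loss_pct = 24.0             # from the supplier's material data sheet

print(f"Savings per ship: {savings_per_ship_pct:.1f}%")   # ~1.2%
print(f"Hull cohesion loss: {cohesion_loss_pct:.0f}%")
```

Whatever the actual volume and hull cost, the structure of the trade-off is the same: a fixed monthly saving shrinks to a trivial fraction once it is spread over the value of the product, while the quality degradation applies to every hull shipped.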
When you reply with the obvious answer, I continue: “Why did you hire Melvin in the first place?” Surprised at my audacity, you politely invite me to confine my analysis to matters of quality. My response: the very definition of quality management is to oversee all activities needed to maintain a desired level of excellence, and your company’s ships have suffered from a significant drop in quality because somebody hired Melvin here. Clearly, your human resource activities fall within my purview. Also, I have some questions about how you raise your kids…
Okay, that last bit is clearly out of bounds. But that’s my point – so was challenging the decision to hire Melvin, even though, based on the definition of quality management, the challenge was perfectly within the quality management consultant’s purview. Without a clearly articulable upper limit on which decisions may be challenged or overturned under the guise of what’s “needed to maintain a desired level of excellence,” overreach is virtually guaranteed.
It’s analogous to ProjectManagement.com bloggers telling their readers how to raise their kids.
[i] Retrieved from Investopedia, http://www.investopedia.com/terms/q/quality-management.asp, February 21, 2015, 13:25 MST.