As February enters into its final week, so, too, does my blog enter into its last opportunity to gratuitously disagree with my other ProjectManagement.com columnists and bloggers about the whole quality management thing. But it’s this very realization, that I’m coming up against a limit to my contrarianism (well, about this topic, anyway) that helps highlight my overall objection to the quality management guys: they don’t know when to quit!
Consider this definition of quality management from Investopedia:
The act of overseeing all activities and tasks needed to maintain a desired level of excellence.[i]
For those who are not themselves quality consultants and did not blanch upon reading the three-word term “overseeing all activities,” go back and read it again, please. The added “…needed to maintain a desired level of excellence” is somewhat perfunctory. Seriously, what aspect of the management world isn’t part of “maintaining a desired level of excellence”? This definition of quality management, all by itself, has outed the quality aficionados as either (a) being unable or unwilling to state the epistemological limits of their discipline, or (b) really intending to take over the management science world. In these two respects, the QC guys are eerily similar to one of my other favorite management science overreaching targets, the risk managers. Also like the risk managers, the QC guys rely heavily on statistical analysis to support their conclusions and recommendations. However, unlike the risk-management types, the quality gurus not only attempt to capture the impacts of their recommended changes, they often insist on it – which brings me to another disagreement I have with them. It’s really impossible to completely quantify the overall economic impact of altering the quality of a given product or service. Again, as with the risk managers, there are simply too many factors to capture, much less evaluate.
But let’s return to the overreaching part. Suppose you, dear reader, hire me as a quality control consultant for, say, your yacht company. My first day there, I notice that the machine that your company is using to mold ship parts is set slightly below the recommended temperature for the type of plastic resin being supplied. I ask you about it, and you tell me that, due to the price of energy in California, your accountant (“Melvin”) performed an analysis that showed that a lowering of the molding furnaces by 10 degrees Fahrenheit would save $21,258.52 per month. However, a cursory reading of the supplier’s material data sheet indicates that, if the molding machine is not kept above the recommended temperatures, the resulting parts will experience a cohesion degradation of 24%, and that, when the $21,258.52 per month “savings” are spread out over the number of vessels being constructed, it ends up being a mere 1.2% savings per ship. Because I’m aware Melvin was opposed to your hiring a quality consultant in the first place, I’ll put the question to you this way: Are you sure you want to sacrifice hull integrity by 24% for the savings of 1.2% of the cost?
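For readers who want to check Melvin’s math, here’s a minimal sketch. The monthly savings figure comes from the story above; the ship count and per-ship cost are invented assumptions, chosen only so the per-ship number lands near the 1.2% quoted:

```python
# Hypothetical figures: monthly_savings is from the story; ships_per_month
# and cost_per_ship are illustrative assumptions, not from the original.
monthly_savings = 21_258.52      # Melvin's claimed energy savings per month
ships_per_month = 2              # assumption
cost_per_ship = 885_000.00       # assumption (USD)

savings_per_ship = monthly_savings / ships_per_month
pct_of_ship_cost = 100 * savings_per_ship / cost_per_ship
print(f"Savings per ship: ${savings_per_ship:,.2f} ({pct_of_ship_cost:.1f}% of cost)")
```

Under those assumptions the “savings” works out to about 1.2% of each vessel’s cost, against a 24% hit to hull cohesion.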
When you reply with the obvious answer, I continue: “Why did you hire Melvin in the first place?” Surprised at my audacity, you politely invite me to confine my analysis to matters of quality. My response: the very definition of quality management is to oversee all activities needed to maintain a desired level of excellence, and your company’s ships have suffered from a significant drop in quality because somebody hired Melvin here. Clearly, your human resource activities fall within my purview. Also, I have some questions about how you raise your kids…
Okay, that last bit is clearly out of bounds. But that’s my point – so was the challenging of the decision to hire Melvin, even though, based on the definition of quality management, the challenge was perfectly within the quality management consultant’s purview. Without a clearly articulable upper limit on which decisions may be challenged or overturned under the guise of what’s “needed to maintain a desired level of excellence,” overreach is virtually guaranteed.
It’s analogous to ProjectManagement.com bloggers telling their readers how to raise their kids.
[i] Retrieved from Investopedia, http://www.investopedia.com/terms/q/quality-management.asp, February 21, 2015, 13:25 MST.
There’s a story that, at the onset of Operation Desert Shield (the United States’ and her allies’ effort to kick Saddam Hussein out of Kuwait), the Pentagon sent out a clue that something big was about to happen by inundating the nearby pizza delivery restaurants with so many orders that it was clear the employees were going to be there through at least dinner time.
As I’ve discussed previously, project management information systems that deal with actual project performance – those based on Earned Value and Critical Path methodologies, all others being fake – have two overarching functions: (1) to put into the hands of the project teams’ decision-makers the information they need to select the best strategies, and (2) to provide a narrative after-the-fact as to what happened, and why, as the project pursued its objectives. The first of these functions can be established relatively quickly and easily since (as I have also discussed before) both Earned Value and Critical Path can provide essential performance information without many of the formalities often associated with them, such as highly detailed cost estimates or intricately linked schedule baselines. The providing-the-narrative function, however, does need this level of formality and detail. But, like the Pentagon swamping local area pizza restaurants, there are going to be times that the prescient project manager does not want the particulars of her project’s narrative known to others, even if they do assert a claim to being a “stakeholder.”
The reasons for this are many. The particular project may have national security issues, or may involve the use of techniques or technologies that represent trade secrets. It may be as simple as the project team’s organization not wanting to share its insights on how to execute projects with its competitors. Whatever the reason, the very project management information systems that constitute the life-blood of informed decision-making can also be used against the owning organization and, in my humble opinion, this represents a quality issue.
So what we have with respect to assessing the quality of project management information systems are two essential, but potentially overlooked, components: the level of difficulty involved in establishing the information stream, and the potential for misuse of that very information once it becomes available. In each instance, quality control plays a central role, and it’s not always positive. For example, the notion that a project’s cost baseline must be considered to be of poor quality if it does not take into account the most recent rates for labor, equipment, or any other line-item in the basis of estimate (BoE) is simply false. Perfectly usable and effective cost baselines can be (and are) created with budget estimates rather than detailed estimates, and those insisting that only detailed estimates will do are misguided. Depending on the project’s size and complexity, putting in the extra effort to create a detailed estimate (and it’s not cheap) will often lead to the attempt to derive other, irrelevant information from the cost baseline, such as the difference in price between the individual elements from the BoE and the same-period actual costs. The fact that comparing budgets to actuals is useless is axiomatic among the better cost engineers, and this fact does not change simply because the comparison is occurring at a far more detailed level.
I’m sure many (if not most) of my readers have recognized a surge in the demand for schedulers who are adept – not at the creation and maintenance of Critical Path networks, but at forensic analyses of projects that are either nearing completion or complete, where some sort of claim/counterclaim is going on. This gets back to the narrative-building aspect of these information systems, and needlessly-detailed baselines are a gold mine to plaintiffs in these challenges. Any deviation from the original plan, no matter how innocent or appropriate, can be misinterpreted as a quasi-breach of contract, essentially leading to the excusing of poor performance from subcontractors due to the collection and placement of irrelevant data in the project’s baselines.
But, hey! At least those who insist that quality baselines are synonymous with extremely detailed baselines are happy!
I can hear my critics now – “Does Michael always have to play the contrarian?” Well, not always … only around 68.3% of the time. But it’s my perception that around 99.7% of the articles and blogs on the topic of quality counsel the same thing: you gotta do it if you’re not, and, if you are, you need to do more of it (quality analysts: We see what you’re doing there! Those percentages are one and three standard deviations from the mean on a normal curve! You’re not that clever, Hatfield!). With this prevailing narrative so entrenched, it virtually invites charlatanism masquerading as legitimate management insight. And, just for the record, management science charlatanism doesn’t need much of an invitation to take root and spawn multiple business pathologies.
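And yes, those percentages really are the one- and three-standard-deviation coverage of a normal distribution. A quick sketch using nothing but the standard library confirms it:

```python
import math

def within_sigma(k: float) -> float:
    """Probability that a normal variate falls within k standard deviations of its mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"±{k}σ: {100 * within_sigma(k):.1f}%")
# prints 68.3%, 95.4%, and 99.7% for k = 1, 2, and 3
```

So a blogger who contradicts the consensus 68.3% of the time is, statistically speaking, just one standard deviation from normal.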
Take one of the more common tools used by the quality managers, the Causality Analysis. If your organization is producing sub-standard products or services, the first, most obvious question is “Why?” The typical Six Sigma analyst will begin here, by taking the recognized fault and tracing it back to its origins. They do this by interviewing the members of the project team who were involved in producing the sub-standard output in order to construct the narrative of the product’s/service’s journey from its beginning to the end-user, who found it unacceptable. Common sense, right?
Here’s my problem with that: outside of the quality control meme, few people have a clear grasp of the difference between proximate cause and material cause, rendering their proffered narratives on what happened during the delivery of the substandard product/service entirely subject to their own, biased perceptions. Without a clear philosophical understanding of this and other differences, it’s not long before causality links are being created, connecting events that have nothing more to do with each other than having occurred sequentially, leading to things like sports fans wearing specific clothing on game day, or the observance (or even celebration) of Groundhog Day.
There’s also the issue of what represents an appropriate level of quality. A Rolex® Submariner sells for between $7,000 and $10,000 (USD), and is accurate to within 6 seconds per day. There are $30 (USD) digital watches that would need years to be off by 6 seconds. Obviously, people are paying for more than the ability to tell time accurately when they purchase the Submariner – but to the tune of 99.7% of the watch’s price? (Quality analysts: stop with the precise alignment with the standard deviations, already!) And this is my point – should outside Six Sigma “black belts” be brought into Rolex’s plant in order to search out a problem, my guess is that the first investigation on their agenda would be to discover ways that the watches could be made more accurate, since that is the nominal job of a timepiece, after all.
Such an analysis would clearly miss the point. However, should a certain blogger with iconoclastic tendencies who writes for ProjectManagement.com actually assert that (a) sometimes what the quality gurus are bringing to the table isn’t appropriate to the product or service, and (b) even when standard QC techniques are called for, it’s easy for all sorts of biases to enter into the narrative of causality, then the accusations of being “against quality” become fairly easy to make.
Look, I’m not “against” quality per se. I want my devices functioning properly for long periods of time, and fully expect the manufacturers of those devices to perform whatever quality control or quality management they need to do in order to fulfill that expectation – or else I’m taking my business elsewhere. But that’s my final point: the free marketplace will signal your company or project team when its quality of output has become a problem, not the quality gurus who are ever on about how we should be observing ISO 9000 or else we’re poor managers, yada yada yada.
Now, if my readers will excuse me, I’ve got to re-read this blog 99.7 times to make sure I haven’t made any syntax errors…
Standing right behind the risk managers on the list of management specialties that would love to slap me upside the head with a two-by-four are the quality guys, and for similar reasons. I believe each discipline asserts the efficacy of their techniques well beyond their true effectiveness, often pushing all the way to irrelevancy. Whereas I’ve characterized what the risk analysts do as little more than institutional-wide worrying, I’m going to characterize what the quality experts do when they go too far as eat-your-peas-style hectoring.
Don’t get me wrong – when your organization has a genuine problem with the level of quality being put out in goods and services, management simply has to call in a quality specialist, even if their predilection for assigning rank among themselves is irksomely similar to the way martial artists do it. They’ll create their process maps and fishbone diagrams, failure mode assessments and Total Quality Management award applications, and eventually come up with a usable answer to how to improve the company’s goods and services. It’s what happens in the meantime, and afterwards, that has me irritated.
Consider the story of Christian Frederick Martin. Martin was a guitar maker in Germany in 1833, when the Defenders of Quality were the guilds. To advance in any industry dominated by these 19th-century TQM enforcers, one had to serve time as an apprentice before even being considered a candidate to move up in the ranks, to an acknowledged craftsman, or master of the trade. Martin and his family belonged to the (gasp!) cabinet-makers guild, and were opposed in their guitar-making by the local violin-makers’ guild. Of course, freed from the fetters of the guilds, your nominal 19th-century Elvis Presley would have had the consumer’s option of deciding if he wanted to gyrate around the stages of Europe with a guitar slung down low that had been manufactured by a cabinet-maker, or a violin-maker, and that would be that. But, nooooooo….
In 1833, the cabinet-makers issued a statement, which read, in part,
"The violin makers belong to a class of musical instrument makers and therefore to the class of artists whose work not only shows finish, but gives evidence of a certain understanding of cultured taste. The cabinet makers, by contrast, are nothing more than mechanics whose products consist of all kinds of articles known as furniture." Slandering the work of the cabinet makers, the Violin Guild added: "Who is so stupid that he cannot see at a glance that an armchair or a stool is no guitar and such an article appearing among our instruments must look like Saul among the prophets."[i]
Realizing that he would never be completely free to manufacture guitars the way he wanted, Martin left Germany and came to the United States, where he founded a guitar company whose name today is virtually synonymous with exceptional quality. And, when the real Elvis Presley burst onto the scene in the 20th century, he had one of Christian Frederick Martin’s company’s guitars slung low as he gyrated across the stages of the American Midwest.
But note how the guild went after the non-violin guild’s guitar-makers – it was purely on quality grounds. The violin guild had set themselves up as some sort of quality police force, bravely saving the people of Germany from having to endure bad guitar music (if only they could have been around for the onset of “grunge rock”). In the long run, it was the free market that drove the quality of the guitars upwards, not the harrumphing of the violin guild.
For those quality managers who are not impressed by this blog, and still wish to slap me upside the head with a two-by-four, I have this question: are you aware that two-by-fours are, in fact, one and one-half inches by three and one-half inches?
[i] Retrieved from http://www.martinguitar.com/about-martin/the-martin-story.html?id=170, January 31, 2015, 14:10 MST.
As I alluded to in last week’s blog, the formal project review is perhaps the most important thing that the program manager does. Here, each of the projects in the program has a representative discuss the project’s performance, concerns, and issues. Members of ProjectManagement.com have access to a variety of templates that help with the basics of project management, ranging from risk management checklists to baseline change proposal forms, and these forms are very useful. I don’t recall seeing a template for aiding the program manager with conducting these project reviews, so I thought I’d take a stab at it. Who knows? If Cameron likes this form, he may invite me to submit others! Just for the record, though, neither Cameron nor anybody else from ProjectManagement.com has invited me to roll out the following project Review Agenda Template (RAT) – this is entirely on my own.
Back when I received my PMP® (and it wasn’t recent – my PMP® number is 1004), the PMBOK Guide® had the following major chapters:
· Scope
· Time / Schedule
· Cost
· Quality
· Communications
· Risk
· Human Resources
· Acquisition / Procurement
I have long contended that some of these areas are more important to project (and, by extension, program) management than others, but these chapter headings did provide me with the foundation for my proffered template. If we assume that a given major program has twelve projects within its purview, and that the program manager doesn’t want to drag the reviews out for more than ½ day, then we’ll structure the RAT so that each project’s representative has 15 minutes to get everybody up-to-date. I’ve adjusted the agenda headings to reflect my take on the value of the PMBOK Guide® chapters. So, with no further ado, here’s my RAT.
Review Agenda Template (RAT)
Project Reviews for Program _____________ Held on (date, time) at (place)________
Each project will provide updates for the following PM areas:
Scope: (30 seconds. I mean, c’mon, if the program manager doesn’t even know what your project is about, you could probably ditch this meeting altogether.)
Cost: (5 minutes [less, if you’re verifiably on-time, on-budget]. It’s appropriate that this subject and the next one take up the lion’s share of the review. All anybody really wants to know is, if you’re having a problem, how much it will cost to fix it.)
Schedule: (5 minutes [again, less if you are on a pace to finish early]. If you just got done reporting a Schedule Performance Index [SPI] of less than 1.00, and your critical path schedule is indicating an on-time finish, you’ve constrained too many milestones.)
Risk: (12 seconds, just long enough to have the project rep blurt out “After the baseline is approved, risk management is largely irrelevant.” Besides, a risk analysis of this review indicates that you’ll need to set aside 3 minutes as Contingency.)
Quality: (1 minute firm. Over-wrought statistical investigation does not a valid quantitative analysis make. Besides, this is more of a personnel/process issue than specific-project related.)
Communications: (18 seconds, unless you really want another lecture on the alleged importance of “involving stakeholders.”)
Human Resources, Procurement: Whoops! Look at that – we’re out of time! We’ll let you guys report on your goings-on at the next review.
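For anyone rusty on the jargon in the Schedule item above: the Schedule Performance Index is simply earned value divided by planned value. A minimal sketch, with all dollar figures invented purely for illustration:

```python
# Minimal earned-value sketch; every dollar figure here is invented for illustration.
planned_value = 400_000.0   # BCWS: budgeted cost of work scheduled to date
earned_value  = 360_000.0   # BCWP: budgeted cost of work performed to date
actual_cost   = 390_000.0   # ACWP: actual cost of work performed to date

spi = earned_value / planned_value   # below 1.00 means behind schedule
cpi = earned_value / actual_cost     # below 1.00 means over budget
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")
```

A rep reporting an SPI of 0.90 while the critical path schedule still shows an on-time finish is exactly the contradiction the Schedule agenda item is designed to flag.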
Feel free to reproduce this form for your project reviews – just don’t attach my name to it. I’m busy enough ducking risk, communications, quality, human resources, and procurement managers as-is.