My regular readers are probably aware of the extent to which I despise the tactic of introducing or advancing a theory or concept in a fictionalized setting, the so-called “business fable.” Think about it – how can we even refer to “management science” if any of its currently-adopted precepts come to us as stories, rather than through legitimate experimentation or scholarship? Which famous scientists of old used this tactic? Imagine if Einstein had introduced the General Theory of Relativity with talking animals or other invented characters. If he had, wouldn’t that automatically render his scholarship extremely dubious?
Not so with “management science,” no siree! Eliyahu Goldratt introduced the concept of Critical Chain in a novel of the same name. An instructor who’s also a new project manager introduces a new (really, an old) twist to critical path analysis, tries it out on a difficult project, and – wouldn’t you just know it? – it works like a charm! B.F. Skinner did something similar in the field of psychology, when he published Walden Two. In this book, a commune of sorts is set up, the governance of which is based on the precepts of what would later become known as Behaviorism, and a reporter goes to see what’s going on at this place. All of the inhabitants are perfectly happy, with no crime or deviant behavior of any kind present, all thanks to the Behaviorism-based approach of the commune’s managers and directors. Beyond Freedom and Dignity, the book that actually laid out Behaviorism in a scholarly fashion, wouldn’t be published for another 23 years. Skinner may or may not have had real people or patients who benefitted from his theories, but he had hundreds of imagined ones who did!
Which brings us to Who Moved My Cheese? by Dr. Spencer Johnson (who, interestingly enough, has a degree in psychology). Who Moved My Cheese? is set in a maze, with two mice and two miniature people seeking out supplies of the only foodstuffs referenced, cheese. Ironically, mice running around in mazes provide a substantial part of the hard data Behaviorists use to support their hypotheses and theories, but Dr. Johnson’s maze is purely imagined. As the available supplies of cheese move about the maze, the two miniature humans discuss how they will respond to their changing environment, which provides many opportunities for fortune-cookie-style managerial axioms to be introduced (usually by writing them on the walls of the maze) and evaluated, again in a purely literary setting. Actually, can any hypotheses really be said to have been “evaluated” in an unreal setting?
Compare this approach to advancing managerial concepts to the one used by the genuinely brilliant Nassim Taleb in The Black Swan: The Impact of the Highly Improbable. Actually, don’t bother – there is no comparison. In fact, I would advise against reading these two books back-to-back, for the same reason one should never consume a Cold-Eeze lozenge and then drink a Diet Pepsi. However, I can’t help but conflate these two works. Since Who Moved My Cheese? is fiction, it cannot be translated into non-fiction; however, The Black Swan, being non-fiction, can be pushed into the other genre, so:
Spahk and Sarak are two miniature Vulcans who find themselves in an alien maze, co-occupied with two gophers, Spiff and Spurry. Using the mind-meld technique, Spahk learns from the gophers that caches of roots, bulbs, and grasses are deposited at certain locations within the maze, and exhorts Sarak to seek their necessary sustenance at those places.
“Wouldn’t it be easier to kill the gophers?” Sarak asks.
“Illogical. Vulcans are vegetarians. There is no reason to kill the gophers.”
“Except that they are competitors for the existing food supply. That, taken with the fact that gophers are considered pests and are rodents, should be enough reason.”
“What if the aliens who put us here maintain an emotional attachment to the gophers? If our treatment of them is part of the reason we are here – some sort of test, or experiment seems likely – then any hostile act may harm our chances of getting back to where we’re supposed to be.”
Sarak thinks about this briefly.
“Where, exactly, are we supposed to be?” he asks.
“I’m supposed to be on board the Starship Enterprise, and you are supposed to be at the Federation’s Diplomatic Compound on Earth.”
“But those places don’t exist outside of the science fiction genre. In fact, we, the miniature versions of ourselves, do not exist outside of a blog on ProjectManagement.com. And, based on what I know of this particular blog’s author, we could fade away at any moment.”
“But wait!” Spahk exclaims. “I haven’t even passed along some jejune truism that really only has any applicability among human resource managers, or motivational speakers!”
Suddenly, two giant alien researchers appear above the maze, wearing white lab coats and holding clipboards. They begin arguing intensely, apparently about something to do with their marriage. One of them produces a weapon, activates it, and vaporizes the entire facility.
Soooo… I have to ask: if the preceding struck you as pretty silly, why, exactly? Because I used miniature Vulcans instead of humans, or gophers rather than mice? Was it the use of roots, bulbs, and grasses over cheese? Is it these stylistic differences that make the whole narrative inherently silly – or is it the business fable genre in and of itself? Better-written fiction is still fiction, after all.
When I saw December’s theme I was instantly reminded of my Management Information Systems professor from graduate business school. One of the exercises he passed out to the groups in the class had to do with scheduling a series of activities – about twenty-four of them – into a network. The activities’ durations and logic were included, so it was just a matter of placing them into a structure, conducting a forward pass and a backward pass, finding the critical path and calculating float. Since I was, at the time, a project controls analyst, this exercise was right up my alley (or so I thought). My boss at the time even got involved, and loaded the exercise’s parameters into a CPM software package to confirm my manually-derived answers.
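The mechanics that exercise called for – forward pass, backward pass, critical path, float – can be sketched in a few lines of Python. The four-activity network below is invented for illustration; it is not the professor’s twenty-four-activity problem:

```python
# Minimal critical path (CPM) sketch: forward pass, backward pass, float.
# The activity network below is invented for illustration.
acts = {                     # name: (duration, [predecessors])
    "design":   (3, []),
    "code":     (5, ["design"]),
    "document": (2, ["design"]),
    "test":     (4, ["code", "document"]),
}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for a, (dur, preds) in acts.items():     # insertion order lists predecessors first
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

# Backward pass: latest finish (lf) and latest start (ls), anchored at project end.
finish = max(ef.values())
ls, lf = {}, {}
for a in reversed(list(acts)):
    dur, _ = acts[a]
    succs = [s for s in acts if a in acts[s][1]]
    lf[a] = min((ls[s] for s in succs), default=finish)
    ls[a] = lf[a] - dur

total_float = {a: ls[a] - es[a] for a in acts}
critical_path = [a for a in acts if total_float[a] == 0]
print(critical_path)   # the zero-float activities form the critical path
```

Note that this is exactly the structure-then-arithmetic sequence the exercise demanded: logic first, then the two passes, and only then does float (here, two days on the documentation activity) fall out.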
On the day that I was to present my group’s results to the rest of the class, I was very confident, and even more so as the first group to present findings had the exact same answers that I had. But, as this first group was wrapping up, the MIS professor pointed out that the “document software” activity had been scheduled to be completed prior to the testing of the software itself (the project being scheduled was to develop a computer program), and was, therefore, mis-scheduled. That team’s presenter argued in vain that the problem’s stated parameters had been correctly integrated into their solution – this professor insisted that the team should have recognized the invalid parameter. Then it was my turn.
In presenting the set of answers that I knew this instructor was going to find unacceptable, I had but one, desperate ploy: I maintained that the CASE tool our project team was using included a self-documentation feature, meaning that the activity to develop the documentation was properly placed prior to the testing activity. He didn’t buy it, and I ended up receiving one of my very few Bs in graduate school (for those of you who didn’t attend graduate school, an A is an A, a B is really the equivalent of a C, and a C is pretty much an F).
Flash forward two years. I had achieved my Project Management Professional (PMP®) certification from PMI® at a time when that certification was a real bear to acquire. It involved taking eight 50-minute exams, with an hour for lunch, making it an all-day affair. You had to pass at least six of the exams to qualify to re-take the two you flunked, and around half of the test-takers failed to do even that. After receiving my PMP®, I became more involved in my local PMI® chapter, and arranged to proctor the next PMP® certification exam session. It was to be held on a Saturday, and the Wednesday prior I received the exams, blank answer sheets, instructions, and list of test-takers. These last two were of particular interest: my instructions were very clear that absolutely no student was to be admitted into the examination room late, and the list of test takers included my old MIS professor.
Sure enough, on Saturday everyone on the list except for my old professor was on-site, and in my employer’s conference room at 7:55 a.m. At precisely 8:00 I read them their instructions, informed them that the clock was running, and went upstairs to lock the front door. Just as I was re-entering the conference room, I heard agitated knocking at the front door. I looked at my watch: 8:05 a.m. I returned to the front door, which had clear glass panels in it, and there he was. I don’t know if he recognized me, but I, of course, recognized him. And I had a dilemma on my hands.
If I let him in, it would be in clear violation of my instructions from PMI®. I might also be adding a distraction to the people who were already in the room, pencils out, answering questions. And, if this guy could not find such an easy address as my employer’s at a time certain for something that was supposedly this important, did he deserve to be a PMP®? On the other hand, if I simply left him on the outside with little more than a gesture of pointing at my watch and mouthing the words “sorry, you’re too late,” could I feel confident going forward that I had not done so out of a desire for vengeance over his sloppily-evaluated class problem? I pretty much knew how he would have responded had our situations been reversed, when the appropriate evaluation question popped into my head: How would I want or expect him to respond if I were the one on the other side of that door?
I turned the deadbolt and let him in. He was effusive with his thanks, pouring out one lame excuse after the other for being late as he hurried into the exam room. I have no idea if he passed the minimum six, or if he attained his PMP®, much less did anything with it. In fact, I don’t even know if my true motivations here were philanthropic, or if I just wimped out on an opportunity to inflict a dose of well-deserved comeuppance.
But I do know this: looking back, I’m glad I made the decision I made.
In an article from the Reader’s Digest Treasury for Young Readers, you are shown how to construct a Hexapawn robot. Hexapawn is a game played on a nine-square board with, as one might expect, six chess pawns. The pawns move as they do in chess, and start on rows 1 and 3. The object of the game is to advance a pawn to the last row, capture all of your opponent’s pawns, or else put him in a position where he cannot move. The robot part of it has to do with twenty-four matchboxes, some maps, and colored beads. Little maps of every possible position are drawn up and placed on the tops of the matchboxes. Colored arrows indicate each possible move from that position, and correspondingly colored beads are placed in the matchboxes. You then “teach” the robot to play by playing game after game of Hexapawn, and removing the colored bead from the matchbox that corresponds to the last move of each losing game. After about eleven or so games, the robot becomes perfect, and cannot be beaten.
Before I go on to challenge outright the tons (literally) of research and writing that have gone into modern quantitative analysis in business, I want to discuss another game: the Ultimatum Game. In this game, the game manager approaches two subjects and makes the following offer: he will give them $100 (USD) on the condition that Subject B agrees to the first plan that Subject A articulates for splitting the money. If Subject B does not agree to Subject A’s plan, then neither person receives anything.
Game theorists attempting to determine Subject A’s best strategy for maximizing his payout calculated that the best proffered plan would be for A to receive $99 and B to receive $1, on the theory that B would rather receive $1 than nothing at all. But a funny thing happened to Subject A as he was preparing to deposit his $99: that plan was almost always rejected in actual experiments of the Ultimatum Game. There were actually instances where a 50/50 split, or even a split favoring Subject B, was rejected. After reviewing the data from these experiments, game theorists tended to chalk the dramatic differences between their theoretical expectations and real-world results up to “cultural” factors, or to Subjects B not acting in a rational manner. Nothing could be further from the truth.
Consider the calculated/expected outcome’s implications. If a stranger approached you and, say, a friend with whom you just happened to be walking down a sidewalk, and presented the Ultimatum Game’s rules, and your friend offered up the 99 – to – 1 split, does that not imply that your friend was 99 times more worthy of unearned largesse than you? And – the value of a single dollar bill being what it is – wouldn’t it be worth it to forgo the $1 in order to reject the implication? We haven’t even touched on Subject B’s willingness to punish Subject A for being greedy, or arrogant, or dozens of other reasons why the experimental data was so at odds with the theoretical projections.
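The gap between the theoretical 99/1 prescription and observed behavior is easy to simulate. The responder model below is my own invention for illustration – Subject B rejects with a probability that grows as the offer gets stingier, a crude stand-in for the fairness and punishment motives just described:

```python
import random

# Toy Ultimatum Game simulation (responder model invented for illustration):
# Subject B accepts a near-even split reliably, and rejects lopsided offers
# with probability proportional to how lopsided they are.
random.seed(1)

def responder_accepts(offer, pot=100):
    """Accept with probability offer / (pot / 2): certain at a 50/50 split."""
    return random.random() < min(1.0, offer / (pot / 2))

def expected_take(offer, pot=100, trials=10_000):
    """Average amount Subject A banks when offering `offer` and keeping the rest."""
    accepted = sum(responder_accepts(offer, pot) for _ in range(trials))
    return (pot - offer) * accepted / trials

greedy = expected_take(1)     # the game theorists' "optimal" 99/1 split
even = expected_take(50)      # a 50/50 split
print(greedy, even)           # the greedy plan banks far less on average
```

Against any responder who punishes lopsided offers, the “optimal” $99/$1 plan is the worst strategy on the board – no cultural hand-waving required.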
Which brings us to the problems with quantitative analysis in business as it is currently taught in the nation’s universities. The free marketplace is an extremely complex environment (it may even qualify as chaotic – there’s really no way of knowing). And yet, the most basic analysis tactics put forth in the current literature treat it as if it were relatively simple, and capturable mathematically. For example, the decision on whether or not to close your business when it is losing money is supposed to be predicated on whether or not your revenues cover your variable costs, rather than your total costs. Umm… yeah, but what if next week you are to learn of the award of a contract you bid on, with an estimated 50% chance of winning, work that would put you back into the black in a big way? Or of three such proposals? Of course, the kind of information that your general ledger can offer up can’t possibly capture that, and is, really, comically incapable of supporting a definitive quantitative analysis of that decision, one way or the other. The asset managers are simply turning to their version of the Hexapawn robot, and retrieving the colored bead that tells them what to do, not realizing that the game they are playing is nowhere near confined to a nine-square board. And, when their so-called quantitative analysis is proven wrong, they can simply deflect blame onto cultural factors, or players acting irrationally. Hey, guys – it’s the free marketplace! Nobody acts in a way that you can predict or calculate – in other words, the world is, by your definition, irrational, and will always be that way.
Must I say it? The notion that the general ledger can possibly inform the decision of whether or not to stay in business is pseudo-intellectualism of Cecil B. DeMille proportions (I’m actually hoping that Cameron uses this quote in his teaser on the web site). And that is business intelligence’s fatal flaw – the arrogant premises from which the quants proceed.
There is a legend that, in 1815, in the immediate aftermath of the Battle of Waterloo, Nathan Rothschild learned of its outcome via carrier pigeon. But it’s what he did with that intelligence that fascinates: the morning after the battle, Rothschild took up his usual station at the London Stock Exchange, looking anxious. He immediately began selling off his English holdings, assets that were sound and would provide returns as long as the government that backed them was solid. Instantly the rumor spread – “Wellington lost! Rothschild knows!” – and a sell-off commenced. Once these futures had plummeted in price, Rothschild had his agents quickly buy them back at dramatically reduced prices. He made a killing that day.
Again, note his tactic – he didn’t use his advanced intelligence to buy up English futures, which were sure to increase in value once the Brits knew that Napoleon was no longer a threat. Instead, he behaved as if the English futures were worthless, inspiring panic in those who were watching him for advanced intel. It was their reaction which led to Rothschild’s ability to make far, far more than had he acted on his intelligence directly.
I said that this is a legend, and more than a few historians have rejected the story in its entirety. But it does go to illustrate not only the advanced use of business intelligence, but out-and-out manipulation of the same. Rothschild broke no laws, and would not have even had he pulled this stunt in today’s far more stringent business law environment.
Such may or may not have been the case with Enron’s behavior during the California Energy Crisis of 2000 – 2001. As I discussed in my must-have book, Enron’s advanced use of business intelligence, coupled with their ability to manipulate the same, led to eye-popping profits in a relatively short period of time. Much of the analysis of the crisis places the blame for the difficulties solely on the malfeasance of Enron, but I tend to disagree with those analysts. I believe the clumsy attempts from the California legislature to pass laws to control the way the power companies operated within the state provided the perfect environment for organizations that knew how to handle business intelligence, as well as manipulate it, to thrive. The laws were byzantine in nature, but Enron quickly found formulaic approaches to maximize their profits within that regulatory environment. These formulaic approaches, or “manipulation strategies” as the Californians liked to call them, were essentially cartage schemes that could be invoked almost instantaneously whenever a certain set of parameters – current load, power line availability, prices on the out-of-state market – manifested. And then, well, it was just a matter of watching the profit meter spin at dizzying speeds. Yes, Enron did play fast and loose with the untimely shut-down of plants for “maintenance,” or the priority renting/leasing of major power lines. But I still blame Sacramento for the fiasco – they set up a business intelligence game, and simply couldn’t play at the same level as their private sector counterparts.
The common thread in these two stories is that a certain level of deception was involved. I would venture to say that in virtually every instance of an advantage in business intelligence leading to dramatic managerial success, the element of deception is present. Now, I understand that this makes many people uncomfortable, but they need to understand that the business intelligence component of management science more resembles the game of poker than it does chess. In chess, all of the pieces are visible to both players, as are their possible moves. There is no deception in chess. Ah, but in poker, there’s deception all over the place. The poker player who bets high on good hands and low (or even folds) on all bad hands will not win very much money at the game. There simply have to be times where he bets up losing hands (“bluffing”), or bets down winning hands, for the express purpose of preventing his opponents from recognizing a pattern in his betting behavior and taking advantage of it.
Science, on the other hand, can brook no deceit. By definition, science seeks to test hypotheses in an empirical setting in order to discover truth. In those instances where any form of deception enters into the evaluation of a theory, it renders the science fraudulent, as in the behavior of the University of East Anglia’s Climatic Research Unit in its support of the anthropogenic global warming hypothesis.
Since the pursuit of advanced business intelligence contains an element of deceit, it more closely resembles a game than it does management science, and should be approached on that basis. If only someone out there had written a book that deals with game theory in management – oh, wait, I did.
This story appears in a beloved book I read as a child, Reader’s Digest Treasury for Young Readers:
Yet even Dr. Bell (the person after whom Arthur Conan Doyle modeled Sherlock Holmes) sometimes made mistakes. Luckily, he also had a sense of humor. When people asked him to give examples of his skill as a detective, he liked to tell this story: One day he and his pupils were examining a patient in a hospital bed. “Aren’t you a musician?” Dr. Bell asked him.
“Aye,” admitted the sick man.
“You see, gentlemen, it is quite simple. This man has a disease of the cheek muscles, from too much blowing on wind instruments. We need only ask him, and he will admit it. What musical instrument do you play, my man?”
The man got up on his elbows. “The big drum, doctor!”
I like to remember this story whenever I explore the epistemology of management information streams, since it is so easy to equate perceived project success with factors that may have been entirely incidental to the remembered project’s actual success. I once was working to set up the cost and schedule performance systems on a major project that was headed by a manager who insisted that my team provide a report he termed a “swim lane chart.” What he actually wanted was a PERT chart, sorted by performing organization. I could see the utility – the various performing teams were in a column to the left, and the activity boxes that appeared to their right represented the scope for which they were responsible.
“Okay,” I offered, “we can do it, but we’ll need to start with a Work Breakdown Structure. Then, we’ll need the Organizational Breakdown Structure, so that we can cross-reference them into a Responsibility/Accountability Matrix, or RAM. Once we have that, we can load the information into the critical path software, and generate your report.”
“I don’t want to do any of that stuff. I just want a swim lane chart.”
“I know you want the chart, but we can’t get there without the RAM.”
So he had me removed from the project.
This guy just knew that his so-called swim lane chart was the key to managing the project to a successful outcome, and wasn’t going to listen to any of that project controls nonsense about how to get there.
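For the curious, the cross-reference I was asking for is not exotic. Here is a minimal sketch, with invented activities and organizations – grouping WBS activities by their OBS owner is precisely what produces the “swim lanes”:

```python
# Sketch of the WBS/OBS cross-reference behind a RAM (all names invented).
# Each WBS activity is assigned to an OBS element; grouping activities by
# organization is what yields the "swim lanes."
wbs = {  # activity: responsible organization
    "1.1 requirements": "Systems Engineering",
    "1.2 design":       "Systems Engineering",
    "2.1 coding":       "Software",
    "2.2 unit test":    "Software",
    "3.1 integration":  "Test & Evaluation",
}

# RAM: organization -> list of activities it is accountable for.
ram = {}
for activity, org in wbs.items():
    ram.setdefault(org, []).append(activity)

for org, activities in ram.items():   # one "swim lane" per organization
    print(f"{org}: {', '.join(activities)}")
```

No RAM, no lanes – the chart he wanted is literally this mapping with the schedule dates laid across it, which is why there was no shortcut around the WBS and OBS.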
I’m sure my readers have many similar stories. Some manager or exec has a particular project management artifact that serves as their security blanket, and will brook no challenge to its efficacy. Their attachment to these talismans can reach a zeal rarely seen outside of religious institutions or sports bars. And, when these superfluous information streams become institutionalized, the amount of managerial folly they generate can become extraordinary. Just look at the number of U.S. Government agencies that insist that projects have a risk management system.
It doesn’t stop with the risk management crowd, either – they’re just the more irksome of the bunch. Virtually every attempt at quantitative analysis in business or management requires the use of some subjective variable – a variable that cannot be estimated precisely enough to support the decision thresholds in play. For example, in comparing the Return on Investment (ROI) among competing prospective projects, the anticipated rate of return can rarely be estimated to better than double-digit error; however, the decision on which projects to pursue is almost always made on single-digit margins. It’s as if the organization were selecting a subcontractor based on an estimate of which one has the fewer Capricorns on staff, and attacking anyone who doesn’t acknowledge that as an appropriate basis for the decision as hopelessly ignorant.
Of course, there are ways of objectively determining the validity (or lack thereof) of the competing management information streams, but how to do so would take an entire book, which, fortunately, I happened to write.