In last week’s blog, I reviewed an organization where I was employed that was positively obsessed with the concept of project sponsorship, to the exclusion of developing its personnel or actually executing the work it had in hand to the best of its ability. Instead, this organization doomed itself by focusing its primary resources on bringing in new work, with all else receiving short shrift. The organizational pathologies that developed because of this skewed focus were fascinating to watch unfold, even though they caused the staff much unneeded pain and anguish.
My next serious gig was with a company that had the exact opposite issue: it really didn’t care much about project sponsorship, and that perspective gave rise to an entirely different set of organizational pathologies which were similarly fascinating to watch unfold. This phase of my employment was with a company that claimed status as a minority-owned, disadvantaged firm under Section 8(a) of the Small Business Act, which pretty much required that a certain percentage of the government’s procurement budget be sent to companies so classified. Rather than have to spend considerable resources anticipating requests for proposal (RFPs), bidding on them, and breathlessly awaiting the results of the procurement cycle, this company’s contracts were often awarded with a minimum of fanfare or strife. Life at this end of the spectrum was a trip.
It was, as stated, a small company, and its owners tended to treat the employees like family. This was a very strange experience for me, having come from a beltway bandit corporate giant that only gave lip service (if that) to its perception of the value of any individual employee. These owners were really neat people, and the staff had a far, far more relaxed attitude as they performed their projects’ scope. At the beltway bandit, virtually every member of the staff was constantly aware, if not positively petrified, of the possibility of imminent layoff. At this 8a firm, virtually every member of the staff hardly gave that possibility a second thought. Though, at first, I was somewhat skeptical about the 8a’s ability to perform at an optimal level, I soon began to realize that even I could perform much better when I wasn’t spending so much time looking over my shoulder, fearing a manager sneaking up behind me with a pink slip in-hand.
Eventually, however, signs of trouble in paradise began to become apparent. My new company/family was particularly poor at projecting revenue, since that particular information stream lies outside the capabilities of both the general ledger and even a project management information system, at least for work that was not already in-house and baselined. With the strategic management function running on auto-pilot, there was no real push to develop the information systems that could have enabled informed decision-making in that arena – meaning that the project sponsorship function was essentially atrophying away.
As the date for “graduating” from the 8a program approached, when project work would cease simply walking in the front door, efforts were undertaken to get a handle on the project sponsorship capability. These efforts resulted in what I nicknamed the “Messiah Complex,” where new vice presidents would be hired from outside the company in the (vain) hope that they would bring new business in. These new veeps would be given charge over significant chunks of the company, and set loose. When the new project work was late in coming (or never came), another new veep would be brought in… but the old ones would not be released. It wasn’t long before the organization became impossibly top-heavy, even as the new project work remained elusive.
This company attempted an IPO, but was hopelessly in debt by the time the lawyers and accountants were ready to pull the trigger. It was purchased, like the beltway bandit, by a competitor and, like the beltway bandit, was reduced to a shell of its former self.
So, given these two extreme cases, is there a method for identifying the appropriate level of management attention to the project sponsorship role? Absolutely! The best approach is…
...look at that! Out of space again. I’ll have to take this up next week…
To send a little sunshine Cameron’s way, the selection of the ProjectManagement.com theme for September, Project Sponsorship, was brilliant. Almost all of what’s written about project management begins when the contract is in the door. But a ton of that management stuff goes on before that blessed event, and it’s high time we insightful bloggers turned our gaze upon it.
A little history – my first gig out of college was working for a beltway bandit, assigned as a technician at an Air Force Weapons Laboratory radiation effects test site (I assure you, the title is far sexier than the actual work was). This organization (the beltway bandit, not the lab) was all about project sponsorship, and to excess. In fact, its over-the-top focus on project sponsorship led to its eventual downfall.
I first got a taste of this skewed perspective when I was making the transition from hourly-pay technician to staff member. All new staff members were required to go through a ten-hour training session, over three evenings, after normal working hours and on our own time. I recall the high-point of this training, where we were presented with a case study involving two managers, nicknamed “The Craftsman” and “The Wizard.”
The Craftsman’s story was this: he had been the senior engineer on a high-dollar, high-profile Department of Defense project, which was coming to a close. The project was coming in on-time, and under-budget, with all of its technical goals achieved in satisfactory fashion. The customer had written glowing letters of commendation to the project team. Everything seemed to be going swimmingly, except for one little detail: the project engineer had failed to procure any follow-on work, not with the program, and not with the customer. For this reason, The Craftsman was being held up as an abject failure.
Then came the story of The Wizard (I swear I am not making up these names – they’re actually the ones this company used in the training). The Wizard was also a senior engineer on a project, but this project was in trouble. It was late on several key milestones, and it had been running a sub-par Cost Performance Index for some time. Everything appeared to be going hideously, except for one little detail: this project engineer had already procured several follow-on task orders with the program.
The instructors were making a brave attempt at depicting each scenario as equally troublesome, but the truth came out when one of them was pressed to compare the two. “I can tell you,” the instructor stated ruefully, “we have a lot more Craftsmen than Wizards.”
And, indeed, upon becoming a staff member, it was quite obvious that the organization’s main focus was on attracting new business, with performing at a high level on the existing contract backlog coming in at a distant second place (if that).
Here was the basic problem with this approach: since the extra effort at marketing for new business had to take place after the nominal 40-hour work week, and since the staff was forbidden from charging for it, all of this extra effort was free to the organization, but burdensome (to say the least) to the staff. The unstated but very real expectation in place was that staffers would spend 50, 60, 70 or more hours per week at work in order to stay off the layoff list. Predictably enough, while the proposal backlog expanded, the performance on the existing work fell off. The truly talented members of the staff could readily obtain employment under better terms elsewhere, while the mediocre (or poor) employees were left behind, spending more and more of their evaporating free time in an attempt to prevent the inevitable.
The company, as big and successful as it had been early on, entered into an elongated decline cycle, and was sold to one of its rivals, one that, presumably, had a more realistic perspective on the relationship between project sponsorship and project management. The company went under, and the direct cause was its “success” in the project sponsorship arena.
Next Up: the other side of the spectrum. I will discuss my next serious gig, where project sponsorship was neglected, and the entirely predictable outcome.
I haven’t been to any major project management conferences for a while. It just seems that the vast majority of the papers presented fall into one of three general categories:
· There are the traditionalists, whose presentations cover the basics of Earned Value or Critical Path Methodologies, but do so in such a way as to pretend that these techniques haven’t been around for generations. Their content is so chock-full of eat-your-peas-style hectoring that participants should receive double PDUs for simply enduring them.
· Then there’s the Gaussian Curve crowd, mostly pushing some marginally-supported risk management theory, but sometimes schedulers who want to perform some inchoate statistical analysis on the amount of free float in certain phases of a project, blah blah blah, as if injecting massive levels of data elements into their analysis somehow validates their underlying hypotheses (hint: it doesn’t).
· Finally, there are those whom I wish to review in this post: the participants in a successful project, who wish to relay to the hoi polloi the reasons why their project was so amazing.
The first mistake these people make is to try and conflate the project’s technical success with its managerial success. The two are not necessarily synonymous. The Sydney Opera House may be a beautiful, one-of-a-kind, instantly recognizable landmark, but its project management was a train wreck. While there’s something about gee-whiz projects that seems to impart to all of their participants the aura of success, the opposite is also true: the Titanic’s launch was actually on-time (although her fitting out was delayed due to late changes in design), though I doubt many White Star Line employees were eager to point out their affiliation with her after 1912.
So, back to the convention centers’ and hotels’ conference rooms. Whenever you see the words “lessons learned” in a given presentation’s title or synopsis, and the project is not a known PM disaster, you’re being sold a bill of goods. Oh, I’m not saying the project being showcased wasn’t cool, or didn’t come in on-time, on-budget. I’m merely suggesting that the presenters will invariably have, shall we say, an interesting way of connecting the weighted milestone dots in such a way as to reflect on their virtue, or skill, or, most perniciously, an adaptation of a relatively unknown aspect of project management that made all the difference in the world, don’t you know.
For example, had ProjectManagement.com been around in 1939, and had it held a convention (“What does ‘dot com’ mean?” “Beats me.”), the following could have been on the seminar’s docket:
Wednesday, 9:00, in the King George VI conference room, Project Analyst Naughta Bitatruth will discuss the successes of Nazi Germany’s zeppelin program as a function of superior project team communications. He will also discuss the negative perceptions being attached to the program from the recent Hindenburg incident, including:
· The advantages of disposing of the hydrogen gas in such a way that it does not re-enter the atmosphere;
· The additions to the pop culture that have come about due to the zeppelin program, including the introduction of the term “Oh, the humanity!”
Thursday, 12:00 noon, in the Rockefeller conference room, French Executive Advisor Compe’ Letnonsense will present the analysis that shows how superior risk management led to the on-time, on-budget completion of the Panama Canal. Compe’ will cover:
· Use of the single-tier decision-tree analysis method to virtually eliminate the yellow fever and malaria threats to the project.
· How the original effort, which cost $287,000,000, bankrupted the company, and saw the original PM sentenced to jail, should nevertheless be viewed as a risk management success, since a Monte Carlo analysis had shown that outcome to be a distinct possibility!
You’re familiar with the expression, “Success has many fathers, while failure is an orphan.” Well, success also has many optics.
But only a few of them are clear.
I find it fascinating how often one of the most basic distinctions in project management information systems is overlooked, or, even worse, blurred into near-meaninglessness. I’m talking about the difference between feedback and feed-forward management information systems (MISs).
For those of you who are (understandably) coming to this topic for the first time, feedback systems report on what has occurred, based on verifiable information. The general ledger is wholly predicated on the feedback concept, as are several other systems. Advantages to feedback systems include:
· The data is (or ought to be) accurate, with little or no subjectivity present.
· …and, well, that’s about it.
Here are the drawbacks to feedback systems:
· Depending on how long it takes to gather the data, process it into information, and get it to the decision-makers, the information may become so dated as to be only marginally useful.
· There’s also the problem of the point of view of those interpreting the data. Dots connected by one manager into some sort of causality loop may strike another as purely coincidental.
Then there’s the feed-forward system. Such systems seek to anticipate the future, such as the use of polls preceding an election. The advantages of such systems include:
· Obviously, a management information system that can accurately predict the future is pure gold…
· …which is precisely why such systems are so desirable.
These very advantages lead directly to the feed-forward systems’ drawbacks, including:
· Most such systems depend on subjective data to such an extent that they are little better than reading tea leaves, or other forms of divination.
· But, since they are so desirable, promoters of unproven techniques are drawn to design them in droves.
Of course, the very nature of experience (and the function of the hippocampus, if you want to get technical) is to assess facts and past events, organize them into some sort of structure, and flip that structure forward, across the Time Now line in order to derive a useful anticipation of how the future will unfold. While this little thought exercise may be part and parcel of carrying on our lives, it tends to fail as a repeatable, teachable tactic in management science. The future simply cannot be quantified, which is why the best managers are the ones who can think on the balls of their feet, adapting to new and unforeseen circumstances with (mostly) appropriate responses. PMs who stick to their plans, even in the face of dramatically changed circumstances, are usually the ones behind the projects that end in failure, if not disaster.
Which brings us back to the types of information systems that PMs like to use to learn about what’s going on in their projects. Both earned value and critical path methodologies have a remarkable capacity for predicting future performance, based on the reliable aspects of feedback systems. EVM and CPM, when set up correctly, are capable of consistently predicting final project costs and durations within ten points, an accuracy rate that can only be dreamed of by purveyors of the general ledger, or risk analysis tools, for that matter. Since they are predicated on observed, quantified past performance, such systems have the advantages of both the feedback and feed-forward types, without their drawbacks, which is probably the main reason why accountants and risk managers tend to have a natural resentment of project controls analysts. I could possibly contribute to the lessening of this resentment, if I would just stop making fun of accountants and risk managers in this blog.
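To make the feedback-into-feed-forward idea concrete, here is a minimal sketch of the standard earned-value arithmetic: observed (feedback) figures produce the CPI and SPI indices, and the CPI is then flipped forward into an Estimate At Completion. The function name and the sample numbers are my own illustrations, not drawn from any particular system.

```python
# Minimal EVM sketch: compute CPI/SPI from observed (feedback) data,
# then project the cost index forward into an Estimate At Completion.
# All figures below are invented for illustration.

def evm_forecast(bac, bcws, bcwp, acwp):
    """Return (cpi, spi, eac) from standard earned-value inputs.

    bac  -- Budget At Completion
    bcws -- Budgeted Cost of Work Scheduled (planned value)
    bcwp -- Budgeted Cost of Work Performed (earned value)
    acwp -- Actual Cost of Work Performed
    """
    cpi = bcwp / acwp   # cost efficiency to date (pure feedback)
    spi = bcwp / bcws   # schedule efficiency to date (pure feedback)
    eac = bac / cpi     # simple feed-forward cost projection
    return cpi, spi, eac

# Example: a $1,000,000 project, halfway through the plan, with
# $450,000 worth of work earned at an actual cost of $500,000.
cpi, spi, eac = evm_forecast(bac=1_000_000, bcws=500_000,
                             bcwp=450_000, acwp=500_000)
print(f"CPI={cpi:.2f} SPI={spi:.2f} EAC=${eac:,.0f}")
# → CPI=0.90 SPI=0.90 EAC=$1,111,111
```

Note that nothing subjective enters the projection: the EAC is just observed performance extrapolated, which is exactly what gives these systems feed-forward power without the tea-leaf problem.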
I’m totally cool with this month’s ProjectManagement.com theme, that of visual project management. I’m a firm believer that, once the project controls staff has collected the relevant data, processed it into usable information via Earned Value and Critical Path methodologies, they need to deliver that information in not only a timely fashion, but in a format that’s sufficiently intuitive for the decision-makers to use. Let’s face it – for new PMs with little more business acumen than what they accidentally swerved into while they were pursuing their engineering degrees, hitting them with a PERT chart or a Cost Performance Report in Format I in their first project review meeting doesn’t really help them select the most appropriate strategies in pursuing their projects’ objectives.
That having been said, is there an opposite, equally inappropriate extreme? There’s no doubt about it. And Exhibit A has to be the so-called Stop Light Chart.
For most people, a traffic light is a very handy tool for conveying critical information in a short period of time. Red equals stop, green is okay-to-go, and yellow means clear the intersection (unless you’re like the taxi driver who took me from Logan International Airport into downtown Boston, where it means accelerate to the max, even if you are up to ¼ mile from entering the intersection). For professionals who suddenly find themselves in the role of Project Manager (hey, they don’t call it “the accidental profession” for nothin’) without ever having actually taken a management science course, the reduction of critical cost and performance information into one of these three familiar colors on a readily available report is tempting in the extreme. Green indicates good or acceptable performance, yellow means pay attention here, and red is pinned on those projects/tasks that are currently in trouble – easy, right? Well, not so fast…
On what grounds, exactly, are these categories assigned? If you have a task at the reporting level with a positive cost and positive schedule variance, green is clearly called for, as is red for negative-negative. But what if you have a positive cost, negative schedule, but the cost is way positive (>15%), and the schedule is barely negative (<5%)? Is that yellow?
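To show how arbitrary the answer is, here is a sketch of the kind of threshold logic a stoplight chart implies. The cutoffs are invented assumptions of mine; no two organizations agree on what they should be, which is rather the point.

```python
# Sketch of a stoplight classifier driven by cost and schedule
# variance percentages. The warn/bad thresholds are invented for
# illustration; real organizations rarely agree on them.

def stoplight(cv_pct, sv_pct, warn=-5.0, bad=-10.0):
    """Map variance percentages to a color.

    cv_pct, sv_pct -- cost/schedule variance as a percent of plan
                      (positive = favorable, negative = unfavorable)
    """
    worst = min(cv_pct, sv_pct)   # color is driven by the worse of the two
    if worst <= bad:
        return "red"
    if worst <= warn:
        return "yellow"
    return "green"

# The case from the text: cost way positive (+15%), schedule barely
# negative (-4%). Under these thresholds it reads green, but nudge
# the warning cutoff to -3% and the identical project turns yellow.
print(stoplight(15.0, -4.0))             # → green
print(stoplight(15.0, -4.0, warn=-3.0))  # → yellow
```

Same project, same data, different color, and nothing in the chart tells the decision-maker which threshold was in force.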
Here’s my heartburn: by the time the reporting system is dumbed down to one of three or four colors (blue = completed on-time), all sorts of irrelevant information becomes the basis for color determination. Before you know it, the cost determiner is a comparison of budgets and actual costs, which, of course, is irrelevant to real PMs, but not to amateurs. A chilling trend among many project organizations is to perform schedule “analysis” based on whether or not the responsible PMs believe they will attain their milestones on-time. This, dear readers, is not legitimate project management information. It is polling, masquerading as usable cost and schedule performance data.
One of the most important aspects of successful project management – if not the most important aspect – is the ability to identify what the project’s real issues are, and what they aren’t. A project manager’s time is limited, and a performance information system that returns false positives, or masks genuine problems in overly generalized and synthesized formats, virtually invites misdirected energies. Worse, it helps prevent authentic systems from being introduced, since the bogus systems give the appearance of being able to keep the decision-makers apprised of the project’s goings-on.
But, hey, my Boston taxi driver got me to where I was going, apparently without noticing what the traffic signals were telling him (and, seemingly, little information coming in from an accurate visual assessment of the traffic around us), so your project should be okay.