I know most of you will loathe the thought of ‘selling’ PMO. Perhaps a few of you even feel disgusted just hearing the word ‘selling’. “Yuck! It makes me feel like a filthy little pimp,” as some would say.
No doubt about that. No one will blame you for feeling that way, as the word ‘selling’ itself has often been misrepresented. Most of us link ‘selling’ to the disdainful ‘dog and pony show’ that a typical salesman performs. This is where the problem lies. To me, it is about having a more effective way of communicating the value of a PMO, not the salesman type of ‘selling’. It is just a word, and you may call it anything you like as long as the objective is clear.
Up to this point, a few of you would probably start to argue that a PMO should be driven by business needs; therefore, it is absolutely meaningless to try to ‘sell’ the PMO. Well, that again depends on what the word ‘sell’ represents. Now, if we ignore the word ‘sell’ for a moment and just focus on the objectives of creating awareness and helping others better understand the services that a PMO provides, does this whole thing make a little more sense?
One problem I have observed is that some people tend to believe that as long as the PMO continues to deliver valued services that align with and satisfy business needs, the value of the PMO will be implied and apparent to the business. In other words, this assumes that the objective can be justified by the outcome. It is as good as saying: “Work hard for the company and you will get rewarded.” How many of you actually believe in this statement? If it were true, the phrase ‘unsung hero’ would never have been coined. In fact, the statement is incomplete. The full statement should be: “Work hard for the company and ensure that you are recognized, and then you will get rewarded.” Similarly, some project managers like to assume that as long as the project is approved and endorsed by management, their stakeholders will naturally understand its objective and value. This is definitely an invalid assumption. People still question why we are running the project. We all know that. Yet, we keep repeating this assumption over and over again. This reminds me of the definition of ‘acquired taste’, which aptly describes the dilemma. According to Wikipedia,
“An acquired taste often refers to an appreciation for a food or beverage that is unlikely to be enjoyed by a person who has not had substantial exposure to it, usually because of some unfamiliar aspect of the food or beverage, including a strong or strange odor, taste, or appearance.”
Although the above definition refers to food or beverage, it fits the PMO scenario well too. On one hand, we have proponents suggesting that a PMO should justify the value of its existence; if it cannot, it should not have been sanctioned in the first place. Ergo, it is a waste of time trying to ‘sell’ the value of the PMO. On the other hand, we have opponents arguing that, analogous to the concept of ‘acquired taste’, we should not assume that everyone readily understands and appreciates the value of a PMO. A PMO is not like the traditional departments (e.g. Sales, Finance and Human Resources). This is not because the PMO itself is so different, but because the three-letter acronym ‘PMO’ is still rather new to many people. How many of you have been repeatedly asked by colleagues what that three-letter acronym stands for and what your department or team actually does? This is a real-life problem we are facing.
I do not intend to draw any conclusion here. Instead, my aim is to highlight the much-overlooked problem of how little most people know about the PMO and its value. This is not about justifying the value of the PMO. In fact, in most cases the problem lies in poorly managed communication of that value. To ‘sell’ or not to ‘sell’ is the question. Yet, whether we ‘know’ or do not ‘know’ the PMO’s value is the actual problem. What do you think?
Most of us are familiar with the conventional risk management methods and models. The often-cited risk identification, risk analysis, risk assessment, risk score, risk matrix and the quirky name of FMEA all sound so close to home. We have tinkered and struggled with them in our projects. What is not so clear is how to look at risks across several interrelated projects or within the context of a larger program. Things become more complicated when a risk may depend on or affect other risks. These relationships introduce additional dynamics that may change the way we manage risk, especially in the areas of risk identification and risk assessment.
Let’s first take a look at how it will affect the way we conduct risk identification. When risks are interrelated with one another, we can no longer deal with them individually as standalone records. We have to manage them collectively taking the relationships into consideration. Therefore, during the risk identification process, apart from identifying risks that will have an impact on the current project, we also need to determine if a specific risk has any dependency or influence on other risks. One option that we have is to tap into the knowledge and experience of the subject matter experts. Running a risk workshop involving the subject matter experts in the initial stage of the project or program may help to derive the first cut of the list of risks and the associated dependencies. Alternatively, we may also utilize the task dependencies as a convenient source to provide some references to aid in the identification of risk dependencies. Not only do we need to capture these relationships, we also need to provide some means to track the information for risk analysis and future reference (e.g. we may archive this knowledge into a risk bank or library). A quick solution is to add a dependencies field in the risk log, just like what we usually do for task dependencies in the Gantt chart, to keep track of relationships among the risks within a project and across multiple interrelated projects. We may also extend this further by plotting the risk dependencies on a map, similar to that in the Benefits Dependency Network, to allow us to visualize and analyze the dependencies holistically.
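The idea of a risk log with a dependencies field, tracked across interrelated projects, can be sketched in code. This is a minimal illustration only; the risk IDs, descriptions and field names below are all hypothetical, not taken from any particular tool.

```python
# A minimal sketch of a risk log entry carrying a dependencies field,
# mirroring how task dependencies are kept in a Gantt chart.
# All IDs, descriptions and field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    risk_id: str
    description: str
    depends_on: List[str] = field(default_factory=list)  # IDs of dependee risks

# A small risk log spanning two interrelated projects
risk_log = [
    Risk("P1-R1", "Vendor delivery slips"),
    Risk("P1-R2", "Integration testing delayed", depends_on=["P1-R1"]),
    Risk("P2-R1", "Shared resource unavailable", depends_on=["P1-R2"]),
]

# List each depender with its dependees, the raw material for a dependency map
for risk in risk_log:
    if risk.depends_on:
        print(f"{risk.risk_id} depends on {', '.join(risk.depends_on)}")
```

From a structure like this, the dependencies could later be plotted as a network for holistic analysis, or archived into a risk bank for future reference.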
Next, let’s examine how risk assessment is affected by these relationships. There are two parameters that we often use in risk assessment to determine the importance of a particular risk and the amount of time and effort we should spend on it: the ‘Probability’, the likelihood of the risk occurring, and the ‘Impact’, the consequences if the risk does occur. When we look at a risk by itself in isolation, we are actually assessing the absolute values of these two parameters. This is what we have been practicing so far. However, these absolute values become less meaningful in a more complex environment involving dependency relationships. To be more accurate in the assessment, we need to take the compounded effect of the risk dependencies into consideration. Now, the question is: how should all this work?
For those of you who are familiar with Einstein’s Theory of Relativity, you should not be new to the concept of ‘Frame of Reference’. The theory states that – “There is no such thing as an absolute frame of reference.” Following this idea, we may postulate that we can never use the same absolute frame of reference to holistically assess the parameters of the risks with dependency relationships within a complex environment. Therefore, we need to introduce two new parameters ‘Relative Probability’ (RP) and ‘Relative Impact’ (RI) and rename the original parameters to ‘Absolute Probability’ (AP) and ‘Absolute Impact’ (AI) to provide better clarity. In addition, we also have to determine how each of these two factors will be affected by the dependencies and how they should be assessed.
Now, let’s first take a look at the ‘Relative Impact’ parameter, which is the easier of the two. To obtain the RI of the risk being assessed, we simply sum the RI of the dependee (a risk that is depended on by another risk) and the AI of the depender (a risk that depends on another risk). For example, if risk B with an AI of $1,000 has a dependency on risk A with an RI of $2,500, then the RI of risk B will be the sum of the RI of risk A and the AI of risk B, or $3,500 (i.e. $2,500+$1,000) in this case. In a multiple-dependencies situation (a risk that depends on more than one risk), the ‘Relative Impact’ parameter should always be calculated based on the worst-case scenario, i.e. the maximum combined impact of all the related risks. For example, if risk C with an AI of $1,000 has dependencies on risk A with an RI of $2,500 and risk B with an RI of $500, then the RI of risk C will be $4,000 (i.e. $2,500+$500+$1,000). In general, we may express this calculation in the formula shown below,
RI of depender = AI of depender + ∑ (RI of all dependees) --- (1)
If the above sounds relatively difficult to digest, then the method for calculating the ‘Relative Probability’ parameter is a little trickier, as it requires a good understanding of the probability we were taught in old-school math classes. In a one-to-one dependency relationship, the RP of the risk being assessed (we may think of this as a joint probability) is the product of the RP of the dependee and the AP of the depender. For example, if risk B with an AP of 50% has a dependency on risk A with an RP of 80%, then the RP of risk B will be the product of the RP of risk A and the AP of risk B, or 40% (i.e. 80%*50%) in this case. Unfortunately, the calculation in a multiple-dependencies situation is not so straightforward. From those old-school math classes, we learned that a combined probability involves all the possible outcomes. In other words, the ‘Relative Probability’ parameter, which is a combined probability, should be calculated by summing the probabilities of each individual one-to-one dependency relationship. For example, if risk C with an AP of 50% has dependencies on risk A with an RP of 80% and risk B with an RP of 10%, then the RP of risk C will be the sum of the probability of the ‘risk C depends on risk A’ dependency and the probability of the ‘risk C depends on risk B’ dependency. The actual calculation is (80%*50%)+(10%*50%), which gives 45% as the RP of risk C. An easier way to calculate this is to take the total sum of the RP of all the dependees and multiply the result by the AP of the depender. This can be clearly expressed in the formula shown below,
RP of depender = AP of depender * ∑ (RP of all dependees) --- (2)
If you find the calculations described above confusing, all you need to do is remember the two formulas (1) and (2). One point to note: with this approach, we do not have to worry about any dependency relationship beyond the immediate dependee, since the relative value of the parameter has already accounted for everything upstream. This is the beauty of being ‘relative’.
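Formulas (1) and (2) lend themselves to a small recursive sketch. The figures below follow the multiple-dependency example in the text (risk C with an AI of $1,000 and AP of 50%, depending on risk A of $2,500/80% and risk B of $500/10%); the dictionary layout and function names are my own illustration, not a prescribed tool.

```python
# A sketch of formulas (1) and (2): Relative Impact and Relative Probability,
# computed recursively through the dependency chain. For a risk with no
# dependees, the relative value simply equals the absolute value.

risks = {
    # risk: (absolute impact in $, absolute probability, list of dependees)
    "A": (2500, 0.80, []),
    "B": (500, 0.10, []),
    "C": (1000, 0.50, ["A", "B"]),
}

def relative_impact(risk_id):
    ai, _, dependees = risks[risk_id]
    # (1) RI of depender = AI of depender + sum of RI of all dependees
    return ai + sum(relative_impact(d) for d in dependees)

def relative_probability(risk_id):
    _, ap, dependees = risks[risk_id]
    if not dependees:
        return ap
    # (2) RP of depender = AP of depender * sum of RP of all dependees
    return ap * sum(relative_probability(d) for d in dependees)

print(relative_impact("C"))                 # 4000, i.e. $2,500 + $500 + $1,000
print(round(relative_probability("C"), 2))  # 0.45, i.e. 0.50 * (0.80 + 0.10)
```

Because each call only looks one level down at its immediate dependees, the recursion naturally carries everything upstream in the relative values, which is the point made above.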
I pored over the beautiful works of quantum theory a few years back while rummaging through the mystical land of quantum mechanics. I bumped into Schrödinger’s cat along the way before I caught up with a precarious bloke by the name of Heisenberg. The idea of an impossible dead-and-alive cat taught me that a project cannot be both on schedule and delayed at the same time, but it was the madness of uncertainty in Heisenberg’s world that set me off in pursuit of a similar problem we have in Earned Value Management (EVM). The Heisenberg Uncertainty Principle states that there is a fundamental limit on the accuracy with which certain pairs of physical properties of a particle can be simultaneously known: the more precisely one property is measured, the less precisely the other can be determined.
As most of us know, EVM is a project management technique for measuring project performance and progress in an objective manner, combining measurements of scope, schedule and cost in a single integrated system in order to forecast project performance problems. It works with three key project metrics: Planned Value (PV, the budgeted cost for work scheduled), Earned Value (EV, the budgeted cost for work performed) and Actual Value (AV, the actual cost for work performed). EVM essentially projects and converts everything from ‘what you need to do’ to ‘how much time you need’ into dollars and cents so that it can be easily tracked and monitored (your finance department would be delighted to hear this).
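From those three metrics, the standard EVM indicators fall out as simple arithmetic. The figures below are purely illustrative; note that AV is often called Actual Cost (AC) in other EVM literature.

```python
# A minimal sketch of the standard EVM indicators derived from PV, EV and AV.
# All dollar figures are illustrative.

pv = 10000.0  # Planned Value: budgeted cost for work scheduled to date
ev = 8000.0   # Earned Value: budgeted cost for work actually performed
av = 9000.0   # Actual Value: actual cost for work performed

sv = ev - pv    # Schedule Variance: negative means behind schedule
spi = ev / pv   # Schedule Performance Index: < 1 means behind schedule
cv = ev - av    # Cost Variance: negative means over budget
cpi = ev / av   # Cost Performance Index: < 1 means over budget

print(sv, round(spi, 2), cv, round(cpi, 2))  # -2000.0 0.8 -1000.0 0.89
```

Everything here is expressed in cost, which is exactly the single-lens view that the next paragraph takes issue with.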
An ingenious touch, isn’t it? But wait a second. Didn’t the uncertainty principle tell us that the more precisely one property is measured, the less precisely the other can be determined? Following this argument, shouldn’t we expect that the more we try to assess a project from a cost perspective, the more we will lose sight of it from the scope and schedule perspectives? Indeed, this was exactly what Walt Lipke intended to address when he introduced Earned Schedule (ES), an extension to EVM, in his seminal 2003 article “Schedule is Different”. According to Walt, there is a fundamental problem with EVM: “At the completion of a project which is behind schedule, Schedule Variance (SV) is equal to zero, and the Schedule Performance Index (SPI) equals unity. We know the project completed late, yet the indicator values say the project has had perfect schedule performance.” What an exemplary Schrödinger’s cat paradox we are looking at. The main reason behind this quirky behavior is that, unlike AV, EV must equal PV at the completion of the project, making it impossible to tell from these indicators whether the project finished behind schedule. In fact, for this reason, both SV and SPI become less meaningful as we progress towards the end of the project. To complement this shortcoming of EVM, Walt proposes a modified way to compute SV and SPI in the Earned Schedule approach: project EV from cost into ES as a time value and compute everything in time units to obtain a new pair of time-based indicators, SV(t) and SPI(t). For those interested in the details of this approach, you may get everything you need from the Earned Schedule site.
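The core move in Earned Schedule can be sketched as follows: find the point in time at which the plan called for the EV earned so far, interpolating within the partial period. This is a simplified illustration under assumed monthly figures, not a full reproduction of Lipke’s method; consult the Earned Schedule material for the authoritative treatment.

```python
# A sketch of the Earned Schedule idea: project EV onto the cumulative PV
# curve to find when that amount of work was scheduled to be done.
# The monthly cumulative PV figures are illustrative.

pv_by_month = [2000, 5000, 9000, 12000]  # cumulative PV at end of months 1-4

def earned_schedule(ev, pv_curve):
    """Time (in months) at which the plan called for the current EV."""
    # Count the whole months whose cumulative PV is at or below EV
    c = 0
    while c < len(pv_curve) and pv_curve[c] <= ev:
        c += 1
    if c == len(pv_curve):
        return float(c)
    prev = pv_curve[c - 1] if c > 0 else 0
    # Linear interpolation within the partial month
    return c + (ev - prev) / (pv_curve[c] - prev)

ev, actual_time = 7000.0, 3.0              # EV earned by the end of month 3
es = earned_schedule(ev, pv_by_month)      # 2.5 months' worth of work earned
sv_t = es - actual_time                    # SV(t): -0.5 months behind schedule
spi_t = es / actual_time                   # SPI(t): about 0.83
```

Because ES, SV(t) and SPI(t) are all in time units, they keep their meaning at project completion, which is precisely the failure mode of the cost-based SV and SPI quoted above.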
So now we have all these models that serve as good indicators of the health of a project. The key question is: “Do they make us better at predicting project performance?” As much as we would like to encapsulate the whole kit and caboodle in the models, deep down inside, we know that they can never be 100% perfect. Project management is all about change, and there are far too many factors that may affect the success of a project. The concern, therefore, is whether we entrench ourselves so deep in the nitty-gritty of the quantitative models that we miss the whole picture of what project management should be. This is exactly the concern Jacques Olivier had about the global financial crisis in 2008 when he commented, “The crisis is due in part to all the people who know how to count marbles but have no idea what those marbles mean.” One final word: at the end of the day, these are just models. We are as blind as what these models can show us.
Many years ago, while I was still a project manager, my PMO Head invited an external consultant to take an inquisitive look into our internal ways of working within the PMO team. The main objective of the exercise was to identify existing problems and propose recommended areas for improvement. The consultant took a few days speaking with people, analyzing the existing framework, organizational process assets, best practices and templates and came to a conclusion. Below was what the consultant said to the PMO Head.
Consultant: “To be frank, you do not need any nerve-racking change. All you have to do is follow and ensure people follow. You already have everything you need.”
What the consultant actually implied was that the set of framework, templates and best practices that we had back then, though not perfect, were sufficient to get our day-to-day jobs done efficiently and effectively. The problems were not with the tools and processes; the problems came from people who refused to adhere to the established rules and standards. In other words, we were weak in both governance and compliance.
Dumbfounded by the consultant's conclusion, the PMO Head was obviously not convinced by the recommendation. He insisted that the consultant was second-rate and had failed to diagnose the problems and come up with appropriate solutions for the limping PMO. And so, the quest to search for an elixir continued…
I did not follow the quest through, as I had since moved on to another organization. However, that incident reminds me of a typical scenario between a doctor and his patient. Whenever we fail to see improvement in our illness, the first thing we do is blame the doctor. We question the capability of the doctor, and we even doubt the effectiveness of the prescribed medicine. What we have not done, and probably will never do, is ask ourselves if we have taken the doctor’s advice seriously: to rest well, drink more water and keep to the schedule for the pills. Just like the incident above, the problem is not that we have a lousy doctor; the problem is that we have a stubborn patient who refuses to listen and follow. In other words, what is the point of continually improving the processes and tools without changing the mindset of the people? Problems like this are quite common in matrix-based organizations where project managers report directly to the business and not to the PMO. Without the right authority, it will not be easy for the PMO to enforce compliance within and across project teams. Unfortunately, this is a common constraint that most PMOs have to live with.
Have you been a good patient lately?
We are all familiar with the well-known ‘As-Is’ and ‘To-Be’ model. Whenever we talk about change in an organization, we will first assess the ‘As-Is’ situation and compare it with the ‘To-Be’ state that we desire to become in order to define the gap needed to reach the end state. We usually refer to this process as ‘Gap Analysis’. This can be easily visualized in the diagram below.
One major output from the gap analysis is the ‘Must-Do’ actions in order to transform the organization from the current ‘As-Is’ state to the future ‘To-Be’ state. This is the main source for all the changes required that will eventually turn into a list of tactical action plans. Projects targeted at changing processes, tools and people will be spun off to implement the changes so as to achieve the tactical objectives. The diagram below shows the relationships of all these put together.
Vision communicated, funds approved, project teams formed, and we are all set to embark on our audacious journey of the ‘Great Revolution’. So far so good... Everybody is excited about the future and the challenging projects ahead. But wait! If you pause for a second and take a step back to think for a while, you will realize that something is missing in the diagram above. Can you tell what it is?
Sometimes, we are just too eager to jump to conclusions on ‘what’ needs to be done to achieve what we want to be. More often than not, we tend to neglect our current capability and what we actually ‘Can-Do’. The consequence is that we usually end up missing a lot of targets by aiming too far and too high. One good example is the New Year’s resolutions we set every year. “I think the problem people have is that they often set pretty unrealistic goals,” said Joseph Ferrari, a professor of psychology at DePaul University, on why most people fail to stick to their New Year’s resolutions. If someone were to research the factors behind failed New Year’s resolutions versus failed projects, we might discover some interesting correlations between the two. According to Ferrari, people should be realistic and focus on small wins and successes. Paraphrasing, this means we should set targets that are both realistic and achievable in smaller chunks. What does ‘achievable’ mean? To be achievable, the target we set must be something we can reach within our current capability, and this is the implicit assumption we often miss: we assume that we are able to achieve the target we set without first assessing our current capability. This is the exact missing piece in the diagram above: what we ‘Can-Do’.
No doubt, we know and appreciate the value of “Aim for the moon. If you miss, you may hit a star.” But this means we would miss our targets most of the time, which would lead to undue demoralization and frustration. The situation usually gets worse if we repeatedly experience misalignment between expectation (what we expect to achieve) and capability (what we are capable of achieving). Therefore, it is extremely important to ensure that our expectation and capability are always in sync. While defining what we ‘Must-Do’ in order to reach what we want ‘To-Be’, we should not forget to take into consideration what we ‘Can-Do’ in the present ‘As-Is’ state. Putting all these together, we may arrive at an improved model, as shown in the diagram below.