By Lynda Bourne
Ptolemy's world map (source: Wikipedia)
Do modern project managers and their clients rely on their charts and reports too much? We all know that project schedules, cost reports, risk assessments and other reports are produced by sophisticated computer software, these days increasingly enhanced by artificial intelligence. But does all this processing power mean the charts are completely reliable?
The modern world is increasingly reliant on computer systems to direct and control many aspects of life—from self-driving cars, to autonomous warehouses, to the flight control systems in aircraft. But can this reliance on computer systems be translated to project controls information, or do we need a more ancient mindset?
Modern navigators rely on the accuracy of their GPS to know exactly where they are and where they are going. Their autopilots perform better than human navigators, but only because the data being used is precise and validated.
The same level of reliability and accuracy cannot be applied to project controls data. Every estimate is an assessment of what may occur in the future based on what happened in the past. Even when a sophisticated risk model is built, the P80 or P90 result is based on subjective range estimates taken from past events.
The future may unfold within the expected parameters, and it may not. We simply cannot determine the future in advance. While the quality of the project predictions is based on the quality of the data being used in the modelling processes (and the only guaranteed fact is the model will be incorrect), predictions do not control the future. The key question is: How useful are the models in helping navigate the project through to a successful conclusion? [Remember GIGO (garbage in, garbage out)?!]
In days gone by, navigators did not need accurate charts and satnav systems to reach their destinations. The Viking and Polynesian navigators crossed thousands of miles of open ocean to land on small islands using observations of the natural environment and tacit knowledge passed down from earlier generations. They knew that certain seabird species ventured only relatively short distances from land, and how clouds formed and changed over land; this knowledge was augmented by simple technologies.
Fast-forward a few centuries, and the early European navigators (Columbus, Magellan, Drake, Cook and countless others) had steadily improving charts that made navigating easier—but they also knew the best charts available were not accurate. The general shape of the world had been mapped since the time of Ptolemy (circa 150 CE), and as better information became available, better maps and charts were created. But charts are still being improved well into the 21st century.
So how did people navigate the globe without accurate maps and charts? I suggest there were four core elements in the approach, all of which can be applied to modern project management:
To move from assuming controls information is correct, to seeing it as a useful guide that can be improved as better knowledge becomes available, requires a paradigm shift in thinking that sits comfortably alongside many of the concepts of agile.
The future is inherently uncertain and we can learn a lot from the way early navigators used imprecise charts to sail the oceans. Navigating the globe in past centuries and leading a project to a successful conclusion are both risky endeavours; this fact needs to be accepted, and the risks minimized by using the best available charts—while being aware of their limitations.
What do you think?
By Dr. Lynda Bourne
The generally accepted way of assessing progress on a project, and predicting its completion, is to use a critical path method (CPM) schedule. However, the CPM paradigm does not work across the wide range of projects where there is no predetermined sequence of working that must be followed. There may be a high-level “road map” outlining the desired route to completion and/or specific constraints on the sequencing of parts of the work, but in most agile projects, the people doing the work have a high degree of flexibility in choosing the way most of the work is accomplished.
The focus of this post is to offer a practical solution to the challenge of assessing progress, and calculating the likely completion date in agile projects.
WPM as an Alternative to ES and CPM
The function of WPM is to assess progress and calculate a predicted completion date in a consistent, repeatable, and defensible way by comparing the amount of work achieved at a point in time with the amount of work planned to have been achieved at the same point in time. Then based on this data, you calculate an expected completion date.
The Theoretical Basis of WPM
Using the planned and actual work data, the work performance measures are calculated as follows:
- Earned schedule (ES): the point in time at which the work actually complete (WA) was planned to have been complete
- Time variance: ES − TN (time now)
- Work performance index: WPI = ES / TN
- Expected completion: EC = original duration / WPI
Applying WPM to a Project Using Scrum
Assume a 20-week project delivering 86 story points, with two teams working two-week sprints, and the first two weeks and last two weeks of the project set aside for start-up and close-out. This leaves 16 weeks for productive work; therefore, the first stories should be delivered at the end of the first productive sprint, Week 4, and all stories by the end of Week 18.
This means the rate of planned production between the start of Week 3 and the end of Week 18 is 86/16 = 5.375 story points per week. Based on these assumptions, at the end of Week 4 (two weeks of production), we can expect 10+ story points to be complete, and at the end of Week 18, all 86 story points. The rest of the planned distribution is simply a straight line between these two points.
We know sprints will not take exactly two weeks every time (some will overrun, and occasionally some will finish early), and we also know the number of story points generated in each sprint will vary. But on average, if the two sprint teams together are not completing a bit over 5.3 story points per week, every week, the project will finish late.
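The planned production line above can be sketched in a few lines of code. This is a minimal illustration, assuming (per the example) 86 story points delivered over 16 productive weeks running from the start of Week 3 to the end of Week 18; the function name and structure are mine, not part of any WPM tool.

```python
# Planned production line for the worked example (assumed figures from
# the text: 86 story points over 16 productive weeks, Weeks 3-18).
TOTAL_POINTS = 86
PRODUCTION_WEEKS = 16
PRODUCTION_START = 2  # production begins after the end of Week 2

rate = TOTAL_POINTS / PRODUCTION_WEEKS  # 5.375 story points per week

def planned_points(week_end: int) -> float:
    """Cumulative story points planned to be complete at the end of a week."""
    productive = min(max(week_end - PRODUCTION_START, 0), PRODUCTION_WEEKS)
    return rate * productive

print(planned_points(4))   # 10.75 -> the "10+" expected at the end of Week 4
print(planned_points(18))  # 86.0 -> all stories complete
```

Clamping the productive weeks between 0 and 16 keeps the line flat during the start-up and close-out periods.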
Once this basic rate of production has been determined for the project, WPM measures the actual work delivered (WA), shows the time variance at time now (TN), and uses this information to predict the expected completion (EC).
For example, at the end of Week 8, three sprints should have been completed by both teams, and we expect 30 story points to be complete—but only 23 have been delivered. Velocity calculations will indicate that more sprints will be needed, and the burndown chart will show the work is behind plan. But what does this mean from a time perspective?
A look at the planned rate of production will show 23 story points should have been finished during Week 7 (the actual fraction is 7.3). Therefore, the work is 0.7 weeks (3.5 working days) late. The work performance index (WPI) is 0.9125.
Dividing the original duration (20 weeks) by the WPI suggests the revised duration for the project is 21.9178 weeks; the variance at completion is -1.9178 weeks, or 13.4 calendar days late.
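These calculations can be reproduced directly. The sketch below takes the earned-schedule value of 7.3 weeks quoted above as given and derives the remaining figures from the WPI formulas; the variable names are illustrative only.

```python
# WPM time calculations for the worked example above.
original_duration = 20.0  # planned project duration in weeks
time_now = 8.0            # status date: end of Week 8
earned_weeks = 7.3        # week by which the 23 delivered points were planned

wpi = earned_weeks / time_now                # work performance index
time_variance = earned_weeks - time_now      # -0.7 weeks behind plan
expected_duration = original_duration / wpi  # revised duration estimate
variance_at_completion = original_duration - expected_duration

print(wpi)                                    # 0.9125
print(round(expected_duration, 4))            # 21.9178 weeks
print(round(-variance_at_completion * 7, 1))  # 13.4 calendar days late
```

Note that the 13.4-day figure converts the 1.9178-week overrun at seven calendar days per week.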
If these calculations look familiar, it is because they are based on the well-tried formulas used in earned value management and earned schedule—all I’ve done is shift the metric to a direct measure of the work performed.
The two requirements to implement WPM are a practical metric for measuring the work to be delivered, and a planned rate of delivery for that metric across the duration of the project.
The metric used can be a core deliverable (e.g., 2,000 computers replaced in an organization), or a representation of work such as “story points,” or the monetary value of the components to be delivered to the client.
Peripheral and support activities can usually be ignored when establishing the WPM metric; they rarely impact the project delivery independently. Failures in the support areas typically manifest in delays to the primary delivery metric.
By Lynda Bourne
Agile in its various forms is becoming mainstream, and this means an increasing number of commercial contracts are being delivered by contractors who either choose, or are required, to use an agile methodology to create their contracted deliverables. While this is probably a good thing, this shift in approach can cause a number of problems. This post is a start in looking for practical solutions to some of these issues.
Two of the core tenets of agile are welcoming change to add value, and working with the client to discuss and resolve problems. While these are highly desirable attributes that should be welcomed in any contractual situation, what happens when the relationship breaks down, as it will on occasion?
The simple answer is that every contract is subject to law, and the ultimate solution to a dispute is a trial—after which a judge will decide the outcome based on applying the law to the evidence provided to the court. The process is impartial and focused on delivering justice, but justice is not synonymous with a fair and reasonable outcome. To obtain a fair and reasonable outcome, evidence is needed that can prove (or disprove) each of the propositions being put before the court.
The core elements disputed in 90% of court cases relating to contract performance are money and time: the contractor claims the client changed or did something that increased the time and cost of completing the work under the contract; the client denies this and counterclaims that the contractor finished late because it failed to properly manage the work of the contract.
The traditional approach to resolving these areas of dispute is to obtain expert evidence as to the cost of the change and the time needed to implement it. The cost element is not particularly affected by the methodology used to deliver the work; the additional work involved in the change, and its cost, can still be determined. The major issues arise in assessing a reasonable delay.
For the last 50+ years, courts have been told—by many hundreds of experts—that the appropriate way to assess delay is by using a critical path (CPM) schedule. Critical path theory assumes that to deliver a project successfully, there is one best sequence of activities to be completed in a pre-defined way. Consequently, this arrangement of the work can be modeled in a logic network—and based on this model, the effect of any change can be assessed.
Agile approaches the work of a project from a completely different perspective. The approach assumes there is a backlog of work to be accomplished, and the best people to decide what to do next are the project team members when they are framing the next sprint or iteration. Ideally, the team making these decisions will have the active participation of a client representative, but this is not always the case. The best sequence of working emerges; it is not predetermined.
There are some control tools available in agile, but diagrams such as a burndown (or burnup) chart are not able to show the effect of a client instructing the team to stop work on a feature for several weeks, or adding some new elements to the work. The instructions may have no effect (the team simply works on other things), or they may have a major effect. The problem is quantifying the effect to a standard that will be accepted as evidence in court proceedings. CPM has major flaws, but it can be used to show a precise delay as a specific consequence of a change in the logic diagram. Nothing similar seems to have emerged in the agile domain.
The purpose of this post is twofold. The first is to raise the issue. Hoping there will never be a major issue on an agile project that ends up in court is not good enough—hope is not a strategy. The second is to see if there are emerging concepts that can address the challenge of assessing delay and disruption in agile projects. Do you know of any?
By Lynda Bourne
It is important that both professionals and the organizations that employ them are socially and environmentally aware, and act responsibly to protect the rights of others. The financial consequences of failing to be socially aware started to be felt in the 1950s. Around this time, investors started excluding stocks, or entire industries, from their portfolios based on business activities such as tobacco production or involvement in the South African apartheid regime.
These considerations developed into the concept of environmental, social, and corporate governance. Today, ESG is an umbrella term that refers to specific data designed to be used by investors for evaluating the material risk that the organization is taking on based on the externalities it is generating.
The term ESG was first popularized in a 2004 report titled Who Cares Wins, a joint initiative of financial institutions prepared at the invitation of the United Nations. The UN-supported Principles for Responsible Investment (PRI), launched in 2006, then called for ESG factors to be incorporated into the financial evaluations of companies.
Under ESG reporting, organizations are required to present data from financial and non-financial sources that shows they are meeting the standards of agencies such as the Sustainability Accounting Standards Board, the Global Reporting Initiative, and the Task Force on Climate-related Financial Disclosures. The data must be made available to rating agencies and shareholders.
Corporate social responsibility is the flip side of ESG. CSR is the belief that corporations have a responsibility toward the society they operate within. This is not a new idea; it is possible to trace the concerns of some businesses toward society back to the Industrial Revolution and the work of primarily Quaker business owners to provide accommodation and reasonable living standards for their workers.
However, it was not until the 1970s that concepts such as social responsibility of businesses being commensurate with their power, and business functions by public consent, started to become mainstream. Today, CSR is a core consideration for most ethical businesses.
These concepts were turned into a structured set of guidelines in 1981, when Freer Spreckley suggested in Social Audit: A Management Tool for Co-operative Working that enterprises should measure and report on financial performance, social wealth creation, and environmental responsibility.
These ideas have become the triple bottom line (TBL), which is considered essential to effective organizational governance these days. Most of the major corporate governance frameworks require the TBL to be included in corporate reporting.
In his foreword to Corporate Governance: A Framework – Overview (prepared by the World Bank in 2000), Sir Adrian Cadbury summarized these objectives in his statement: "The aim is to align as nearly as possible the interests of individuals, corporations, and society."
Similar concepts to the TBL also form a core component of most codes of ethics and professional conduct. For example, the current version of PMI’s Code of Ethics and Professional Conduct includes:
So far, so good. A simple set of unambiguous requirements has been in place for 30-plus years. These simple (if difficult to achieve) concepts have been refined to make consideration of environmental (sustainability), social and financial outcomes important in every decision-making process, including those affecting the organization’s projects.
However, having become a hot topic for boards, investors and managers alike in the last couple of years, these ideas seem to be disappearing into a blizzard of acronyms that appear to be more about differentiating a consultant’s services than adding value. Some of the newer acronyms include:
My concern is that while the concepts defined by each of the acronyms above are of themselves valuable (once you work out what they mean), and a few—such as the UN’s sustainable development goals—add substantially to the TBL framework, most are either sub-sets of the overarching objectives defined in PMI’s Code of Ethics and Cadbury’s simple statement, or essentially cover the same concepts.
Do all of these extra acronyms add to the core objective of improving outcomes for people and the environment or not? What do you think?
Download the 2004 report from: https://www.unepfi.org/fileadmin/events/2004/stocks/who_cares_wins_global_compact_2004.pdf
 Spreckley, Freer (1981). Social Audit: A Management Tool for Co-operative Working. Beechwood College.
By Lynda Bourne
A couple of days ago, I received a survey from PMI asking about portfolio management. There’s nothing unusual about PMI undertaking a survey, but the types of project management approaches mentioned for the projects in the portfolio gave me cause for concern. The three choices offered were Agile, Waterfall and Other.
My response was “Other”—the portfolios I have direct experience with involve heavy engineering. Here is my perspective on the options offered by PMI:
Agile: A well-defined flexible process, based on the Agile Manifesto, applicable to software development and a wide range of other “soft projects” such as business change.
Waterfall: A five-stage software development methodology from the 1970s focused on designing a product (based on requirements) before starting development. The waterfall methodology is still used in some software development projects, but has never been applied to other types of projects.
Other: The vast majority of projects in the construction, engineering, oil & gas, defense, and aerospace industries based on the approaches described in A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—Sixth Edition.
These “other” projects generally have three phases:
The design of the product (ship, building, rocket, etc.) may be undertaken in full or in part during any one of the three phases. A minimum level of design is required to initiate procurement, but for simple buildings and civil engineering projects, it is not unusual for a complete design and specification to be provided by the client.
The procurement phase may be a simple pricing exercise, or a complex and phased design process (sometimes even involving the production of working prototypes), with selection being based on the capabilities of the design produced by the successful tenderer. In many projects, a significant amount of detailed design is still required during the delivery phase, including shop drawings produced by subcontractors and suppliers.
Similarly, the procurement arrangements vary widely. The client may choose to enter into some form of alliance or partnership with the preferred delivery agent based on shared risks and profits, a hard-dollar contract based on a fixed price to deliver a fixed scope, or one of the many other forms of contract arrangement.
The only certainties are that the typical project approaches used for the vast majority of “other” projects bear no resemblance to the waterfall approach, and this “other” classification includes more than two-thirds of the world’s projects by value.
So, my question is: How should these different types of project management be described? Your thoughts and ideas are welcome.