Project Management

Disciplined Agile Applied

This blog explores pragmatic agile and lean strategies for enterprise-class contexts.


Planning: When Have We Done Enough?

Categories: Planning

In the previous blog we examined the value of planning so that we could then determine how much planning we need to do.  The answer was “it depends,” the implication being that we need to do just enough planning for the situation that we face and no more.  In this blog we go deeper to explore what this depends on in practice.

We learned in the previous blog that our planning efforts should be sufficient for the situation that we face, what we refer to in Agile Modeling as just barely good enough (JBGE).  The following figure depicts the contextual factors that we should consider, with the factors motivating us to do more planning on the left-hand side under the red arrow and the factors enabling us to do less planning on the right-hand side under the green arrow.  These factors are mostly qualitative in nature, implying that it requires a judgement call on the part of the people involved with the planning effort to determine whether they’ve planned sufficiently.  Let’s explore each of these factors in more detail.

Figure. The factors to determine whether you’ve planned sufficiently.

There are four factors that motivate us to increase the amount of planning we do:

  1. Risk. The greater the risk that we face the more we will want to think before we act.  For example, transporting perishable medical supplies from Toronto to Timbuktu requires more logistical planning than transporting a box of Choose Your WoW! books from Toronto to Philadelphia.
  2. Domain complexity.  The more complex the problem that we’re trying to solve, the more we want to invest in exploring the domain and then thinking through how we will address the complexity.  For example, building a bridge across a river is more complex than building a dog house, so we will invest more time thinking through the building of the bridge.
  3. Solution complexity. The greater the complexity of the solution that we are building, or configuring in some cases, the more thinking we will need to do.  For example, installing an ERP system into an existing organization to replace dozens of legacy systems requires more planning than installing a new app on your phone.
  4. Desire for “predictability.” This can be a common cultural challenge within organizations, in particular the desire to be told up front how much a project will cost or how long it will take.  These are fairly reasonable requests when the endeavor is something our team has experience in doing, such as a house builder being asked to build a new house.  They are unreasonable requests when the team is being tasked with doing something that is ill-defined or whose scope will change throughout the endeavor, the situation typically faced by software development teams.  Far too many times I’ve seen teams run aground due to an unrealistic request for predictability – the teams overinvest in up-front modeling and planning, and then make promises regarding cost and schedule that they either are unable to keep or can only keep by cutting the scope or quality of the delivered solution.

There are six factors that enable us to reduce the amount of planning that we need to do:

  1. Skill.  The greater the skill of the people doing the work, the less planning will be required for that work. 
  2. Experience.  The greater the experience of the people doing the work, the less planning will be required for that work.
  3. Ease of change. The easier it is to change the work products being produced, including the solution itself, the less planning is required.  For example, if we are developing a website it is relatively easy to update and then redeploy web pages if we discover that they need to evolve, so we can get away with minimal planning.  Conversely, if we are building a bridge spanning a river it is relatively difficult and expensive to rebuild a supporting pillar in the middle of the river if we discover that it was placed in the wrong spot.  In this case we face greater deployment risk so we need to invest in greater planning to increase the chance of getting things right.
  4. Access to stakeholders.  The easier it is to access stakeholders, in particular decision makers who can provide direction and feedback to us, in general the less initial planning we need to do.  Being able to work closely with stakeholders enables us to think through the details when it’s most appropriate during a project, not just up front.
  5. Communication and collaboration. The greater the communication and collaboration within a team, perhaps because we’re co-located in a single room or because we are using sophisticated communication software, the less planning is required due to the increased opportunities for streamlined coordination. 
  6. Uncertainty. The more likely it is that something will change, the less planning we should do, because when it does change our planning efforts around it will have been for naught. 

To summarize, the answer to “when have we planned sufficiently?” is “it depends.”  In this blog we explored several factors that motivate us to increase the amount of planning we need to do and several factors that enable us to reduce it.  In effect we went beyond the typical consultant answer of “it depends” to the more robust answer of “it depends on this.”  

In the next blog in this 3-part series we explore how to be efficient in our planning efforts. I suspect the answer won’t be what you’re expecting.

Posted on: November 04, 2019 09:42 AM | Permalink | Comments (12)

Planning: How Valuable is it?

Categories: Planning

Winston Churchill once said “Plans are of little importance, but planning is essential.” What Churchill meant was that the value is in thinking something through, in planning it, before you do it. So this leads to some interesting questions: How much planning should we do?  How can we get the most value out of our planning efforts? I wish I could tell you that there is solid research evidence to answer these questions but sadly I haven’t been able to find any (if you know of any, I’d love to hear about it). Luckily, though, we do have a lot of experience and observational evidence to fall back on.

From an accounting point of view we know that value is calculated as benefit minus cost. The implication is that all we need to do is calculate the benefits of planning and the costs of doing so, apply a bit of math, and there you go.  How hard could that be? In practice it’s very difficult to calculate the benefits because some are qualitative, requiring a bit of creativity to turn them into a monetary figure. The bigger challenge is that planning is just one activity of many that go into achieving a benefit, making it difficult to tease out the planning portion of the benefit earned. Calculating the true costs of planning isn’t much easier when you start to consider the downstream implications of the work required to gather the inputs that go into the planning process.  This is a particular problem in creative domains such as software development – more on this in future blogs.  The implication is that in practice there isn’t an easy way to determine the actual value of planning, which explains the dearth of research evidence around this issue.  

Because there are no hard and fast rules to determine the value of planning, practitioners rely on belief systems to guide their thinking.  Figure 1 overviews two common belief systems, the traditional (sometimes inappropriately called “predictive”) belief system and the agile belief system.  We’ve labelled the latter as undisciplined because we will distinguish it from a Disciplined Agile (DA) strategy later.  The traditional belief system tells us that the more effort you put into planning the more value you will gain from doing so.  This belief system tells you to think things through in detail before you do them, which increases value by avoiding mistakes in the future, thereby reducing costs. At the other extreme, the (undisciplined) agile belief system tells us that there is very little value in planning, that you are better advised to focus on being able to react to feedback.  This enables you to deliver sooner and thereby earn the actual benefits sooner and longer, thereby increasing value.  The obvious downside, of course, is that mistakes will be made, sometimes serious ones, thereby increasing both costs and risks. 

Figure 1. What our belief systems tell us about the value of planning.

The methodologist in me tells me that extremes are generally a bad thing, that for most situations the answer lies somewhere in between.  The easy answer would be to simply draw a dashed line in between the two curves in Figure 1, or get really fancy and introduce some sort of range between the two lines, but the easy answer is wrong.   What we really need to do is look at what actually happens in practice, something that we did in the Agile Modeling (AM) community in the early 2000s.

In AM we needed to determine the value of modeling so as to provide coherent advice around when to model and to what extent.  To make a long story short, we observed that the value of modeling follows the law of diminishing returns, as you can see in Figure 2.  A little bit of modeling offered a lot of value.  So did a bit more, then a bit more, then a bit more.  But very quickly we reach a point of diminishing returns where the total costs of modeling exceed the total benefits – once a model is sufficient for the situation that you’re thinking through with it, any more modeling removes overall value.  This point is what we called the just barely good enough (JBGE) point, although others prefer the term “sufficient.”  So why am I talking about the value of modeling? Because modeling and planning are slightly different flavors of the same thing: thinking something through before jumping into doing it.

Figure 2. The value of planning/modeling.
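To make the shape of this curve concrete, here is a small numerical sketch. The benefit and cost functions are entirely hypothetical, chosen only to exhibit the behavior described above: benefit levels off as planning effort grows while cost keeps climbing, so net value peaks at a JBGE point and declines beyond it.

```python
import math

def benefit(effort):
    # Hypothetical: benefit rises quickly at first, then levels off.
    return 100 * math.log(1 + effort)

def cost(effort):
    # Hypothetical: each unit of planning effort costs the same amount.
    return 20 * effort

def value(effort):
    # Value is benefit minus cost, as in the accounting view above.
    return benefit(effort) - cost(effort)

# Scan a range of planning efforts and find where net value peaks.
efforts = [e / 10 for e in range(1, 201)]
jbge = max(efforts, key=value)  # the just barely good enough point
```

With these particular made-up functions the net value peaks at an effort of 4 units; past that point, every additional unit of planning removes overall value.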


JBGE?  What?

Many people, particularly those who follow a traditional “predictive” lifecycle approach, tend to be taken aback when they’re told the most effective plans are those that are just barely good enough.  The problem is that they often interpret JBGE as insufficient, yet by definition that is clearly not the case.  If your planning efforts haven’t been sufficient then you can still benefit from more planning, or to be accurate you can benefit from good planning at the right time.  But, if your planning efforts have been sufficient, or more than sufficient, then more planning is only going to remove value.  From a lean point of view over-planning is a waste.  

To summarize, the DA belief system is that the value of planning and modeling follows the law of diminishing returns. There is significant anecdotal evidence that bears this out but, as I indicated earlier, the research evidence is sparse.  For the past 15 years I’ve actively prodded researchers to gather this evidence when I could, but these efforts have always foundered when they discovered that this research is very difficult, long-term work.  

In the next blog in this 3-part series we explore how to determine when your planning efforts are sufficient.  Following that we work through how to be efficient at planning.


Posted on: October 24, 2019 11:59 PM | Permalink | Comments (16)

Measuring What Counts When Context Counts

Categories: GQM, Metrics, OKRs

In my blog Disciplined Agile Principle: Optimize Flow I wrote the following:

Measure what counts. When it comes to measurement, context counts. What are you hoping to improve? Quality? Time to market? Staff morale? Customer satisfaction? Combinations thereof? Every person, team, and organization has their own improvement priorities, and their own ways of working, so they will have their own set of measures that they gather to provide insight into how they’re doing and more importantly how to proceed. And these measures evolve over time as their situation and priorities evolve. The implication is that your measurement strategy must be flexible and fit for purpose, and it will vary across teams.

Based on that I received the following question:

Regarding number 7 above: Measure what counts. I think that's really important. How would you handle the need for some larger organizations to compare teams, directorates, divisions etc. with uniform metrics? Each effort is so different and will require different metrics.

This is a great question that deserves a fairly detailed answer, so I thought I'd capture it as a new blog posting.  Here are my thoughts.

  1. Using metrics to compare teams, directorates, ... is a spectacularly bad idea. The problem is that when you start using metrics to compare people or teams you motivate them to game the measures (they make the numbers appear better than they are).  Once people start to game their measures your metrics program falls apart because you're now receiving inaccurate data, preventing you from making informed decisions.  Even if you're not using metrics to compare people all it takes is the perception that you might be doing so and they'll start to game the measures where they can.  Luckily many measures are now collected in an automated manner from the use of our tools, making them much harder to game.  But you can't measure everything in an automated manner and any manually collected metric is at risk of being gamed, so sometimes you just need to accept the risks involved. BTW, using metrics to compare people/teams has been considered a bad practice in the metrics world for decades.   
  2. Use metrics to inform decisions.  I'm a firm believer in using metrics to make better-informed decisions, but that only works when the quality of the data is high.  So focus your metrics program on providing you with operational intelligence to enable your staff to better manage themselves.  Furthermore, metrics can and should be used to provide senior management with key information to govern effectively.  This implies that you need to make at least a subset of a team's metrics visible to other areas in your organization, often rolling them up into some sort of program or portfolio dashboard. We can choose to be smart about that.
  3. Applying uniform metrics across disparate teams is a spectacularly bad idea. The DA principle Context Counts tells us that to be effective different teams in different situations will choose to work in different ways. To enforce the same way of working (WoW) on disparate teams is guaranteed to inject waste (hence cost and risk) into your process.  Either teams will work in a manner that isn't effective for the situation that they face or they will work in an appropriate manner PLUS perform additional work to make it appear that they are following the "one official process."  Agile enterprises allow, and better yet enable,  teams to choose their own WoW and evolve it over time as their situation evolves (and it always does).  
  4. You can apply common categories or improvement goals across teams.  Having said all this, there is always a desire to monitor teams in a reasonably consistent manner.  Instead of inflicting a common set of metrics across all your teams, you should instead tell them that you want them to measure a common set of issues that you hope to improve and provide a mechanism to roll up a score in a common way.  For example, say your goal is to improve customer satisfaction. You could inflict a common metric across your teams, say net promoter score (NPS), and have everyone measure that.  That will make sense for some teams but be misaligned or simply inappropriate for others. There are many ways to measure customer satisfaction, NPS being one of them, and each one works well in some situations and not so well in others. Recognizing that one metrics strategy won't work for all teams, negotiate with them that you want to see X% improvement in customer satisfaction over a period of Y and leave it up to them to measure appropriately.  Setting goals/objectives is a fundamental concept for metrics strategies such as Goal Question Metric (GQM) and Objectives and Key Results (OKRs).   If you want to roll metrics up into a higher-level dashboard or report then you can also ask the teams to have a strategy to convert their context-sensitive metric(s) into a numerical score (say 1 to 10) which can then be rolled up and compared.  Just joking, comparison is still a bad idea, I'm just seeing if you're paying attention.  The downside of this approach is that it requires a bit more sophistication, particularly on the part of anyone in a governance position who wants to drill down into the context-specific dashboards of disparate teams. The benefit is that teams can be given the freedom to measure what counts to them, providing them with the intelligence that they require to make better-informed decisions. 
     See the article Apply Consistent Metric Categories Across an Agile Portfolio for a detailed description of this strategy. 
  5. Regulatory concerns may force you to collect a handful of metrics in a uniform way. It happens.  This is typically true of financial metrics, at least within a given geography.  For example, your organization will need to identify a consistent interpretation of how to measure CAPEX/OPEX across teams, although I've also seen that vary a bit within organizations for very good reasons.
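The strategy in point 4 can be sketched in code. Everything here, the team names, scales, and numbers, is hypothetical; the point is that each team measures customer satisfaction in its own way, converts the result to a shared 1-to-10 score for roll-up, and is judged only against its own agreed improvement objective rather than compared against other teams.

```python
def normalize(value, worst, best):
    """Map a team-specific measure onto a common 1-10 score for roll-up."""
    score = 1 + 9 * (value - worst) / (best - worst)
    return max(1.0, min(10.0, score))

# Each team measures customer satisfaction its own way...
team_a_nps = 35      # net promoter score, possible range -100..100
team_b_csat = 4.1    # survey average, possible range 1..5

# ...but converts it onto the same 1-10 scale for a portfolio dashboard.
scores = {
    "team_a": normalize(team_a_nps, worst=-100, best=100),
    "team_b": normalize(team_b_csat, worst=1, best=5),
}

def met_objective(baseline, current, target_pct):
    """Did a team improve its own measure by the agreed X% over period Y?"""
    return current >= baseline * (1 + target_pct / 100)

# Team A agreed to a 10% improvement in NPS over the period.
improved = met_objective(baseline=30, current=35, target_pct=10)
```

Note that the governance view only ever sees the normalized score and whether the team met its own objective, which is what keeps the measures comparable in category without inviting team-versus-team comparison.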

To summarize, it's a really bad idea to inflict a common set of metrics across teams. A far better strategy is to have common improvement objectives across teams and allow them to address those objectives, and to measure their effectiveness in doing so, as they believe to be the most appropriate for their situation.


Posted on: October 22, 2019 11:59 PM | Permalink | Comments (5)

Disciplined Agile Principle: Context Counts

Categories: Fundamentals, Principle

One of the seven principles behind Disciplined Agile (DA) is Context Counts. Every person is unique, with their own set of skills, preferences for workstyle, career goals, and learning styles. Every team is unique, not only because it is composed of unique people but also because it faces a unique situation. Your organization is also unique, even when there are other organizations that operate in the same marketplace that you do. For example, automobile manufacturers such as Ford, Audi, and Tesla all build the same category of product yet it isn’t much of a stretch to claim that they are very different companies. These observations – that people, teams, and organizations are all unique – lead us to a critical idea: your process and organization structure must be tailored for the situation that you currently face. In other words, context counts.


Figure 1 overviews the potential factors that you should consider regarding the context of the situation faced by your team.  We’ve organized them into two categories:

  1. Selection factors that drive your initial choices around your high-level way of working (WoW) and in particular your choice of initial lifecycle.
  2. Scaling factors that drive detailed decisions around your team’s WoW.

Of course it’s never this straightforward.  Selection factors will have an effect on your detailed WoW choices and scaling factors will also have an impact on your initial decisions.  Our point is that in general the selection factors have a bigger impact on the initial choices than do the scaling factors and similarly the scaling factors have a bigger impact on your detailed tailoring decisions than do the selection factors.

Figure 1. Potential context factors (click to enlarge).

Context factors are interdependent.  Figure 2 shows the major relationships between the context factors.  For example, you can see that:

  • As domain complexity rises the skills required to address that complexity also rise (harder problems require greater skill to solve).
  • As team member skills increase the size of the team required to address the problem it faces can shrink (a small team of skilled people can do the job of a larger team of lower-skilled people).
  • Your organizational culture and your team culture tend to affect one another, hopefully positively.
  • Your team culture will vary by organization distribution (your team will have a different culture than that of teams in a different division of your organization, or of teams in a different company).
  • The more organizationally distributed your team becomes the greater the chance that it will be geographically distributed as well.  For example, if you are outsourcing some of the work to another organization the people doing that work may be in another, lower-cost country.

Figure 2. Relationships between context factors (click to enlarge).


Let’s explore the scaling factors a bit. As we mentioned earlier, the scaling factors tend to drive your detailed decisions around your way of working (WoW). For example, a team of eight people working in a common team room on a very complex domain problem in a life-critical regulatory situation will organize themselves differently, and will choose to follow different practices, than a team of fifty people spread out across a corporate campus on a complex problem in a non-regulatory situation. Although these two teams could be working for the same company they could choose to work in very different ways.

Figure 3 depicts the scaling factors as a radar chart, sometimes called a spider chart. There are several interesting implications:

  • The further out you go on each spoke the greater the risk faced by a team. For example, it’s much riskier to outsource than it is to build your own internal team. A large team is a much riskier proposition than a small team. A life-critical regulatory situation is much riskier than a financial-critical situation, which in turn is riskier than facing no regulations at all.
  • Because teams in different situations will need to choose to work in a manner that is appropriate for the situation that they face, to help them tailor their approach effectively you need to give them choices.
  • Anyone interacting with multiple teams needs to be flexible enough to work with each of those teams appropriately. For example, you will govern that small, co-located, life-critical team differently than the medium-sized team spread across the campus. Similarly, an Enterprise Architect who is supporting both teams will collaborate appropriately with each.

Figure 3. Tactical scaling factors faced by teams.
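As a rough illustration of the “further out on each spoke means greater risk” idea, you could score each spoke and aggregate. The factor names, the 0-to-5 scales, and the scores below are all hypothetical, loosely based on the two example teams described earlier; a real assessment would use the actual scaling factors from Figure 3.

```python
# 0 = innermost point on a spoke (lowest risk), 5 = outermost (highest risk).
small_colocated_team = {
    "team_size": 0,                 # eight people
    "geographic_distribution": 0,   # common team room
    "domain_complexity": 4,         # very complex domain problem
    "regulatory_compliance": 5,     # life-critical regulatory situation
    "organizational_distribution": 0,
}

campus_team = {
    "team_size": 2,                 # fifty people
    "geographic_distribution": 2,   # spread across a corporate campus
    "domain_complexity": 3,         # complex problem
    "regulatory_compliance": 0,     # non-regulatory situation
    "organizational_distribution": 1,
}

def risk_profile(team):
    """Rough aggregate indicator: fraction of the maximum possible risk."""
    return sum(team.values()) / (5 * len(team))
```

Note that two teams can land on similar aggregate numbers for very different reasons, which is exactly why each one needs to tailor its WoW to its own spokes rather than to a single rolled-up score.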


The leading agile method Scrum provides solid guidance for delivering value in an agile manner but it is officially described by only a sixteen-page guide. Disciplined Agile recognizes that enterprise complexities require far more guidance and thus provides a comprehensive reference framework for adapting your agile approach for your unique context in a straightforward manner.  Being able to adapt your approach for your context with a variety of choices (such as those we provide via goal diagrams), rather than standardizing on one method or framework, is a very good thing.


This article is excerpted from Chapter 2 of the book An Executive’s Guide to Disciplined Agile: Winning the Race to Business Agility.


Posted on: October 18, 2019 12:00 AM | Permalink | Comments (6)

Disciplined Agile Principle: Optimize Flow

Categories: Fundamentals, Principle

Optimize Flow

One of the seven principles behind Disciplined Agile (DA) is Optimize Flow.  Your organization is a complex adaptive system (CAS) of interacting teams and groups that individually evolve continuously and affect each other as they do. The challenge that we face is how to ensure that these teams collaborate in such a way as to effectively implement our organization’s value streams. How do we ensure that these teams are well aligned, remain well aligned, and better yet improve their alignment over time?

The implication is that as an organization we need to optimize our overall workflow. The DA toolkit supports a large number of strategies to do so:

  1. Deliver continuously at a sustainable pace. The Disciplined Agile Manifesto advises teams to deliver consumable solutions frequently, from a couple of weeks to a couple of months, with a preference for the shorter time scale. This philosophy is one of four, in this case Deliver, promoted by the Heart of Agile. Similarly it is one of four philosophies of Modern Agile, in this case Deliver Value Continuously, and it is a fundamental strategy of Disciplined DevOps. Since 2001 agilists have shown that it is possible to deliver high-quality systems quickly. By limiting the work of a team to its capacity, which is reflected by the team’s velocity (the number of “points” of functionality that a team delivers each iteration), you can establish a reliable and repeatable flow of work. An effective organization doesn’t demand that teams do more than they are capable of, but instead asks them to self-organize and determine what they can accomplish. Enabling these teams to deliver potentially shippable solutions on demand motivates them to stay focused on continuously adding value.
  2. Optimize the whole. Disciplined agilists work in an “enterprise aware” manner – they realize that their team is one of many teams within their organization and as a result they should work in such a way as to do what is best for the overall organization, not just what is convenient for them. More importantly, they strive to streamline the overall process, to optimize the whole as the lean canon advises us to do. Finding ways to reduce the overall cycle time – the total time from the beginning to the end of the process of providing value to a customer – is a key part of doing so.
  3. Make work flow. The 14th principle of the DA Manifesto is to visualize work to produce a smooth delivery flow and keep work-in-progress (WIP) to a minimum. This strategy enables teams to identify and then remove bottlenecks quickly and is adopted straight out of Kanban.
  4. Eliminate waste. Lean thinking advocates regard any activity that does not directly add value to the finished product as waste. Waste includes time waiting for others to get something done, creation of unnecessary work artifacts or product features, and collaboration churn resulting from crossing organizational boundaries. To reduce waste it is critical that teams be allowed to self-organize and operate in a manner that reflects the work they’re trying to accomplish.
  5. Improve continuously. As a leader you want to promote a culture of continuous improvement, including the sharing of skills and knowledge between people and teams, within your organization. This is seen as a fundamental philosophy of agile – The 12th principle behind the Agile Manifesto is “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly” and both Improve and Reflect are principles of the Heart of Agile. A key technique that supports continuous improvement is “double-loop learning” that promotes the idea that you modify your approach based on what you learn from your experiences.
  6. Experiment to learn. Probably the most significant impact of Eric Ries’ work in Lean Startup is the popularization of the experimentation mindset, the application of fundamental concepts of the scientific method to business. This mindset can be applied to process improvement following what Ries calls a validated learning strategy. From a process point of view, the strategy is to first identify an improvement hypothesis along the lines of “We think doing X will improve Y.” Second, run a short experiment by trying it out in a controlled manner, with measurements in place to see the effect of the change. Third, observe what happens to determine the efficacy of X and whether you need to evolve X and run a follow-up experiment (double-loop learning). An experimentation mindset reinforces and often speeds up the strategy of continuous learning. As we pointed out earlier, to enable an experimentation mindset within your organization you as a leader must establish a safe environment where experimentation is encouraged and rewarded.
  7. Measure what counts. When it comes to measurement, context counts. What are you hoping to improve? Quality? Time to market? Staff morale? Customer satisfaction? Combinations thereof? Every person, team, and organization has their own improvement priorities, and their own ways of working, so they will have their own set of measures that they gather to provide insight into how they’re doing and more importantly how to proceed. And these measures evolve over time as their situation and priorities evolve. The implication is that your measurement strategy must be flexible and fit for purpose, and it will vary across teams.
  8. Prefer long-lived stable teams. A very common trend in the agile community is the movement away from projects, and the project management mindset in general, to long-lived teams. Such teams evolve over time, people occasionally join the team and people occasionally leave the team, but the team itself may run for years. For example, Microsoft has had a team developing and sustaining Microsoft Word since 1981 with no end in sight.  It’s important to note that this move away from project management in the agile community is not a move away from management but instead from the inherent risks and overhead of projects.
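Two of the strategies above lend themselves to simple arithmetic. This sketch, with hypothetical numbers, shows capacity derived from observed velocity (point 1) and Little's Law, the queueing result underlying the Kanban advice in point 3: average cycle time equals average work-in-progress divided by average throughput, so keeping WIP low shortens cycle time without changing throughput.

```python
def velocity(completed_points_per_iteration):
    """A team's demonstrated capacity: average points delivered per iteration."""
    return sum(completed_points_per_iteration) / len(completed_points_per_iteration)

# Don't demand more than the team has shown it can deliver.
recent_iterations = [21, 18, 23, 20]   # hypothetical iteration results
capacity = velocity(recent_iterations)

# Little's Law: average cycle time = average WIP / average throughput.
def cycle_time(avg_wip, throughput_per_day):
    return avg_wip / throughput_per_day

# Halving WIP halves cycle time when throughput stays constant.
before = cycle_time(avg_wip=12, throughput_per_day=2)
after = cycle_time(avg_wip=6, throughput_per_day=2)
```

This is why visualizing work and limiting WIP makes flow visible: the same arithmetic that forecasts a team's next iteration also explains why a crowded board delivers each item more slowly.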


This article is excerpted from Chapter 2 of the book An Executive’s Guide to Disciplined Agile: Winning the Race to Business Agility.


Posted on: October 16, 2019 12:00 AM | Permalink | Comments (6)
