Disciplined Agile

This blog explores pragmatic agile and lean strategies for enterprise-class contexts.


Planning: The Efficiency Zone

Categories: Planning

Yogi Berra was fond of saying “If you don’t know where you are going, you’ll end up someplace else.”  What Yogi was getting at is that it’s a good idea to do a bit of thinking, a bit of planning, before you get started.  In the previous blog, Planning: When Have We Done Enough?, we explored the factors that affect how much planning we should do.  The challenge is that those factors are qualitative in nature, requiring us to make a decision based on intuition.

In this blog we explore how to increase the chance of getting as close as we can to the maximum value from our planning efforts.  The following figure is organized into four planning quadrants, each of which represents a target area for our planning efforts.

Figure – The four quadrants of planning efficiency.

Planning quadrants

Let’s explore each quadrant.  In order from most desirable to least desirable, they are:

  1. Q1: Efficient. The most efficient approach is to do a bit less planning than is sufficient, to undershoot the mark.  The idea is that you want to get close to sufficient and be prepared to flesh out aspects of your plan later in the lifecycle when you discover that it’s insufficient.  This strategy assumes that you have easy access to the subject matter experts (SMEs) and decision makers, enabling you to quickly adjust and evolve your plan as needed.
  2. Q2: Comfortable. Some people will aim for this quadrant out of the belief that they need to think everything through right now, that they won’t be allowed to update the plan easily later in the lifecycle.  Although this quadrant is comfortable for people who are new to Disciplined Agile (DA) ways of working (WoW), it is also wasteful because they’ve invested too much in the planning effort.
  3. Q3: Insufficient. This is where a lot of agile purists, or non-managers who are new to agile, end up. When planning is grossly insufficient like this, your team tends to work in an undirected manner that results in a lot of rework later.
  4. Q4: Wasteful. This is where a lot of traditional managers who are new to agile WoW land. This strategy is particularly problematic in areas of great uncertainty, in particular software development, where requirements and underlying technologies change rapidly.  Planning efforts that land in quadrant 4 are often caused by an impedance mismatch between the reality of the situation on the ground and the expectations of the people doing the planning, or of the people who require a detailed plan before the rest of the work commences.  Very often the environment has changed but the planning methodology hasn’t, so lighten up.

Eisenhower said “Plans are worthless, but planning is everything.” There can be significant value in planning, but it is possible to go too far, to plan too much.  Although more research is needed in this space, it appears that the value of planning follows the law of diminishing returns – there is significant value in doing some planning, but that value quickly reaches a maximum point. Determining that maximum point is a qualitative, “gut feel” decision based on a collection of factors such as the complexity and risk of the situation, the skills and experience of the people involved, and the uncertainty that you face.  Surprisingly, the most efficient approach to planning is to aim for plans that are slightly insufficient: close to the mark, but in need of a bit more work when you discover that you need to work through a few more details.  I hope this blog series has been food for thought.

Posted on: November 19, 2019 06:39 AM

Planning: When Have We Done Enough?

Categories: Planning

In the previous blog we examined what the value of planning is, so that we can then determine how much planning we need to do.  The answer to this question was “it depends,” and the implication was that we need to do just enough planning for the situation that we face and no more.  In this blog we go deeper to explore what this depends on in practice.

We learned in the previous blog that our planning efforts should be sufficient, what we referred to as just barely good enough (JBGE) in Agile Modeling, for the situation that we face.  The following figure depicts the contextual factors that we should consider, with the factors motivating us to do more planning on the left-hand side under the red arrow and the factors enabling us to do less planning on the right-hand side under the green arrow.  These factors are mostly qualitative in nature, implying that it requires a judgement call on the part of the people involved with the planning effort to determine whether they’ve planned sufficiently. Let’s explore each of these factors in more detail.

Figure. The factors to determine whether you’ve planned sufficiently.

There are four factors that motivate us to increase the amount of planning we do:

  1. Risk. The greater the risk that we face the more we will want to think before we act.  For example, transporting perishable medical supplies from Toronto to Timbuktu requires more logistical planning than transporting a box of Choose Your WoW! books from Toronto to Philadelphia.
  2. Domain complexity.  The more complex the problem that we’re trying to solve, the more we want to invest in exploring the domain and then thinking through how we will address the complexity.  For example, building a bridge across a river is more complex than building a dog house, so we will invest more time thinking through the building of the bridge.
  3. Solution complexity. The greater the complexity of the solution that we are building, or configuring in some cases, the more thinking we will need to do.  For example, installing an ERP system into an existing organization to replace dozens of legacy systems requires more planning than installing a new app on your phone.
  4. Desire for “predictability.” This can be a common cultural challenge within organizations, in particular the desire to be told up front how much a project will cost or how long it will take.  These are fairly reasonable requests when the endeavor is something our team has experience in doing, such as a house builder being asked to build a new house. They are unreasonable requests when the team is being tasked with doing something that is ill-defined or whose scope will change throughout the endeavor, the situation typically faced by software development teams.  Far too many times I’ve seen teams run aground due to an unrealistic request for predictability – the teams overly invest in up-front modeling and planning, then make promises regarding cost and schedule that they’re either unable to keep or can only keep by cutting the scope or quality of the delivered solution.

There are six factors that enable us to reduce the amount of planning that we need to do:

  1. Skill.  The greater the skill of the people doing the work, the less planning will be required for that work. 
  2. Experience.  The greater the experience of the people doing the work, the less planning will be required for that work.
  3. Ease of change. The easier it is to change the work products being produced, including the solution itself, the less planning is required.  For example, if we are developing a website it is relatively easy to update and then redeploy web pages when we discover that they need to evolve, so we can get away with minimal planning.  Conversely, if we are building a bridge spanning a river it is relatively difficult and expensive to rebuild a supporting pillar in the middle of the river if we discover that it was placed in the wrong spot. In this case we face greater deployment risk, so we need to invest in greater planning to increase the chance of getting things right.
  4. Access to stakeholders.  The easier it is to access stakeholders, in particular decision makers who can provide direction and feedback to us, in general the less initial planning we need to do.  Being able to work closely with stakeholders enables us to think through the details when it’s most appropriate during a project, not just up front.
  5. Communication and collaboration. The greater the communication and collaboration within a team, perhaps because we’re co-located in a single room or because we are using sophisticated communication software, the less planning is required due to the increased opportunities for streamlined coordination. 
  6. Uncertainty. The more likely that something will change the less planning you should do because when it does change your planning efforts around it will have been for naught. 

To summarize, the answer to “when have we planned sufficiently?” is “it depends.”  In this blog we explored several factors that motivate us to increase the amount of planning we need to do and several factors that enable us to reduce it.  In effect we went beyond the typical consultant answer of “it depends” to the more robust answer of “it depends on this.”

In the next blog in this 3-part series we explore how to be efficient in our planning efforts. I suspect the answer won’t be what you’re expecting.

Posted on: November 04, 2019 09:42 AM

Planning: How Valuable is it?

Categories: Planning

Winston Churchill once said “Plans are of little importance, but planning is essential.” What Churchill meant was that the value is in thinking something through, in planning it, before you do it. This leads to some interesting questions: How much planning should we do?  How can we get the most value out of our planning efforts? I wish I could tell you that there is solid research evidence to answer these questions but sadly I haven’t been able to find any (if you know of any, I’d love to hear about it). Luckily, we do have a lot of experience and observational evidence to fall back on.

From an accounting point of view we know that value is calculated as benefit minus cost. The implication is that all we need to do is calculate the benefits of planning and the costs of doing so, apply a bit of math, and there you go.  How hard could that be? As we know, it’s very difficult to calculate the benefits because some are qualitative, requiring a bit of creativity to turn them into a monetary figure. The bigger challenge is that planning is just one activity of many that go into achieving a benefit, making it difficult to tease out the planning portion of the benefit earned. Calculating the true costs of planning isn’t much easier when you start to consider the downstream implications of the work required to gather the inputs that go into the planning process.  This is a particular problem in creative domains such as software development; more on this in future blogs.  The implication is that in practice there isn’t an easy way to determine the actual value of planning, which explains the dearth of research evidence around this issue.

Because there are no hard and fast rules to determine the value of planning, practitioners rely on belief systems to guide their thinking.  Figure 1 overviews two common belief systems, the traditional (sometimes inappropriately called “predictive”) belief system and the agile belief system.  We’ve labelled the latter as undisciplined because we will distinguish it from a Disciplined Agile (DA) strategy later.  The traditional belief system tells us that the more effort you put into planning the more value you will gain from doing so.  This belief system tells you to think things through in detail before you do them, which increases value by avoiding mistakes in the future, thereby reducing costs. At the other extreme, the (undisciplined) agile belief system tells us that there is very little value in planning, that you are better advised to focus on being able to react to feedback.  This enables you to deliver sooner and thereby earn the actual benefits sooner and for longer, thereby increasing value.  The obvious downside, of course, is that mistakes will be made, sometimes serious ones, thereby increasing both costs and risks.

Figure 1. What our belief systems tell us about the value of planning.

The methodologist in me tells me that extremes are generally a bad thing, that for most situations the answer lies somewhere in between.  The easy answer would be to simply draw a dashed line in between the two curves in Figure 1, or get really fancy and introduce some sort of range between the two lines, but the easy answer is wrong.   What we really need to do is look at what actually happens in practice, something that we did in the Agile Modeling (AM) community in the early 2000s.

In AM we needed to determine the value of modeling so as to provide coherent advice around when to model and to what extent.  To make a long story short, we observed that the value of modeling followed the law of diminishing returns, as you can see in Figure 2.  A little bit of modeling offered a lot of value.  So did a bit more, then a bit more, then a bit more.  But very quickly we reach a point of diminishing returns where the total costs of modeling exceed the total benefits – once a model is sufficient for the situation that you’re thinking through with it, any more modeling removes overall value.  This is what we called the just barely good enough (JBGE) point, although others prefer the term “sufficient.”  So why am I talking about the value of modeling? Because modeling and planning are slightly different flavors of the same thing: thinking something through before jumping into doing it.

Figure 2. The value of planning/modeling.
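The diminishing-returns argument can be made concrete with a toy model. In the sketch below the curve shapes and all of the numbers are invented for illustration, not measured data: benefit flattens out as planning effort grows while cost keeps climbing, so net value peaks at a modest effort level, the JBGE point.

```python
import math

def planning_value(effort, max_benefit=100.0, learning_rate=0.5, cost_per_unit=10.0):
    """Toy model of the value of planning: benefit rises with diminishing
    returns while cost grows linearly with effort."""
    benefit = max_benefit * (1.0 - math.exp(-learning_rate * effort))  # flattens out
    cost = cost_per_unit * effort                                      # keeps climbing
    return benefit - cost

# Sweep effort levels and find the point where additional planning
# starts to remove value rather than add it.
efforts = [e / 10.0 for e in range(0, 101)]
jbge = max(efforts, key=planning_value)
print(f"JBGE effort: {jbge:.1f}, peak value: {planning_value(jbge):.1f}")
```

With these made-up parameters the net value peaks at an effort of roughly 3.2 units; doubling the planning effort beyond that point drives value back down, which is exactly the over-planning waste described above.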


JBGE?  What?

Many people, particularly those who follow a traditional “predictive” lifecycle approach, tend to be taken aback when they’re told that the most effective plans are those that are just barely good enough.  The problem is that they often interpret JBGE as insufficient, yet by definition that is clearly not the case.  If your planning efforts haven’t been sufficient then you can still benefit from more planning, or, to be more accurate, you can benefit from good planning at the right time.  But if your planning efforts have been sufficient, or more than sufficient, then more planning is only going to remove value.  From a lean point of view, over-planning is waste.

To summarize, the DA belief system is that the value of planning and modeling follows the law of diminishing returns. There is significant anecdotal evidence that bears this out but, as I indicated earlier, the research evidence is sparse.  For the past 15 years I’ve prodded researchers to gather this evidence when I could, but these efforts have always foundered once they discovered that this is very difficult, long-term work.

In the next blog in this 3-part series we explore how to determine when your planning efforts are sufficient.  Following that we work through how to be efficient at planning.


Posted on: October 24, 2019 11:59 PM

Measuring What Counts When Context Counts

Categories: GQM, Metrics, OKRs

In my blog Disciplined Agile Principle: Optimize Flow I wrote the following:

Measure what counts. When it comes to measurement, context counts. What are you hoping to improve? Quality? Time to market? Staff morale? Customer satisfaction? Combinations thereof? Every person, team, and organization has their own improvement priorities, and their own ways of working, so they will have their own set of measures that they gather to provide insight into how they’re doing and more importantly how to proceed. And these measures evolve over time as their situation and priorities evolve. The implication is that your measurement strategy must be flexible and fit for purpose, and it will vary across teams.

Based on that I received the following question:

Regarding number 7 above: Measure what counts. I think that's really important. How would you handle the need for some larger organizations to compare teams, directorates, divisions etc. with uniform metrics? Each effort is so different and will require different metrics.

This is a great question that deserves a fairly detailed answer, so I thought I'd capture it as a new blog posting.  Here are my thoughts.

  1. Using metrics to compare teams, directorates, ... is a spectacularly bad idea. The problem is that when you start using metrics to compare people or teams you motivate them to game the measures (they make the numbers appear better than they are).  Once people start to game their measures your metrics program falls apart because you're now receiving inaccurate data, preventing you from making informed decisions.  Even if you're not using metrics to compare people, all it takes is the perception that you might be doing so and they'll start to game the measures where they can.  Luckily many measures are now collected in an automated manner from the use of our tools, making them much harder to game.  But you can't measure everything in an automated manner, and any manually collected metric is at risk of being gamed, so sometimes you just need to accept the risks involved. BTW, using metrics to compare people/teams has been considered a bad practice in the metrics world for decades.
  2. Use metrics to inform decisions.  I'm a firm believer in using metrics to make better-informed decisions, but that only works when the quality of the data is high.  So focus your metrics program on providing you with operational intelligence to enable your staff to better manage themselves.  Furthermore, metrics can and should be used to provide senior management with key information to govern effectively.  This implies that you need to make at least a subset of a team's metrics visible to other areas in your organization, often rolling them up into some sort of program or portfolio dashboard. We can choose to be smart about that.
  3. Applying uniform metrics across disparate teams is a spectacularly bad idea. The DA principle Context Counts tells us that to be effective different teams in different situations will choose to work in different ways. To enforce the same way of working (WoW) on disparate teams is guaranteed to inject waste (hence cost and risk) into your process.  Either teams will work in a manner that isn't effective for the situation that they face or they will work in an appropriate manner PLUS perform additional work to make it appear that they are following the "one official process."  Agile enterprises allow, and better yet enable,  teams to choose their own WoW and evolve it over time as their situation evolves (and it always does).  
  4. You can apply common categories or improvement goals across teams.  Having said all this, there is always a desire to monitor teams in a reasonably consistent manner.  Instead of inflicting a common set of metrics across all your teams, tell them that you want them to measure a common set of issues that you hope to improve and provide a mechanism to roll up a score in a common way.  For example, say your goal is to improve customer satisfaction. You could inflict a common metric across your teams, say net promoter score (NPS), and have everyone measure that.  That will make sense for some teams but be misaligned or simply inappropriate for others. There are many ways to measure customer satisfaction, NPS being one of them, and each one works well in some situations and not so well in others. Recognizing that one metrics strategy won't work for all teams, negotiate with them that you want to see X% improvement in customer satisfaction over a period of Y and leave it up to them to measure appropriately.  Setting goals/objectives is a fundamental concept for metrics strategies such as Goal Question Metric (GQM) and Objectives and Key Results (OKRs).  If you want to roll metrics up into a higher-level dashboard or report then you can also ask the teams to have a strategy to convert their context-sensitive metric(s) into a numerical score (say 1 to 10) which can then be rolled up and compared.  Just joking, comparison is still a bad idea, I'm just seeing if you're paying attention.  The downside of this approach is that it requires a bit more sophistication, particularly on the part of anyone in a governance position who wants to drill down into the context-specific dashboards of disparate teams. The benefit is that teams can be given the freedom to measure what counts to them, providing them with the intelligence that they require to make better-informed decisions.
See the article Apply Consistent Metric Categories Across an Agile Portfolio for a detailed description of this strategy. 
  5. Regulatory concerns may force you to collect a handful of metrics in a uniform way. It happens.  This is typically true of financial metrics, at least within a given geography.  For example, your organization will need to identify a consistent interpretation of how to measure CAPEX/OPEX across teams, although I've also seen that vary a bit within organizations for very good reasons.
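The roll-up idea from point 4 can be sketched in code. Everything here is an illustrative assumption rather than part of GQM, OKRs, or DA: the team names and metrics are hypothetical, and the linear mapping onto a 1-to-10 scale is just one of many scoring strategies a team might negotiate.

```python
def normalized_score(current, baseline, target):
    """Map a context-specific metric onto a common 1-10 scale, where 1
    means no improvement over the agreed baseline and 10 means the agreed
    target has been met or exceeded.  Direction and units don't matter
    because improvement is measured as progress from baseline toward target."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    progress = (current - baseline) / (target - baseline)
    progress = max(0.0, min(1.0, progress))  # clamp runaway values to [0, 1]
    return 1.0 + 9.0 * progress

# Two hypothetical teams measuring customer satisfaction their own way:
# one via NPS (higher is better), one via support escalations per month
# (lower is better).  Both roll up onto the same common scale.
teams = [
    {"name": "Payments", "baseline": 20, "target": 40, "current": 35},  # NPS
    {"name": "Mobile", "baseline": 12, "target": 4, "current": 6},      # escalations
]
for team in teams:
    score = normalized_score(team["current"], team["baseline"], team["target"])
    print(f"{team['name']}: {score:.2f} / 10")
```

A governance dashboard can then display each team's score against its own negotiated baseline and target, without pretending that the underlying metrics are comparable across teams.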

To summarize, it's a really bad idea to inflict a common set of metrics across teams. A far better strategy is to have common improvement objectives across teams and to allow them to address those objectives, and to measure their effectiveness in doing so, in whatever way they believe is most appropriate for their situation.


Posted on: October 22, 2019 11:59 PM

Disciplined Agile Principle: Context Counts

Categories: Fundamentals, Principle

One of the seven principles behind Disciplined Agile (DA) is Context Counts. Every person is unique, with their own set of skills, preferences for workstyle, career goals, and learning styles. Every team is unique, not only because it is composed of unique people but also because it faces a unique situation. Your organization is also unique, even when there are other organizations that operate in the same marketplace that you do. For example, automobile manufacturers such as Ford, Audi, and Tesla all build the same category of product, yet it isn’t much of a stretch to claim that they are very different companies. These observations – that people, teams, and organizations are all unique – lead us to a critical idea: your process and organization structure must be tailored for the situation that you currently face. In other words, context counts.


Figure 1 overviews the potential factors that you should consider regarding the context of the situation faced by your team.  We’ve organized them into two categories:

  1. Selection factors that drive your initial choices around your high-level way of working (WoW) and in particular your choice of initial lifecycle.
  2. Scaling factors that drive detailed decisions around your team’s WoW.

Of course it’s never this straightforward.  Selection factors will have an effect on your detailed WoW choices, and scaling factors will also have an impact on your initial decisions.  Our point is that, in general, the selection factors have a bigger impact on the initial choices than the scaling factors do, and similarly the scaling factors have a bigger impact on your detailed tailoring decisions than the selection factors do.

Figure 1. Potential context factors (click to enlarge).

Context factors are interdependent.  Figure 2 shows the major relationships between the context factors.  For example, you can see that:

  • As domain complexity rises the skills required to address that complexity also rise (harder problems require greater skill to solve).
  • As team member skills increase the size of the team required to address the problem it faces can shrink (a small team of skilled people can do the job of a larger team of lower-skilled people).
  • Your organizational culture and your team culture tend to affect one another, hopefully positively.
  • Your team culture will vary by organization distribution (your team will have a different culture than that of teams in a different division of your organization, or of teams in a different company).
  • The more organizationally distributed your team becomes the greater the chance that it will be geographically distributed as well.  For example, if you are outsourcing some of the work to another organization the people doing that work may be in another, lower-cost country.

Figure 2. Relationships between context factors (click to enlarge).


Let’s explore the scaling factors a bit. As we mentioned earlier, the scaling factors tend to drive your detailed decisions around your way of working (WoW). For example, a team of eight people working in a common team room on a very complex domain problem in a life-critical regulatory situation will organize themselves differently, and will choose to follow different practices, than a team of fifty people spread out across a corporate campus on a complex problem in a non-regulatory situation. Although these two teams could be working for the same company they could choose to work in very different ways.

Figure 3 depicts the scaling factors as a radar chart, sometimes called a spider chart. There are several interesting implications:

  • The further out you go on each spoke the greater the risk faced by a team. For example, it’s much riskier to outsource than it is to build your own internal team. A large team is a much riskier proposition than a small team. A life-critical regulatory situation is much riskier than a financial-critical situation, which in turn is riskier than facing no regulations at all.
  • Because teams in different situations will need to choose to work in a manner that is appropriate for the situation that they face, to help them tailor their approach effectively you need to give them choices.
  • Anyone interacting with multiple teams needs to be flexible enough to work with each of those teams appropriately. For example, you will govern that small, co-located, life-critical team differently than the medium-sized team spread across the campus. Similarly, an Enterprise Architect who is supporting both teams will collaborate appropriately with each.
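The bullet points above can be captured as data. This is only a sketch: the spoke names echo the scaling factors discussed in this post, but the 0-to-4 positions and the additive scoring are illustrative assumptions, not an official DA risk formula.

```python
# Position along each spoke: 0 = lowest-risk end (e.g. a small, co-located
# team facing no regulations), 4 = highest-risk end of that spoke.
SPOKES = ["team size", "geographic distribution", "organizational distribution",
          "regulatory compliance", "domain complexity", "solution complexity"]

def risk_profile(positions):
    """Summarize a team's radar chart as a crude aggregate: the further
    out a team sits on its spokes, the higher its overall risk score."""
    missing = [spoke for spoke in SPOKES if spoke not in positions]
    if missing:
        raise ValueError(f"missing spokes: {missing}")
    total = sum(positions[spoke] for spoke in SPOKES)
    return total, total / (4 * len(SPOKES))  # raw score and fraction of maximum

# A small, co-located team building a life-critical solution:
team = {"team size": 0, "geographic distribution": 0,
        "organizational distribution": 0, "regulatory compliance": 4,
        "domain complexity": 3, "solution complexity": 2}
raw, fraction = risk_profile(team)
print(f"risk score: {raw} ({fraction:.0%} of maximum)")
```

A profile like this is useful for a team reflecting on its own situation, or for someone governing several teams who needs to tailor their approach to each one; it is not for ranking teams against one another.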

Figure 3. Tactical scaling factors faced by teams.


The leading agile method Scrum provides solid guidance for delivering value in an agile manner, but it is officially described by only a sixteen-page guide. Disciplined Agile recognizes that enterprise complexities require far more guidance and thus provides a comprehensive reference framework for adapting your agile approach to your unique context in a straightforward manner.  Being able to adapt your approach to your context through a variety of choices (such as those we provide via goal diagrams), rather than standardizing on one method or framework, is a very good thing.


This article is excerpted from Chapter 2 of the book An Executive’s Guide to Disciplined Agile: Winning the Race to Business Agility.


Posted on: October 18, 2019 12:00 AM

"People are always blaming their circumstances for what they are. I don't believe in circumstances. The people who get on in the world are the people who get up and look for the circumstances they want and, if they can't find them, make them."

- George Bernard Shaw