Project Management

Disciplined Agile

This blog contains details about various aspects of PMI's Disciplined Agile (DA) tool kit, including new and upcoming topics.

An overview of the Disciplined Agile (DA) milestones

Disciplined Agile (DA) milestones

In some of the organizations that we’ve helped to adopt Disciplined Agile (DA), the senior leadership, and often middle management, are reluctant at first to allow teams to choose their way of working (WoW). Their traditional mindset often tells them that teams need to follow the same “repeatable process” so that senior leadership may oversee and guide them. The problem with forcing teams to follow the same process, or even just to conform to the same defined governance strategy, is that it won't mesh well with all teams, thereby injecting unnecessary risk and cost into their WoW. In DA we choose to do better than that.

The DA approach is to have consistent, risk-based (not artifact-based) milestones addressed by teams, as appropriate, so as to provide leadership with visibility and collaboration points into the teams that they oversee. This approach is based on two observations:

  1. We can have common governance across teams without enforcing a common process. A fundamental enabler of this is to adopt common, risk-based (not artifact-based) milestones across the life cycles, which is exactly what the DA team-based life cycles do. These common milestones are depicted in the figure above and the risks they address are summarized in Table 1 below.
  2. Repeatable outcomes are far more important than repeatable processes. Our stakeholders want us to spend their IT investment wisely. They want us to produce, and evolve, solutions that meet their actual needs. They want these solutions quickly. They want solutions that enable them to compete effectively in the marketplace. These are the types of outcomes that stakeholders would like to have over and over (i.e., repeatedly); they really aren’t that concerned with the processes that we follow to achieve them. 
Table 1. The risks addressed by the life cycle milestones.

  • Stakeholder vision. Do stakeholders agree with your strategy?
  • Proven architecture. Can you actually implement this?
  • Continued viability. Does the effort still make sense?
  • Sufficient functionality. Has the team produced (at least) a minimum business increment (MBI)?
  • Production ready. Will the solution work in production and are stakeholders ready to receive it?
  • Delighted stakeholders. Are stakeholders happy with the deployed solution?

A deeper look at the milestones

Let’s explore DAD’s risk-based milestones in a bit more detail:

  • Stakeholder vision. The aim of the Inception phase is to spend a short, yet sufficient amount of time, typically a few days to a few weeks, to gain stakeholder agreement that the initiative makes sense and should continue into the Construction phase. By addressing each of the Inception goals, the delivery team will capture traditional project information related to initial scope, technology, schedule, budget, risks, and other information albeit in as simple a fashion as possible. This information is consolidated and presented to stakeholders as a vision statement as described by the Develop Common Vision process goal. The format of the vision and formality of review will vary according to your situation. A typical practice is to review a short set of slides with key stakeholders at the end of the Inception phase to ensure that everyone is on the same page with regard to the project intent and delivery approach.
  • Proven architecture. Early risk mitigation is a part of any good engineering discipline. As the Prove Architecture Early process goal indicates, there are several strategies you may choose to adopt, the most effective of which is to build an end-to-end skeleton of working code that implements technically risky business requirements. A key responsibility of the architecture owner role is to identify potential implementation risks during the Inception phase. It is expected that these risks will have been reduced or eliminated by implementing related functionality somewhere between one and three iterations into the Construction phase. As a result of applying this approach, early iteration reviews/demos often show the ability of the solution to support nonfunctional requirements in addition to, or instead of, functional requirements. For this reason, it is important that architecture-savvy stakeholders are given the opportunity to participate in these milestone reviews.
  • Continued viability. An optional milestone to include in your release schedule is related to project viability. At certain times during a project, stakeholders may request a checkpoint to ensure that the team is working toward the vision agreed to at the end of Inception. Scheduling these milestones ensures that stakeholders are aware of key dates wherein they should get together with the team to assess the project status and agree to changes if necessary. These changes could include anything such as funding levels, team makeup, scope, risk assessment, or even potentially canceling the project. There could be several of these milestones on a long-running project. However, instead of having this milestone review, the real solution is to release into production more often—actual usage, or lack thereof, will provide a very clear indication of whether your solution is viable.
  • Sufficient functionality. While it is worthwhile pursuing a goal of a consumable solution (what Scrum calls a potentially shippable increment) at the end of each iteration, it is more common to require a number of iterations of Construction before the team has implemented enough functionality to deploy. While this is sometimes referred to as a minimum viable product (MVP), that is not technically accurate because, classically, an MVP is meant to test the viability of a product rather than to indicate minimal deployable functionality. More accurate terms for this milestone would be “minimum business increment (MBI)” or “minimal marketable release (MMR).” An MBI comprises one or more minimal marketable features (MMFs), and an MMF provides a positive outcome to the end users of our solution. An outcome may need to be implemented via several user stories. For example, searching for an item on an e-commerce system adds no value to an end user if they cannot also add the found items to their shopping cart. The sufficient functionality milestone is reached at the end of the Construction phase when an MBI is available and the cost of transitioning the release to stakeholders is justified. As an example, while an increment of a consumable solution may be available with every two-week iteration, it may take several weeks to actually deploy it in a high-compliance environment, so the cost of deployment may not be justified until a greater amount of functionality is completed.
  • Production ready. Once sufficient functionality has been developed and tested, transition-related activities such as data conversions, final acceptance testing, production, and support-related documentation normally need to be completed. Ideally, much of the work has been done continuously during Construction as part of completing each increment of functionality. At some point a decision needs to be made that the solution is ready for production, which is the purpose of this milestone. The project-based life cycles include a Transition phase where the Production Ready milestone is typically implemented as a review. The two continuous delivery life cycles, on the other hand, have a fully automated transition/release activity where this milestone is addressed programmatically—typically the solution must pass automated regression testing and the automated analysis tools must determine that the solution is of sufficient quality. 
  • Delighted stakeholders. Governance bodies and other stakeholders obviously like to know when the initiative is officially over so that they can begin another release or direct funds elsewhere. The initiative doesn't end when the solution is deployed. With projects, there are often closeout activities such as training, deployment tuning, support handoffs, post-implementation reviews, or even warranty periods before the solution is considered complete. One of the principles of DA is Delight Customers, which suggests that merely “satisfied” customers is setting the bar too low. The implication is that we need to verify whether we’ve delighted our stakeholders, typically through collection and analysis of appropriate metrics. 
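In the two continuous delivery life cycles the Production Ready milestone is addressed programmatically rather than via a review. A minimal sketch of what such a pipeline gate might look like follows; the check names, the 80% coverage threshold, and the function itself are all invented for illustration, not part of the DA tool kit:

```python
# Hypothetical sketch of a programmatic "Production ready" check in a
# continuous delivery pipeline: the release proceeds only if the automated
# regression suite passes and the automated analysis quality gates are met.
# All names and thresholds here are invented assumptions.

def production_ready(regression_passed, coverage_pct, critical_issues):
    """Gate the release on risk, not on artifacts or sign-off meetings."""
    checks = {
        "regression suite passed": regression_passed,
        "coverage >= 80%": coverage_pct >= 80.0,
        "no critical analysis issues": critical_issues == 0,
    }
    # Collect the names of any checks that failed so the team can see
    # exactly why the milestone was not met.
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

ok, failed = production_ready(regression_passed=True, coverage_pct=84.2,
                              critical_issues=1)
print(ok, failed)  # blocked by the outstanding critical issue
```

The point of the sketch is that the milestone remains risk-based: each check maps to a risk, and the gate reports which risk is still outstanding.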

 


Posted by Scott Ambler on: November 16, 2020 11:56 AM | Permalink | Comments (3)

Apply Consistent Metrics Categories Across an Agile Portfolio

Metrics 

A common question that we get from customers who are new to Disciplined Agile (DA) is “How do you roll up metrics from solution delivery teams into a portfolio dashboard?”  A more interesting question is how you do this when the teams are working in different ways.  Remember, DA teams choose their way of working (WoW) because Context Counts and Choice is Good.  Even more interesting is the question “How do you roll up team metrics when you still have some traditional teams as well as some agile/lean teams?”  In this blog we answer these questions one at a time, in order.

Note: We’re going to talk in terms of a single portfolio in this article, but the strategies we describe can also be applied at the program level (a large team of teams), with the program-level metrics then rolled up further to higher levels.

 

How Do You Roll Up Agile Team Metrics Into a Portfolio Dashboard?

Pretty much the same way you roll up metrics from traditional teams.  There tend to be several potential challenges to doing this, challenges which non-agile teams also face:

  1. You can only roll up metrics with similar units. You do need to be careful with some measures, such as team velocity, because the units vary across teams.  It is possible to enforce a common way to measure velocity across teams, but this tends to be more effort than it’s worth in practice. Sometimes there are metrics which you think you should be able to roll up but you discover (hopefully) that you really shouldn’t.  One example of this is number of defects by severity.  You can roll this metric up when the severity of a defect is determined in a consistent manner across teams, but this isn’t always the case in practice.
  2. Sometimes the math gets a bit complex.  Rolling up metrics isn’t always based on simple addition.  Many times you will need to weight metrics by time, size, financial impact, or combinations thereof.  Interestingly, although some metrics can’t be rolled up because they are measured in different units, you can often roll up the trends of those metrics.  For example, acceleration is the change in velocity of a team.  Given an appropriate weighting formula you can roll up an average acceleration figure across teams.
  3. Some people believe you can only roll up the same metric. Basically, when a common metric is captured across teams (for example, Net Promoter Score (NPS) or cyclomatic complexity) then you can easily (albeit with some “complex” math in some cases) roll them up to a program or portfolio level.  In the Govern Delivery Team process goal we refer to this strategy as consistent metrics, an option of the Provide Transparency decision point. This strategy works well when teams actually collect the same metrics, but when teams choose their WoW this isn’t always the case.
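As a sketch of the weighting idea in point 2, here is how a size-weighted portfolio acceleration figure might be computed. The team names, sizes, and acceleration percentages are invented for illustration; a real weighting formula would be agreed with your governance body:

```python
# Hypothetical sketch: rolling up per-team acceleration (the change in
# velocity, expressed as a unit-free percentage per iteration) into a single
# portfolio figure, weighting each team by its size. All data is invented.

def weighted_rollup(teams):
    """Return the size-weighted average of each team's acceleration.

    Velocity units differ across teams, so velocity itself can't be rolled
    up; acceleration as a percentage change per iteration is unit-free,
    so its trend can be.
    """
    total_weight = sum(t["size"] for t in teams)
    return sum(t["acceleration_pct"] * t["size"] for t in teams) / total_weight

teams = [
    {"name": "Data Warehouse", "size": 8,  "acceleration_pct": 2.5},
    {"name": "Mobile",         "size": 5,  "acceleration_pct": -1.0},
    {"name": "Package Impl.",  "size": 12, "acceleration_pct": 0.5},
]

print(f"Portfolio acceleration: {weighted_rollup(teams):.2f}% per iteration")
```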

 

How Do You Roll Up Agile Team Metrics Into a Portfolio Dashboard When the Teams Choose Their WoW, and it’s Different For Each Team?

When a team is allowed to choose its way of working (WoW), or “own their own process,” the team will often choose to measure itself in a manner that is appropriate to its WoW. This makes a lot of sense because to improve your WoW you will want to experiment with techniques, measure their effectiveness for your team within your current context, and then adopt the techniques that work best for you.  So teams will need to have metrics in place that provide them with insight into how well they are working, and because each team is unique the set of metrics they collect will vary by team.  For example, in Figure 1 below we see that the Data Warehouse (DW) team has decided to collect a different set of metrics to measure stakeholder satisfaction than the Mobile Development team.  The DW team needs to determine which reports are being run by their end users, and more importantly they need to identify new reports that provide valuable information to end users – this is why they have measures for reports run (to measure usage) and NPS (to measure satisfaction).  The Mobile team, on the other hand, needs to attract and retain users, so they measure things like session length and time in app to determine usage, and user retention and NPS to measure satisfaction.
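NPS, used by both teams above as a satisfaction measure, is calculated from 0-10 survey responses: the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 to 6). A minimal sketch, with invented sample scores:

```python
# Net Promoter Score (NPS): percentage of promoters (scores 9-10) minus
# percentage of detractors (scores 0-6), from 0-10 survey responses.
# The sample scores below are invented for illustration.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

scores = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(nps(scores))  # 5 promoters, 2 detractors out of 10 -> NPS 30
```

Because NPS is unit-free and consistently defined, it is one of the few metrics that can be rolled up directly across very different teams.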

Figure 1. Applying consistent metrics categories across disparate teams (click on it for a larger version).

Agile metrics categories

Furthermore, the nature of the problem that a team faces will also motivate them to choose metrics that are appropriate for them. In Figure 1 we see that each team has a different set of quality metrics: the DW team measures data quality, the mobile team measures code quality, and the package implementation team measures user acceptance test (UAT) results. Although production incidents and automated test coverage are measured by all three teams, the remaining metrics are unique.

The point is that instead of following the consistent metrics practice across teams by insisting that each team collects the same set of metrics, it is better to ask for consistent metric categories across teams. So instead of saying “thou shalt collect metrics X, Y, and Z” we say “thou shalt collect metrics that explore Category A, Category B, and Category C.” As you can see in Figure 1, each team is asked to collect quality metrics, time-to-market metrics, and stakeholder satisfaction metrics, but it is left up to them which metrics they will choose to collect. The important point is that they need to collect sufficient metrics in each category to provide insight into how well the team addresses it. This enables the teams to be flexible in their approach and collect metrics that are meaningful for them, while providing the governance people within our organization with the information that they need to guide the teams effectively.

So how do you roll up the metrics when they’re not consistent across teams?  Each team is responsible for taking the metrics that they collect in each category and calculating a score for that category.  It is likely that a team will need to work with the governance body to develop this calculation.  For example, in Figure 2 we see that each team has a unique dashboard for their team metrics, yet at the portfolio level the metrics are rolled up into a stoplight status scorecard for each category (Green = Good, Yellow = Questionable, Red = Problem). Calculating a stoplight value is one approach; you could get more sophisticated and calculate a numerical score if you like.  This is something the governance body would need to decide upon and then work with teams to implement.
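A minimal sketch of this roll-up follows. The teams, category scores, and stoplight thresholds are all invented for illustration; in practice each team's score calculation and the thresholds would be agreed with the governance body:

```python
# Hypothetical sketch of the "consistent categories" roll-up: each team
# converts its own metrics into a 0-100 score per shared category (the
# formula is team-specific), and the portfolio dashboard maps those scores
# to stoplight colors. Thresholds and data are invented assumptions.

def stoplight(score):
    """Map a 0-100 category score to a stoplight status."""
    if score >= 80:
        return "green"
    if score >= 60:
        return "yellow"
    return "red"

# Each team reports a score per shared category, however it was calculated.
portfolio = {
    "Data Warehouse": {"quality": 85, "time_to_market": 70, "satisfaction": 90},
    "Mobile":         {"quality": 55, "time_to_market": 88, "satisfaction": 75},
}

dashboard = {
    team: {category: stoplight(score) for category, score in scores.items()}
    for team, scores in portfolio.items()
}
print(dashboard["Mobile"]["quality"])  # a score below 60 shows as red
```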

Figure 2. Rolling up metrics categories (click on it for a larger version).

Agile metrics - portfolio dashboard

 

The portfolio dashboard in Figure 2 also includes a heat map indicating the overall status of each team (using green, yellow, and red again) and the size of the effort (indicated by the size of the circle). Anyone looking at the portfolio dashboard should be able to click on one of the circles or team stoplights and be taken to the dashboard for that specific team. The status value for the heat map would be calculated consistently for each team based on the category statuses for that team – this is a calculation that the governance body would need to develop and then implement.  The size of the effort would likely come from a financial reporting system or perhaps your people management systems.
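One simple, conservative convention for such a heat-map calculation is to report the worst of a team's category statuses as its overall status. This is only an illustration of the idea; the actual formula is something the governance body would define:

```python
# Hypothetical sketch of a heat-map status calculation: a team's overall
# color is the worst of its category statuses. The severity ordering is a
# common convention, not something prescribed by Disciplined Agile.

SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def overall_status(category_statuses):
    """Return the most severe status among a team's category statuses."""
    return max(category_statuses, key=lambda color: SEVERITY[color])

print(overall_status(["green", "yellow", "green"]))  # yellow
```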

 

How Do You Roll Up Team Metrics When Some Teams Are Still Traditional?

With a consistent categories approach it doesn’t really matter what paradigm the team is following.  You simply allow them to collect whatever metrics are appropriate for their situation within each category and require them to develop the calculation to roll the metrics up accordingly.  If they can’t come up with a reasonable calculation then the worst case would be for the Team Lead (or Project Manager in the case of a traditional team) to manually indicate/enter the status value for each category.

 

Parting Thoughts

For the consistent categories strategy to work the governance people need to be able to look at the dashboard for a team, which will have a unique collection of widgets on it, and be able to understand what the dashboard indicates. This will require some knowledge and sophistication from our governance people, which isn’t unreasonable to ask for in our opinion. Effective leaders know that metrics only provide insight but that they shouldn’t manage by the numbers. Instead they should follow the lean concept of “gemba” and go see what is happening in the team, collaborating with them to help the team understand and overcome any challenges they may face.

 


Posted by Scott Ambler on: September 04, 2018 12:19 PM | Permalink | Comments (0)

Strategies for Tracking Time on Agile Teams

Time Tracking

In Time Tracking and Agile Software Development we overviewed why teams should consider tracking their time.  Primary reasons include:

  • You’re billing your customer by the hour
  • Your organization wants to account for CapEx/OpEx
  • Your organization wants to take advantage of tax credits (typically for R&D work)

A secondary reason to track time is that the team wants to measure where they are spending their time so as to target potential areas to improve.  This is more of a side benefit than anything else – if this were your only reason to track time you’d be better off simply discussing these sorts of issues in your retrospectives.  But if you’re already tracking time, then running a quick report to provide the team with intel likely makes sense for you.

So what are your options for recording time?  Potential strategies, which are compared in the following table, include:

  1. Automated report from an agile management tool. The basic idea is to extract data from an agile management tool (JIRA, TFS, LeanKit, …) and load it into your time tracking system.
  2. Manual input by team members. Each team member, typically once a week, inputs their time into the time tracking tool.
  3. Manual input by the Team Lead. The Team Lead (ScrumMaster) inputs the time for their team into the time tracking tool.
  4. Manual input by a Project Manager/Coordinator. A PM or Project Coordinator, often in a support role to the team, inputs the time of team.
  5. Don’t track time at all. ‘Nuff said.

Table: Comparing options for tracking time.

1. Automated report from an agile management tool

Advantages:
  • Very efficient because it doesn’t require ongoing data input
  • Sufficient for CapEx/OpEx purposes
  • Sufficient for customer billing when the billing units are by the day (or greater)

Disadvantages:
  • Requires a bit of development work to feed data from your agile management tool into your time tracking system
  • May motivate the team to start treating the agile management tool like a time tracking tool (which often negates the value of the management tool)
  • Often requires a bit of (programmatic) fudging of the data to calculate the time not captured in the tool (such as coordination meetings, demos, retrospectives, …)
  • May require a bit of negotiation with your organization’s auditors (if any)
  • Only an option for teams using agile management tools
  • Works well only for teams that are working in a fairly consistent manner (i.e. mature teams that have gelled)

2. Manual input by team members

Advantages:
  • Potentially the most accurate approach
  • Sufficient for CapEx/OpEx, tax credits, and customer billing

Disadvantages:
  • Team members often perceive this as an overhead
  • People will be motivated to input what they believe management wants, particularly if any sort of rewards or punishments are thought to be connected
  • Potential for significant expense across the organization (a few minutes per person per week starts to add up) if this gets too detailed or complicated
  • For people working on multiple teams (a questionable idea anyway) time tracking often becomes onerous

3. Manual input by the Team Lead

Advantages:
  • Shifts the data input burden away from the team
  • Sufficient for CapEx/OpEx and tax credits
  • Likely sufficient for customer billing

Disadvantages:
  • Not as accurate as other strategies
  • Takes the Team Lead away from leadership tasks
  • Requires the Team Lead to know what is going on within the team (which frankly should be a given)

4. Manual input by a Project Manager/Coordinator

Advantages:
  • Same as manual input by the Team Lead

Disadvantages:
  • Not as accurate as other strategies
  • Likely requires the PM to interview/badger team members to find out what they did during the week
  • Little better than “make work” for the PM

5. Don’t track time at all

Advantages:
  • No overhead for the team

Disadvantages:
  • Your organization may be losing out on tax credits
This blog posting was motivated by a conversation that I had with Stacey Vetzal on Twitter.


Posted by Scott Ambler on: May 29, 2017 06:57 AM | Permalink | Comments (0)

Should You Govern Agile Teams Via a Traditional Strategy?

Categories: governance, Surveys

The quick answer is no, that’s an incredibly bad idea.

We ran a study in February 2017, the 2017 Agile Governance survey, to explore the issues around governance of agile teams. This study found that the majority of agile teams were in fact being governed in some way, that some agile teams were being governed in an agile or lightweight manner and some agile teams in a traditional manner.  See the blog Are Agile Teams Being Governed? for a summary of those results.

The study also examined the effect of governance on agile teams, exploring the perceived effect of the organization’s governance strategy on team productivity, on the quality delivered, on IT investment, and on team morale.  It also explored how heavy the governance strategy was perceived to be and how well it was focused on the delivery of business value. The following figure summarizes the results of these questions.

Governance Effectiveness with Agile Teams

Here are our conclusions given these results:

  1. Agile governance helps agile teams. There is a clear correlation between an agile approach to governing agile teams and positive results such as improved productivity, increased quality, IT investment spent wisely, and improved team morale. This is what we believe the goal of governance to be: to help the people being governed be more effective and successful.
  2. Traditional governance hinders agile teams.  There is a clear correlation between traditional approaches to governing agile teams and reduced team productivity, reduced quality of output, wasted IT investment, and decreased team morale.  We believe that these results are the exact opposite of what you hope to achieve with your governance strategy.
  3. Agile teams should be governed in an agile manner.  This follows directly from the previous two conclusions.  It should come as no surprise that your governance strategy should be well aligned with what is being governed.
  4. Traditional governance strategies likely hinder traditional teams too.  We didn’t look into this issue directly, but our experience has been that traditional governance tends to be more of a hindrance than a help to traditional teams as well.

When we work with organizations to help them to adopt agile ways of working, we often find that they are running into several common challenges when it comes to IT governance:

  1. They have both agile teams and traditional software teams.  This is because it’s a multi-modal world: you will have some teams taking a traditional approach, some an agile approach, some a lean approach, and some that are even skilled enough for continuous delivery.  Each team will follow the life cycle that makes the most sense for them, and as a result each team should be governed by the approach that best suits the way that they are working.  To do anything different would be to hinder the teams, and that isn’t what good governance should be about.
  2. There is a desire for a single approach to governing software teams. This makes sense on the surface because it would simplify your overall governance strategy, thereby making things easier for the people doing the governing.  But, as we’ve learned, this results in negative effects in practice.  Your governance strategy must be flexible enough to support the range of teams being governed.
  3. The governance team is struggling to understand agile.  Your executives and middle management need education and coaching in agile and lean just like the people on your software team do.  It is naive to expect your governance people to devise a governance strategy for agile when they don’t really understand the implications of agile to begin with.

For agile to succeed in your organization the way that you approach IT must evolve to enable and support it, and this includes your governance strategy.  Reach out to us if you would like some help in addressing this important issue.


Posted by Scott Ambler on: April 08, 2017 07:11 AM | Permalink | Comments (0)

Are Agile Teams Being Governed?

Categories: governance, Surveys

For the majority of teams the answer is yes.  We ran a survey in February 2017, the 2017 Agile Governance survey, to explore the issues around governance of agile teams.  As you can see in the following diagram, 78% of respondents indicated that yes, their agile teams were in fact being governed in some manner.

Agile governance rates

We also asked people about the approach to governing agile teams that their organization followed.  As you can see in the following diagram, a bit more than a third of respondents indicated that the governance strategy was lightweight or agile in nature.  Roughly the same proportion indicated that their agile teams had a more traditional approach to governance applied to them, and one quarter said their governance approach was neither helping nor hindering their teams.

How are agile teams being governed?

Governance tends to be a swear word for many agilists, who will tell you that governance is nothing more than useless bureaucracy.  Sadly, in many organizations this seems to be the case.  In the next blog in this series we will compare the effectiveness of agile and traditional strategies for governing agile teams.


Posted by Scott Ambler on: April 04, 2017 07:32 AM | Permalink | Comments (0)