This blog posting has been replaced with Risk-Based Milestones.
You may also be interested in How Disciplined Agile Teams Address Risk.
A common question that we get from customers who are new to Disciplined Agile (DA) is how do you aggregate, or "roll up", metrics from agile teams into a portfolio dashboard? A more interesting question is how do you do this when the teams are working in different ways? Remember, DA teams choose their way of working (WoW) because Context Counts and Choice is Good. Even more interesting is the question “How do you aggregate team metrics when you still have some traditional teams as well as some agile/lean teams?” In this blog we answer these questions one at a time, in order.
Note: We’re going to talk in terms of a single portfolio in this article, but the strategies we describe can apply at the program (a large team of teams) too, and then the program-level metrics are further rolled up to higher levels.
How Do You Aggregate Agile Team Metrics Into a Portfolio Dashboard?
Pretty much the same way you aggregate metrics from traditional teams. There tend to be several potential challenges in doing this, challenges that non-agile teams also face:
How Do You Aggregate Agile Team Metrics Into a Portfolio Dashboard When the Teams Choose Their WoW, and it’s Different For Each Team?
When a team is allowed to choose its way of working (WoW), or “own their own process,” the team will often choose to measure itself in a manner that is appropriate to its WoW. This makes a lot of sense: to improve your WoW you will want to experiment with techniques, measure their effectiveness for your team within your current context, and then adopt the techniques that work best for you. So teams will need to have metrics in place that provide them with insight into how well they are working, and because each team is unique the set of metrics they collect will vary by team. For example, in Figure 1 below we see that the Data Warehouse (DW) team has decided to collect a different set of metrics to measure stakeholder satisfaction than the Mobile Development team. The DW team needs to determine which reports are being run by their end users, and more importantly they need to identify new reports that provide valuable information to end users – this is why they have measures for Reports run (to measure usage) and NPS (to measure satisfaction). The Mobile team, on the other hand, needs to attract and retain users, so they measure things like session length and time in app to determine usage, and user retention and NPS to measure satisfaction.
Figure 1. Applying consistent metrics categories across disparate teams (click on it for a larger version).
Furthermore, the nature of the problem that a team faces will also motivate them to choose metrics that are appropriate for them. In Figure 1 we see that each team has a different set of quality metrics: the DW team measures data quality, the mobile team measures code quality, and the package implementation team measures user acceptance test (UAT) results. Although production incidents and automated test coverage are measured by all three teams, the remaining metrics are unique.
The point is that instead of insisting on consistent metrics across teams, where each team must collect the same set of metrics, it is better to ask for consistent metric categories across teams. So instead of saying “thou shalt collect metrics X, Y, and Z” we instead say “thou shalt collect metrics that explore Category A, Category B, and Category C.” As you can see in Figure 1, each team is asked to collect quality metrics, time to market metrics, and stakeholder satisfaction metrics, but it is left up to them which metrics they will choose to collect. The important point is that they need to collect sufficient metrics in each category to provide insight into how well the team addresses it. This enables the teams to be flexible in their approach and collect metrics that are meaningful to them, while providing the governance people within our organization the information that they need to guide the teams effectively.
So how do you aggregate the metrics when they’re not consistent across teams? Each team is responsible for taking the metrics that they collect in each category and calculating a score for that category. It is likely that a team will need to work with the governance body to develop this calculation. For example, in Figure 2 we see that each team has a unique dashboard for their team metrics, yet at the portfolio level the metrics are rolled up into a stoplight status scorecard for each category (Green = Good, Yellow = Questionable, Red = Problem). Calculating a stoplight value is one approach; you could get more sophisticated and calculate a numerical score if you like. This is something the governance body would need to decide upon and then work with teams to implement.
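To make the roll-up concrete, here is a minimal sketch of the idea: each team defines its own function that combines its unique metrics into a normalized category score, and the portfolio level maps that score to a stoplight value. All team names, metric names, weights, and thresholds below are illustrative assumptions, not prescribed values – the actual calculation is something each team would develop with the governance body.

```python
# Hypothetical sketch of rolling team-specific metrics up to a
# portfolio stoplight. All metrics, weights, and thresholds are
# illustrative assumptions.

def to_stoplight(score, yellow=0.6, green=0.8):
    """Map a normalized 0..1 category score to a stoplight value."""
    if score >= green:
        return "Green"
    if score >= yellow:
        return "Yellow"
    return "Red"

# Each team defines how its own metrics combine into a 0..1 score.
# NPS ranges -100..100, so it is rescaled to 0..1 here.
def dw_satisfaction(m):
    return 0.5 * min(m["reports_run"] / 200, 1.0) + 0.5 * (m["nps"] + 100) / 200

def mobile_satisfaction(m):
    return 0.5 * m["retention_rate"] + 0.5 * (m["nps"] + 100) / 200

teams = {
    "Data Warehouse": (dw_satisfaction, {"reports_run": 180, "nps": 40}),
    "Mobile": (mobile_satisfaction, {"retention_rate": 0.55, "nps": 10}),
}

# Portfolio view: one stoplight per team for this category.
portfolio = {name: to_stoplight(score_fn(metrics))
             for name, (score_fn, metrics) in teams.items()}
print(portfolio)
```

The key design point is that the scoring function is team-specific while the stoplight mapping is shared, which is exactly what lets disparate metrics roll up into a consistent portfolio view.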
Figure 2. Rolling up metrics categories (click on it for a larger version).
The portfolio dashboard in Figure 2 includes a heat map indicating the overall status of each team (using green, yellow, and red again) and the size of the effort (indicated by the size of the circle). Anyone looking at the portfolio dashboard should be able to click on one of the circles or team stoplights and be taken to the dashboard for that specific team. The status value for the heat map would be calculated consistently for each team based on the category statuses for that team – this is a calculation that the governance body would need to develop and then implement. The size of the effort would likely come from a financial reporting system or perhaps your people management systems.
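One simple, hypothetical way to calculate that heat-map status from a team's category stoplights is a "worst value wins" rule: if any category is Red the team shows Red, otherwise any Yellow makes it Yellow. This rule is an assumption for illustration; the governance body would choose the actual calculation.

```python
# Hypothetical "worst value wins" roll-up of category stoplights into
# one overall team status for the portfolio heat map.

SEVERITY = {"Green": 0, "Yellow": 1, "Red": 2}

def overall_status(category_statuses):
    """Overall team status is the worst of its category statuses."""
    return max(category_statuses, key=lambda status: SEVERITY[status])

# e.g. a team with one Yellow category shows Yellow overall:
print(overall_status(["Green", "Yellow", "Green"]))  # Yellow
```

A more forgiving variant could average the severity values instead, which would tolerate a single weak category; the point is only that the calculation be applied consistently across teams.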
How Do You Aggregate Team Metrics When Some Teams Are Still Traditional?
With a consistent categories approach it doesn’t really matter what paradigm the team is following. You simply allow them to collect whatever metrics are appropriate for their situation within each category and require them to develop the calculation to roll the metrics up accordingly. If they can’t come up with a reasonable calculation then the worst case would be for the Team Lead (or Project Manager in the case of a traditional team) to manually indicate/enter the status value for each category.
For the consistent categories strategy to work, the governance people need to be able to look at the dashboard for a team, which will have a unique collection of widgets on it, and understand what the dashboard indicates. This will require some knowledge and sophistication from our governance people, which isn’t unreasonable to ask for in our opinion. Effective leaders know that metrics only provide insight and that they shouldn’t manage by the numbers. Instead they should follow the lean concept of “gemba” and go see what is happening in the team, collaborating with them to help the team understand and overcome any challenges they may face.
In Time Tracking and Agile Software Development we overviewed why teams should consider tracking their time. Primary reasons include:
A secondary reason to track time is that the team wants to measure where they are spending it so as to target potential areas to improve. This is more of a side benefit than anything else – if this were your only reason to track time you’d be better off simply discussing these sorts of issues in your retrospectives. But if you are already tracking time, then running a quick report to provide the team with this insight likely makes sense for you.
So what are your options for recording time? Potential strategies, which are compared in the following table, include:
Table: Comparing options for tracking time.
This blog posting was motivated by a conversation that I had with Stacey Vetzal on Twitter.
The quick answer is no, that’s an incredibly bad idea.
We ran a study in February 2017, the 2017 Agile Governance survey, to explore the issues around governance of agile teams. This study found that the majority of agile teams were in fact being governed in some way, that some agile teams were being governed in an agile or lightweight manner and some agile teams in a traditional manner. See the blog Are Agile Teams Being Governed? for a summary of those results.
The study also examined the effect of governance on agile teams, exploring the perceived effect of the organization’s governance strategy on team productivity, on the quality delivered, on IT investment, and on team morale. It also explored how heavy the governance strategy was perceived to be and how well it was focused on the delivery of business value. The following figure summarizes the results of these questions.
Here are our conclusions given these results:
When we work with organizations to help them to adopt agile ways of working, we often find that they are running into several common challenges when it comes to IT governance:
For agile to succeed in your organization the way that you approach IT must evolve to enable and support it, and this includes your governance strategy. Reach out to us if you would like some help in addressing this important issue.
For the majority of teams the answer is yes. We ran a survey in February 2017, the 2017 Agile Governance survey, to explore the issues around governance of agile teams. As you can see in the following diagram, 78% of respondents indicated that yes, their agile teams were in fact being governed in some manner.
We also asked people about the approach to governing agile teams that their organization followed. As you can see in the following diagram, a bit more than a third of respondents indicated that the governance strategy was lightweight or agile in nature. Roughly the same indicated that their agile teams had a more traditional approach to governance applied to them, and one quarter said their governance approach was neither helping nor hindering their teams.
Governance tends to be a swear word for many agilists, and they will tell you that governance is nothing more than useless bureaucracy. Sadly, in many organizations this seems to be the case. In the next blog in this series we will compare the effectiveness of agile and traditional strategies for governing agile teams.