In my blog Disciplined Agile Principle: Optimize Flow I wrote the following:
Measure what counts. When it comes to measurement, context counts. What are you hoping to improve? Quality? Time to market? Staff morale? Customer satisfaction? Combinations thereof? Every person, team, and organization has their own improvement priorities, and their own ways of working, so they will have their own set of measures that they gather to provide insight into how they're doing and, more importantly, how to proceed. And these measures evolve over time as their situation and priorities evolve. The implication is that your measurement strategy must be flexible and fit for purpose, and it will vary across teams.
Based on that I received the following question:
Regarding number 7 above: Measure what counts. I think that's really important. How would you handle the need for some larger organizations to compare teams, directorates, divisions etc. with uniform metrics? Each effort is so different and will require different metrics.
This is a great question that deserves a fairly detailed answer, so I thought I'd capture it as a new blog posting. Here are my thoughts.
- Using metrics to compare teams, directorates, ... is a spectacularly bad idea. The problem is that when you start using metrics to compare people or teams, you motivate them to game the measures (to make the numbers appear better than they are). Once people start to game their measures, your metrics program falls apart because you're now receiving inaccurate data, preventing you from making informed decisions. Even if you're not using metrics to compare people, all it takes is the perception that you might be doing so and they'll start to game the measures where they can. Luckily, many measures are now collected automatically by our tools, making them much harder to game. But you can't measure everything in an automated manner, and any manually collected metric is at risk of being gamed, so sometimes you just need to accept the risks involved. BTW, using metrics to compare people/teams has been considered a bad practice in the metrics world for decades.
- Use metrics to inform decisions. I'm a firm believer in using metrics to make better-informed decisions, but that only works when the quality of the data is high. So focus your metrics program on providing you with operational intelligence to enable your staff to better manage themselves. Furthermore, metrics can and should be used to provide senior management with key information to govern effectively. This implies that you need to make at least a subset of a team's metrics visible to other areas in your organization, often rolling them up into some sort of program or portfolio dashboard. We can choose to be smart about that.
- Applying uniform metrics across disparate teams is a spectacularly bad idea. The DA principle Context Counts tells us that to be effective different teams in different situations will choose to work in different ways. To enforce the same way of working (WoW) on disparate teams is guaranteed to inject waste (hence cost and risk) into your process. Either teams will work in a manner that isn't effective for the situation that they face or they will work in an appropriate manner PLUS perform additional work to make it appear that they are following the "one official process." Agile enterprises allow, and better yet enable, teams to choose their own WoW and evolve it over time as their situation evolves (and it always does).
- You can apply common categories or improvement goals across teams. Having said all this, there is always a desire to monitor teams in a reasonably consistent manner. Instead of inflicting a common set of metrics on all your teams, tell them that you want them to measure a common set of issues that you hope to improve, and provide a mechanism to roll up a score in a common way. For example, say your goal is to improve customer satisfaction. You could inflict a common metric across your teams, say net promoter score (NPS), and have everyone measure that. That will make sense for some teams but be misaligned, or simply inappropriate, for others. There are many ways to measure customer satisfaction, NPS being one of them, and each works well in some situations and not so well in others. Recognizing that one metrics strategy won't work for all teams, negotiate with them that you want to see X% improvement in customer satisfaction over a period of Y, and leave it up to them to measure appropriately. Setting goals/objectives is a fundamental concept of metrics strategies such as Goal Question Metric (GQM) and Objectives and Key Results (OKRs). If you want to roll metrics up into a higher-level dashboard or report, you can also ask the teams to have a strategy to convert their context-sensitive metric(s) into a numerical score (say 1 to 10), which can then be rolled up and compared. Just joking, comparison is still a bad idea; I'm just seeing if you're paying attention. The downside of this approach is that it requires a bit more sophistication, particularly on the part of anyone in a governance position who wants to drill down into the context-specific dashboards of disparate teams. The benefit is that teams are given the freedom to measure what counts to them, providing them with the intelligence that they require to make better-informed decisions.
See the article Apply Consistent Metric Categories Across an Agile Portfolio for a detailed description of this strategy.
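To make the rollup idea concrete, here's a minimal sketch in Python. The team names, raw values, and linear conversion formulas are all illustrative assumptions on my part, not part of the DA tool kit; the point is simply that each team keeps its own context-sensitive metric and supplies its own conversion onto a shared 1-to-10 scale.

```python
def nps_to_score(nps):
    """Map a Net Promoter Score (-100..100) onto a 1..10 scale."""
    return 1 + (nps + 100) * 9 / 200

def csat_to_score(csat):
    """Map an average CSAT rating (1..5) onto a 1..10 scale."""
    return 1 + (csat - 1) * 9 / 4

# Hypothetical teams: each measures customer satisfaction in the way
# that fits its context, but supplies a conversion to the shared scale.
teams = [
    {"name": "Team A", "raw": 40,  "convert": nps_to_score},   # measures NPS
    {"name": "Team B", "raw": 4.2, "convert": csat_to_score},  # measures CSAT
]

# Per-team normalized scores, suitable for a portfolio dashboard.
scores = {t["name"]: t["convert"](t["raw"]) for t in teams}

# Rolled-up portfolio score (a simple average here; weighting is a
# governance decision, not something this sketch prescribes).
portfolio_score = sum(scores.values()) / len(scores)
```

In this sketch a team reporting an NPS of 40 and a team reporting a CSAT average of 4.2 both land in the 7-to-8 range on the shared scale, so they roll up cleanly, yet each team keeps gathering the metric that actually informs its own decisions.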
- Regulatory concerns may force you to collect a handful of metrics in a uniform way. It happens. This is typically true of financial metrics, at least within a given geography. For example, your organization will need to identify a consistent interpretation of how to measure CAPEX/OPEX across teams, although I've also seen that vary a bit within organizations for very good reasons.
To summarize, it's a really bad idea to inflict a common set of metrics across teams. A far better strategy is to set common improvement objectives across teams and allow each team to address those objectives, and to measure their effectiveness in doing so, in whatever way they believe is most appropriate for their situation.