
Measuring What Counts When Context Counts

From the Disciplined Agile Applied Blog
This blog explores pragmatic agile and lean strategies for enterprise-class contexts.


Categories: GQM, Metrics, OKRs


In my blog posting Disciplined Agile Principle: Optimize Flow, I wrote the following:

Measure what counts. When it comes to measurement, context counts. What are you hoping to improve? Quality? Time to market? Staff morale? Customer satisfaction? Combinations thereof? Every person, team, and organization has their own improvement priorities, and their own ways of working, so they will have their own set of measures that they gather to provide insight into how they’re doing and more importantly how to proceed. And these measures evolve over time as their situation and priorities evolve. The implication is that your measurement strategy must be flexible and fit for purpose, and it will vary across teams.

Based on that I received the following question:

Regarding number 7 above: Measure what counts. I think that's really important. How would you handle the need for some larger organizations to compare teams, directorates, divisions etc. with uniform metrics? Each effort is so different and will require different metrics.

This is a great question that deserves a fairly detailed answer, so I thought I'd capture it as a new blog posting.  Here are my thoughts.

  1. Using metrics to compare teams, directorates, and so on is a spectacularly bad idea. The problem is that when you start using metrics to compare people or teams you motivate them to game the measures (to make the numbers appear better than they are). Once people start to game their measures your metrics program falls apart because you're now receiving inaccurate data, preventing you from making informed decisions. Even if you're not using metrics to compare people, all it takes is the perception that you might be doing so and they'll start to game the measures where they can. Luckily many measures are now collected in an automated manner from our tools, making them much harder to game. But you can't measure everything in an automated manner, and any manually collected metric is at risk of being gamed, so sometimes you just need to accept the risks involved. BTW, using metrics to compare people/teams has been considered a bad practice in the metrics world for decades.
  2. Use metrics to inform decisions.  I'm a firm believer in using metrics to make better-informed decisions, but that only works when the quality of the data is high.  So focus your metrics program on providing you with operational intelligence to enable your staff to better manage themselves.  Furthermore, metrics can and should be used to provide senior management with key information to govern effectively.  This implies that you need to make at least a subset of a team's metrics visible to other areas in your organization, often rolling them up into some sort of program or portfolio dashboard. We can choose to be smart about that.
  3. Applying uniform metrics across disparate teams is a spectacularly bad idea. The DA principle Context Counts tells us that to be effective, different teams in different situations will choose to work in different ways. To enforce the same way of working (WoW) on disparate teams is guaranteed to inject waste (hence cost and risk) into your process. Either teams will work in a manner that isn't effective for the situation that they face, or they will work in an appropriate manner PLUS perform additional work to make it appear that they are following the "one official process." Agile enterprises allow, and better yet enable, teams to choose their own WoW and evolve it over time as their situation evolves (and it always does).
  4. You can apply common categories or improvement goals across teams. Having said all this, there is always a desire to monitor teams in a reasonably consistent manner. Instead of inflicting a common set of metrics across all your teams, tell them that you want them to measure a common set of issues that you hope to improve, and provide a mechanism to roll up a score in a common way. For example, say your goal is to improve customer satisfaction. You could inflict a common metric across your teams, say net promoter score (NPS), and have everyone measure that. That will make sense for some teams but be misaligned or simply inappropriate for others. There are many ways to measure customer satisfaction, NPS being one of them, and each one works well in some situations and not so well in others. Recognizing that one metrics strategy won't work for all teams, negotiate with them that you want to see X% improvement in customer satisfaction over a period of Y and leave it up to them to measure appropriately. Setting goals/objectives is a fundamental concept for metrics strategies such as Goal Question Metric (GQM) and Objectives and Key Results (OKRs). If you want to roll metrics up into a higher-level dashboard or report then you can also ask the teams to have a strategy to convert their context-sensitive metric(s) into a numerical score (say 1 to 10) which can then be rolled up and compared. Just joking, comparison is still a bad idea, I'm just seeing if you're paying attention. (See the sketch after this list for what such a roll-up might look like.) The downside of this approach is that it requires a bit more sophistication, particularly on the part of anyone in a governance position who wants to drill down into the context-specific dashboards of disparate teams. The benefit is that teams can be given the freedom to measure what counts to them, providing them with the intelligence that they require to make better-informed decisions. See the article Apply Consistent Metric Categories Across an Agile Portfolio for a detailed description of this strategy.
  5. Regulatory concerns may force you to collect a handful of metrics in a uniform way. It happens. This is typically true of financial metrics, at least within a given geography. For example, your organization will need to identify a consistent interpretation of how to measure CAPEX/OPEX across teams, although I've also seen that vary a bit within organizations for very good reasons.
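
To make the roll-up idea in point 4 concrete, here's a minimal sketch in Python. The team names, raw metrics, and conversion rules are all hypothetical; the point is simply that each team measures customer satisfaction in whatever way suits its context and supplies its own rule for converting that measure into a common 1-to-10 score, which can then be aggregated onto a portfolio dashboard.

    from dataclasses import dataclass
    from statistics import mean
    from typing import Callable

    @dataclass
    class TeamMetric:
        team: str
        category: str          # shared improvement category, e.g. "customer satisfaction"
        raw_value: float       # the team's own context-sensitive metric (NPS, survey average, ...)
        to_score: Callable[[float], float]  # team-defined conversion to a 1-10 score

        def score(self) -> float:
            # Clamp so every team lands on the same 1-10 scale.
            return max(1.0, min(10.0, self.to_score(self.raw_value)))

    def portfolio_rollup(metrics: list[TeamMetric]) -> dict[str, float]:
        """Average the normalized scores per improvement category."""
        by_category: dict[str, list[float]] = {}
        for m in metrics:
            by_category.setdefault(m.category, []).append(m.score())
        return {category: round(mean(scores), 1) for category, scores in by_category.items()}

    # Hypothetical teams: Team X measures NPS (-100..100), Team Y averages a 1-5 survey.
    metrics = [
        TeamMetric("Team X", "customer satisfaction", 35.0, lambda nps: (nps + 100) / 200 * 9 + 1),
        TeamMetric("Team Y", "customer satisfaction", 4.2, lambda avg: (avg - 1) / 4 * 9 + 1),
    ]
    print(portfolio_rollup(metrics))  # {'customer satisfaction': 7.6}

Note that the dashboard only ever sees 1-to-10 scores per category, while each team keeps the context-sensitive metric that actually informs its decisions.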

To summarize, it's a really bad idea to inflict a common set of metrics across teams. A far better strategy is to have common improvement objectives across teams and allow them to address those objectives, and to measure their effectiveness in doing so, as they believe to be the most appropriate for their situation.

 

Posted on: October 22, 2019 11:59 PM

Comments (5)

Dear Scott,
Interesting approach to this topic. Thanks for sharing.
After reading: "A far better strategy is to have common improvement objectives across teams and allow them to address those goals, and to measure their effectiveness in doing so, as they believe to be the most appropriate for their situation."
In the end it is necessary to measure the performance of the company. How do you think the contribution of different teams to this performance should be measured?

Good article, Scott! #3 is somewhat troubling to me, as I do like to believe there are some valuable metrics which are universally applicable.

For example: features delivered/features used, lead time, stakeholder satisfaction

What team would not want to see those improving?

Luis, very good point. It is necessary to measure the performance of the company. If you're interested in measuring customer satisfaction, or quality, or whatever else is important to you, then you should find ways to measure that and then roll those measures up. So you tell team X to measure these things in a way that is appropriate for them and then to score themselves on each issue, which means you will likely need to define rules for scoring. You tell team Y to also measure and score themselves. And so on.

The point is that because teams X, Y, Z, ... are in different situations they will measure customer satisfaction, quality, and whatever else you're interested in in different ways. There may be some overlap. Many teams may calculate NPS as their customer satisfaction measure, some may use NPS as part of their measure of customer satisfaction, and some teams may not use it at all.

There is still opportunity for teams to game the way they calculate their overall score, so you'll need to govern that. I've found that setting some straightforward rules, being willing to consult with teams to get their approach in place, and then monitoring them over time to see that they don't fiddle too much with the calculation works well. If they're fiddling with the calculation a lot at the beginning that's not such a bad thing, because they're homing in on an approach that works. If you find they're fiddling with it over time then that's a sign that they might be gaming the system, so you need to look into what's going on.
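
As a rough illustration of that monitoring idea, here's a minimal sketch in Python. The 90-day settling-in period and the revision logs are assumptions for the example; the idea is simply to flag teams that keep revising their scoring calculation long after they should have homed in on an approach.

    from datetime import date

    SETTLING_IN_DAYS = 90  # assumed grace period while a team homes in on its approach

    def flag_possible_gaming(revisions: dict[str, list[date]]) -> list[str]:
        """Flag teams whose scoring calculation changed after the settling-in period."""
        flagged = []
        for team, dates in revisions.items():
            first = min(dates)
            late_changes = [d for d in dates if (d - first).days > SETTLING_IN_DAYS]
            if late_changes:
                flagged.append(team)
        return flagged

    # Hypothetical revision logs: when each team changed its scoring calculation.
    revisions = {
        "Team X": [date(2019, 1, 10), date(2019, 2, 1)],                    # early tuning only
        "Team Y": [date(2019, 1, 5), date(2019, 6, 20), date(2019, 9, 3)],  # keeps fiddling
    }
    print(flag_possible_gaming(revisions))  # ['Team Y']

A flagged team isn't necessarily gaming the system, of course; the flag is just a prompt to go look into what's going on.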

Kiron, yes there are some very common metrics as you point out. But they're always options. For example, many teams will measure features delivered/used (used is better). But other teams strive to measure value delivered/used (used is still better) which is a significantly different thing.

Lead time is also a valuable and common metric, but then again so is cycle time (a component of lead time).

Stakeholder satisfaction is more of a category than a metric. NPS is one way to measure that, but there are others. The Joey metric comes to mind, where you ask people "So, how you doing?" ;-)

Defect trends are a common metric on the quality side, although there are options there as well.

The point is that teams should be allowed to do what makes the most sense for their situation. If most, or even all, of them settle on NPS for customer satisfaction then so be it. But I've never seen that happen in practice; there's always variation when you're sophisticated enough to allow it. Context counts.

Very interesting, thanks for sharing.

