A common question that we get from customers who are new to Disciplined Agile (DA) is how do you roll up metrics from solution delivery teams into a portfolio dashboard? A more interesting question is how do you do this when the teams are working in different ways? Remember, DA teams choose their way of working (WoW) because Context Counts and Choice is Good. Even more interesting is the question “How do you roll up team metrics when you still have some traditional teams as well as some agile/lean teams?” In this post we answer these questions in order.
Note: We’re going to talk in terms of a single portfolio in this article, but the strategies we describe can also be applied at the program level (a large team of teams), with the program-level metrics then rolled up further to higher levels.
How Do You Roll Up Agile Team Metrics Into a Portfolio Dashboard?
Pretty much the same way you roll up metrics from traditional teams. There tend to be several potential challenges to doing this, challenges that non-agile teams also face:
How Do You Roll Up Agile Team Metrics Into a Portfolio Dashboard When the Teams Choose Their WoW, and it’s Different For Each Team?
When a team is allowed to choose its way of working (WoW), or “own their own process,” the team will often choose to measure itself in a manner that is appropriate to its WoW. This makes a lot of sense: to improve your WoW you will want to experiment with techniques, measure their effectiveness for your team within your current context, and then adopt the techniques that work best for you. So teams will need to have metrics in place that provide them with insight into how well they are working, and because each team is unique the set of metrics they collect will vary by team. For example, in Figure 1 below we see that the Data Warehouse (DW) team has decided to collect a different set of metrics to measure stakeholder satisfaction than the Mobile Development team. The DW team needs to determine which reports are being run by their end users, and more importantly they need to identify new reports that provide valuable information to end users – this is why they have measures for reports run (to measure usage) and NPS (to measure satisfaction). The Mobile team, on the other hand, needs to attract and retain users, so they measure things like session length and time in app to determine usage, and user retention and NPS to measure satisfaction.
Figure 1. Applying consistent metrics categories across disparate teams.
Furthermore, the nature of the problem that a team faces will also motivate them to choose metrics that are appropriate for them. In Figure 1 we see that each team has a different set of quality metrics: the DW team measures data quality, the mobile team measures code quality, and the package implementation team measures user acceptance test (UAT) results. Although production incidents and automated test coverage are measured by all three teams, the remaining metrics are unique.
The point is that instead of insisting on consistent metrics across teams, where each team collects the same set of metrics, it is better to ask for consistent metric categories across teams. So instead of saying “thou shalt collect metrics X, Y, and Z” we instead say “thou shalt collect metrics that explore Category A, Category B, and Category C.” As you can see in Figure 1, each team is asked to collect quality metrics, time-to-market metrics, and stakeholder satisfaction metrics, but which metrics they collect is left up to them. What matters is that they collect sufficient metrics in each category to provide insight into how well the team is addressing it. This enables the teams to be flexible in their approach and collect metrics that are meaningful for them, while providing the governance people within your organization the information that they need to guide the teams effectively.
So how do you roll up the metrics when they’re not consistent across teams? Each team is responsible for taking the metrics that they collect in each category and calculating a score for that category. It is likely that a team will need to work with the governance body to develop this calculation. For example, in Figure 2 we see that each team has a unique dashboard for their team metrics, yet at the portfolio level the metrics are rolled up into a stoplight status for each category (green = good, yellow = questionable, red = problem). Calculating a stoplight value is one approach; you could get more sophisticated and calculate a numerical score if you like. This is something the governance body would need to decide upon and then work with teams to implement.
Figure 2. Rolling up metrics categories.
The portfolio dashboard in Figure 2 also includes a heat map indicating the overall status of each team (using green, yellow, and red again) and the size of each effort (indicated by the size of the circle). Anyone looking at the portfolio dashboard should be able to click on one of the circles or team stoplights and be taken to the dashboard for that specific team. The status value for the heat map would be calculated consistently for each team based on the category statuses for that team – this is a calculation that the governance body would need to develop and then implement. The size of the effort would likely come from a financial reporting system or perhaps your people management systems.
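To make this concrete, here is a minimal sketch in Python of how team-specific metrics might be collapsed into a stoplight status per category. Everything in it – the metric names, the assumption that each team normalizes its metrics onto a 0–1 scale, the simple averaging, and the 0.8/0.6 thresholds – is an illustrative assumption; the actual calculation is something your governance body would define with each team.

```python
# Illustrative roll-up of team-specific metrics into stoplight statuses.
# Metric names, normalization, thresholds, and averaging are assumptions,
# not part of the DA tool kit.

GREEN, YELLOW, RED = "green", "yellow", "red"

def score_to_stoplight(score: float) -> str:
    """Map a normalized 0..1 category score onto a stoplight status.
    The 0.8/0.6 thresholds are arbitrary; a governance body would set them."""
    if score >= 0.8:
        return GREEN
    if score >= 0.6:
        return YELLOW
    return RED

def category_status(metrics: dict[str, float]) -> str:
    """Average a team's normalized metrics for one category and map the
    result to a stoplight. Each team normalizes its own metrics (e.g.
    NPS / 100) before reporting, so different metrics can be combined."""
    return score_to_stoplight(sum(metrics.values()) / len(metrics))

# Each team reports different metrics under the same category names.
dw_team = {
    "stakeholder satisfaction": {"reports_run": 0.72, "nps": 0.65},
    "quality": {"data_quality": 0.90, "production_incidents": 0.85,
                "automated_test_coverage": 0.80},
}
mobile_team = {
    "stakeholder satisfaction": {"session_length": 0.55, "time_in_app": 0.60,
                                 "user_retention": 0.50, "nps": 0.58},
    "quality": {"code_quality": 0.82, "production_incidents": 0.90,
                "automated_test_coverage": 0.75},
}

for name, team in (("DW", dw_team), ("Mobile", mobile_team)):
    for category, metrics in team.items():
        print(f"{name} / {category}: {category_status(metrics)}")
```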
How Do You Roll Up Team Metrics When Some Teams Are Still Traditional?
With a consistent categories approach it doesn’t really matter what paradigm the team is following. You simply allow them to collect whatever metrics are appropriate for their situation within each category and require them to develop the calculation to roll the metrics up accordingly. If they can’t come up with a reasonable calculation then the worst case would be for the Team Lead (or Project Manager in the case of a traditional team) to manually indicate/enter the status value for each category.
For the consistent categories strategy to work, the governance people need to be able to look at the dashboard for a team, which will have a unique collection of widgets on it, and understand what the dashboard indicates. This requires some knowledge and sophistication from your governance people, which isn’t unreasonable to ask for in our opinion. Effective leaders know that metrics only provide insight and that they shouldn’t manage by the numbers alone. Instead they should follow the lean concept of “gemba” and go see what is happening in the team, collaborating with them to help the team understand and overcome any challenges they may face.
In Time Tracking and Agile Software Development we overviewed why teams should consider tracking their time. Primary reasons include:
A secondary reason to track time is that the team wants to measure where they are spending time so as to target potential areas to improve. This is more of a side benefit than anything else – if this were your only reason to track time you’d be better off simply discussing these sorts of issues in your retrospectives. But if you’re already tracking time then running a quick report to provide the team with this intel likely makes sense.
So what are your options for recording time? Potential strategies, which are compared in the following table, include:
Table: Comparing options for tracking time.
This blog posting was motivated by a conversation that I had with Stacey Vetzal on Twitter.
A common question that customers ask us is how do you measure productivity on agile teams. If you can measure productivity you can identify what is and isn’t working for you in a given situation and adjust accordingly. One way to do so is to look at acceleration, which is the change in velocity.
A common metric captured by agile teams is their velocity. Velocity is an agile measure of how much work a team can do during a given iteration. Velocity is typically measured using an arbitrary point system that is unique to a given team or program. For example, my team might estimate that a given work item is two points worth of effort whereas your team might think that it’s seven points of effort; the important thing is that each team is internally consistent. So if there is another work item requiring similar effort, my team should estimate that it’s two points and your team seven points. With a consistent point system in place, a team can reasonably estimate the amount of work that they can do in the current iteration by assuming that they can achieve the same amount of work as last iteration (an XP concept called “yesterday’s weather”). So, if my team delivered 27 points of functionality last iteration we can reasonably assume that, all things being equal, we can do the same this iteration.
It generally isn’t possible to use velocity as a measure of productivity because you can’t compare the velocities of two teams: they’re measuring in different units. For example, suppose we have two teams, A and B, each with 5 people, each working on a web site, and each with two-week iterations. Team A reports a velocity of 17 points for their current iteration and team B a velocity of 51 points. Both teams have 5 people, so team B must be three times (51/17) as productive as team A, right? No! Team A is reporting in its point system and team B in its own, so you can’t compare them directly. The traditional strategy, one that is also suggested in the Scaled Agile Framework (SAFe), would be to ask the teams to use the same unit of points. This might be a viable strategy with a small number of teams, although with five or more teams it would likely require more effort than it’s worth. Regardless of the number of teams, it would minimally require some coordination to normalize the units, and perhaps even some training plus the development and support of velocity-calculation guidelines. That sounds like unnecessary bureaucracy that I would prefer to avoid. Worse yet, so-called “consistent” measurements such as function points (FPs) are anything but consistent because there’s always some sort of fudge factor involved in the calculation process that varies by individual estimator.
An easier solution exists. Instead of comparing velocities you instead calculate the acceleration of each team. Acceleration is the change in velocity over time. The exact formula for acceleration is:
(NewVelocity – InitialVelocity)/InitialVelocity
For example, consider the reported velocities of each team shown in the table below. Team A’s velocity is increasing over time whereas team B’s velocity is trending downwards. All things being equal, you can assume that team A’s productivity is increasing whereas team B’s is decreasing. At the end of iteration 10, if we wanted to calculate the acceleration since the previous iteration (#9), it would be (23-22)/22 = 4.5% for Team A and (40-41)/41 = -2.4% for Team B. So Team A improved their productivity 4.5% during iteration 10 and Team B decreased their productivity 2.4% that iteration. A better way to calculate acceleration is to look at the difference in velocity across multiple iterations, as this helps to smooth out the natural fluctuation in velocity that you see in the table (something scientists might refer to as noise). Let’s calculate acceleration over 5 iterations instead of just one, in this case comparing the velocities from iteration 6 and iteration 10. For Team A the acceleration would be (23-20)/20 = 15% and for Team B (40-45)/45 = -11.1% during the five-iteration period, or 3% and roughly -2.2% respectively on average per iteration.
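To make the arithmetic concrete, here is a minimal Python sketch of the acceleration formula applied to the velocities quoted above (only the iterations mentioned in the text are included):

```python
def acceleration(initial_velocity: float, new_velocity: float) -> float:
    """(NewVelocity - InitialVelocity) / InitialVelocity"""
    return (new_velocity - initial_velocity) / initial_velocity

# Velocities quoted in the example (iterations 6, 9, and 10 only).
team_a = {6: 20, 9: 22, 10: 23}
team_b = {6: 45, 9: 41, 10: 40}

# Single-iteration acceleration: simple but noisy.
print(f"A, iteration 10: {acceleration(team_a[9], team_a[10]):+.1%}")  # +4.5%
print(f"B, iteration 10: {acceleration(team_b[9], team_b[10]):+.1%}")  # -2.4%

# Smoothed over the five-iteration window from iteration 6 to 10.
print(f"A, iterations 6-10: {acceleration(team_a[6], team_a[10]):+.1%}")  # +15.0%
print(f"B, iterations 6-10: {acceleration(team_b[6], team_b[10]):+.1%}")  # -11.1%
```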
Normalizing Acceleration for Team Size
The calculations that we performed above assumed that everything on the two teams remained the same. That assumption is likely a bit naïve. It could very well be that people joined or left either of those teams, something that would clearly impact the team’s velocity and therefore its acceleration. Let’s work through an example. We’ve expanded the first table to include the size of the team each iteration. We’ve also added a column showing the average velocity per person per iteration for each team, calculated by dividing the velocity by the team size for that iteration. Taking the effect of team size into account, the average acceleration across the last five iterations is (1.9-1.8)/1.8/5 = 1.1% for Team A and (5-5)/5/5 = 0% for Team B.
Similarly, perhaps there was a holiday during one iteration. When there are ten working days per iteration and you lose one or more of them due to holidays it can have a substantial impact on velocity as well. As a result you may want to take into account the number of working days each iteration in your calculation. You would effectively calculate average acceleration per person per day in this case. Frankly I’m not too worried about that issue as it would affect everyone within your organization in pretty much the same way, and it’s easy to understand why there was a “blip” in the data for that iteration.
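Here is the same calculation as a sketch, under the assumption that you normalize by dividing velocity by team size (and, if you want to account for holidays, by working days) before computing acceleration; the per-person figures are the ones quoted above:

```python
def avg_acceleration(initial: float, new: float, window: int = 1) -> float:
    """Average per-iteration acceleration over a window of iterations."""
    return (new - initial) / initial / window

# Per-person velocities from the example (velocity divided by team size).
print(f"Team A: {avg_acceleration(1.8, 1.9, window=5):+.1%}")  # +1.1%
print(f"Team B: {avg_acceleration(5.0, 5.0, window=5):+.1%}")  # +0.0%

# To also normalize for holidays, divide each velocity by the number of
# working days in its iteration before calling avg_acceleration().
```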
What Does Acceleration Tell You?
There are three scenarios to consider when using acceleration in practice:
Of course it’s not wise to govern simply by the numbers, so instead of assuming what is going on we would rather go and talk with the people on the two teams. By doing so you might find out that team A has adopted quality-oriented practices such as continuous integration and static code analysis that team B has not, indicating that you might want to help team A share their learnings with other teams.
Translating acceleration into a dollar value is fairly straightforward. For example, assume your acceleration is 0.7%, that there are five people on the team, that your annual burdened cost per person is $150,000 (your senior management staff should be able to tell you what this number is), and that you have two-week iterations. Per iteration, the average burdened cost per person is $150,000/26 = $5,770. The value of the productivity improvement in a single iteration for this team is therefore $5,770 * 5 * 0.007 = $202. If the acceleration stayed constant at 0.7%, the overall productivity improvement for the year would be (1.007)^26 = 1.198, or 19.8% (assuming the team works all 52 weeks of the year). This would be a savings of $148,500 (pretty much the equivalent of one new person).
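Here is that arithmetic as a small sketch, using the figures from the example above:

```python
burdened_cost_per_year = 150_000     # per person, from the example
team_size = 5
iterations_per_year = 26             # two-week iterations, all 52 weeks
accel = 0.007                        # 0.7% acceleration per iteration

cost_per_person_per_iteration = burdened_cost_per_year / iterations_per_year
print(f"${cost_per_person_per_iteration:,.0f}")                      # $5,769

# Value of the improvement in a single iteration.
print(f"${cost_per_person_per_iteration * team_size * accel:,.0f}")  # $202

# Compounding 0.7% across 26 iterations.
annual_improvement = (1 + accel) ** iterations_per_year - 1
print(f"{annual_improvement:.1%}")   # 19.9% (the text rounds to 19.8%)
print(f"${burdened_cost_per_year * team_size * annual_improvement:,.0f}")
# ~$149,000; the text's $148,500 reflects the 19.8% rounding
```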
Another approach is to calculate the acceleration for the year by comparing the velocity at the beginning of the year to that at the end of the year (note that you want to avoid comparing iterations near any major holidays). So, if the team’s velocity the first week of February 2015 was 20 points and the same team’s velocity the first week of February 2016 was 23 points, that’s an acceleration of (23-20)/20 = 15% over a one-year period, for a savings of $112,500.
Advantages of the Acceleration Metric
There are several advantages to using acceleration as an indicator of productivity over traditional techniques such as FP counting:
Potential Disadvantages of Acceleration
Of course, nothing is perfect, and there are a few potential disadvantages:
We hope that you’ve found this blog post valuable.
A common question that we get all the time is how do you take a disciplined agile approach to metrics. This is a fairly straightforward question, but it has a potentially complex answer. At a very high level the answer is to keep your metrics strategy as lightweight and focused as possible. One way to do this is to adopt an agile version of the Goal Question Metric (GQM) strategy. The fundamental idea behind GQM is that you first identify a goal that you would like to achieve, then a set of questions whose answers are pertinent to determining how well you’re achieving that goal, and then the metric(s) that could help you to answer each question.
Consider the goal of improving time to market (reducing overall cycle time, in lean parlance). The following table summarizes potential questions we could ask, and their supporting metrics, to help us determine how well we’re addressing that goal.
Of course, this would only be one of several goals that you likely have for a given team. I would hope that you have goals around improving quality, stakeholder satisfaction, and staff morale to say the least.
It’s important to note that a single metric can help to answer multiple questions. For example, a ranged release burndown/up chart can potentially help to answer two of the questions in the table above.
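One lightweight way to capture a GQM model is as a simple goal → questions → metrics structure. The sketch below illustrates the idea for the time-to-market goal; because the table itself isn't reproduced here, the questions and metric mappings are hypothetical examples rather than the actual table contents.

```python
# Hypothetical GQM model for the time-to-market goal. The questions and
# metric mappings are illustrative examples, not the article's table.
gqm = {
    "goal": "Improve time to market (reduce overall cycle time)",
    "questions": {
        "How long does it take us to deliver a release?": [
            "release cycle time",
            "ranged release burndown/up chart",
        ],
        "Are we likely to hit the projected release date?": [
            "ranged release burndown/up chart",
        ],
        "Where does work sit idle?": [
            "queue/wait time per work state",
        ],
    },
}

# Note how a single metric (the ranged burndown/up chart) helps to
# answer two different questions.
for question, metrics in gqm["questions"].items():
    print(f"{question} -> {', '.join(metrics)}")
```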
Keeping GQM Agile
Unfortunately GQM has gotten a bit of a bad name for itself in the past, often because organizations took a far too heavyweight approach to applying it. The technique can in fact be applied in an agile manner if you so choose. There are several things that you need to do to keep GQM agile:
Adopting an agile approach to GQM, or something similar, is only one aspect of your overall agile measurement program, and measurement is an important part of your overall governance strategy. More on both of these important topics in future blog postings.
One of the key aspects of a disciplined agile approach is to be enterprise aware. The fundamental observation is that your team is only one of many in your organization, except in the case of very small organizations, and should act accordingly. This means that what your team does should reflect your organization’s overall business and technical roadmaps, that you should strive to leverage as much of the existing infrastructure as possible, that you should try to enhance the existing infrastructure, and that you should work with other teams appropriately so that your overall efforts are complementary to one another. This is a straightforward idea conceptually, but in practice acting in an enterprise aware manner can prove more difficult than one would initially think.
Over the years we’ve been asked by several customer organizations to help them understand how to account for the expense of agile software development. In particular, incremental delivery of solutions into production or the marketplace seems to be causing confusion for the financial people within these organizations. The details of accounting rules vary between countries, but the fundamentals are common. For public companies capital expenses (CapEx) are preferable because they can boost book value through the increase in assets (in this case a software-based solution) and increase in net income (due to lower operating expenses that year). On the other hand, operational expenses (OpEx) are accounted for in the year that they occur and thereby reduce net income, which in turn reduces your organization’s taxes for that year. Furthermore, in some countries you can even get tax credits for forms of software development that are research and development (R&D) in nature. To properly account for the expenses incurred by software development teams, and potentially to earn R&D tax credits, you need to keep track of the amount of work performed and the type of work performed to develop a given solution. Time tracking doesn’t have to be complex: at one customer, developers spend less than five minutes a week capturing such information. The point is that the way that a software developer’s work is accounted for can have a non-trivial impact upon your organization’s financial position. This in turn implies that the need for agile developers to track their time is a fairly simple example of acting in an enterprise aware manner.
So I thought I’d run a simple test. On LinkedIn’s Agile and Lean Software Development group I ran a poll to see what people thought about time tracking. It provided five options to choose from:
The poll results reveal that we have a long way to go when it comes to working in an enterprise aware manner. Of the people inputting their time, more believed it was a waste of time than understood it to be a valuable activity. When you stop and think about it, the investment of five minutes a week to track your time could potentially save or even earn your organization many hundreds of dollars. Looking at it from a dollar-per-minute point of view, it could be the highest-value activity many developers perform that week.
The discussion that ensued regarding the poll was truly interesting. Although there were several positive postings, and several neutral ones, many more were negative when it came to time tracking. Some comments that stood out for me included:
So what can we make of this? First, it’s clear that delivery teams need a better understanding of the bigger picture, including mundane things such as the tax implications of what they’re doing. Second, it’s also clear that management needs to communicate more effectively regarding why they’re asking people to track their time. To be fair, management might not be aware of the tax implications themselves and so may not be making effective use of the time data they’re asking for. Third, management needs to govern more effectively. Several people were clearly concerned about how management was going to use the time data (which is, by definition, a measure), which could be a symptom of both poor communication and poor governance (unfortunately many developers have had measures used against them, a failure of governance, and as a result no longer trust their management teams to do the right thing). Fourth, some of the team-focused agile practices, such as burndown charts (or better yet ranged burndown charts) and coordination meetings, may be preventing people from becoming enterprise aware because they believe that all of their management needs are being met by these practices. Finally, many organizations are potentially leaving money on the table by not being aware of the implications of how they expense software development.
Disciplined agilists are enterprise aware. This is important for two reasons: first, you want to optimize the organizational whole instead of sub-optimizing around project-related efforts; second, without enterprise awareness you can completely miss opportunities to add real value for your organization. In the anecdote I provided it was clear that some agile developers believe that an activity such as time tracking is a waste, when that clearly doesn’t have to be the case. Worse yet, although someone brought up the issues around capitalizing software development expenses early in the conversation, a group of very smart and very experienced people still missed this easy opportunity to see how they could add value to their organization. It makes me wonder if some of the agile rhetoric is getting in the way of our being more effective as professionals (and, BTW, there are lightweight options for tracking time available to you).