We are often asked why Disciplined Agile (DA) has a team lead role instead of a Scrum master or project manager. The answer is three-fold: different types of teams require different types of leaders, leadership responsibilities vary with the type of team being led, and DA strives to be agnostic wherever possible. Let's explore the implications of this strategy.
Team lead is what is known as a meta role. What we mean by this is that there are different types of team lead depending on the situation, as depicted in Figure 1. Think of team lead as a placeholder for a more specific type of lead. So, a Scrum master is the team lead of a Scrum team, a project leader is the team lead of a project team, and a sales manager is the team lead of a sales team. At times, the team will choose to stick with the name “team lead” for the role because it best fits their way of working.
Figure 1. Types of team leads.
As we said above, there are three reasons for taking this approach with the team lead role:
The end result is that you may see some DA teams with a senior Scrum master as the team lead, some DA teams with a project leader as the team lead, some DA teams with a functional leader in the role of team lead, and some teams with someone who is simply the team lead. Just like your way of working (WoW) should be fit for purpose, so should your approach to roles and responsibilities.
A common question that we get from customers who are new to Disciplined Agile (DA) is how do you roll up metrics from solution delivery teams into a portfolio dashboard? A more interesting question is how do you do this when the teams are working in different ways? Remember, DA teams choose their way of working (WoW) because Context Counts and Choice is Good. Even more interesting is the question “How do you roll up team metrics when you still have some traditional teams as well as some agile/lean teams?” In this blog we answer these questions one at a time, in order.
Note: We’re going to talk in terms of a single portfolio in this article, but the strategies we describe can apply at the program (a large team of teams) too, and then the program-level metrics are further rolled up to higher levels.
How Do You Roll Up Agile Team Metrics Into a Portfolio Dashboard?
Pretty much the same way you roll up metrics from traditional teams. There tend to be several potential challenges to doing this, challenges which non-agile teams also face:
How Do You Roll Up Agile Team Metrics Into a Portfolio Dashboard When the Teams Choose Their WoW, and it’s Different For Each Team?
When a team is allowed to choose its way of working (WoW), or “own their own process,” the team will often choose to measure itself in a manner that is appropriate to its WoW. This makes a lot of sense: to improve your WoW you will want to experiment with techniques, measure their effectiveness for your team within your current context, and then adopt the techniques that work best for you. So teams will need to have metrics in place that provide them with insight into how well they are working, and because each team is unique the set of metrics they collect will vary by team. For example, in Figure 1 below we see that the Data Warehouse (DW) team has decided to collect a different set of metrics to measure stakeholder satisfaction than the Mobile Development team. The DW team needs to determine which reports are being run by their end users, and more importantly they need to identify new reports that provide valuable information to end users – this is why they have measures for Reports run (to measure usage) and NPS (to measure satisfaction). The Mobile team, on the other hand, needs to attract and retain users, so they measure things like session length and time in app to determine usage, and user retention and NPS to measure satisfaction.
Figure 1. Applying consistent metrics categories across disparate teams (click on it for a larger version).
Furthermore, the nature of the problem that a team faces will also motivate them to choose metrics that are appropriate for them. In Figure 1 we see that each team has a different set of quality metrics: the DW team measures data quality, the mobile team measures code quality, and the package implementation team measures user acceptance test (UAT) results. Although production incidents and automated test coverage are measured by all three teams, the remaining metrics are unique.
The point is that instead of enforcing consistent metrics across teams by insisting that each team collect the same set of metrics, it is better to ask for consistent metric categories across teams. So instead of saying “Thou shalt collect metrics X, Y, and Z” we instead say “Thou shalt collect metrics that explore Category A, Category B, and Category C.” So, as you can see in Figure 1, each team is asked to collect quality metrics, time to market metrics, and stakeholder satisfaction metrics, but it is left up to them which metrics they will choose to collect. The important point is that they need to collect sufficient metrics in each category to provide insight into how well the team addresses it. This enables the teams to be flexible in their approach and collect metrics that are meaningful for them, while providing the governance people within your organization the information that they need to guide the teams effectively.
So how do you roll up the metrics when they’re not consistent across teams? Each team is responsible for taking the metrics that they collect in each category and calculating a score for that category. It is likely that a team will need to work with the governance body to develop this calculation. For example, in Figure 2 we see that each team has a unique dashboard for their team metrics, yet at the portfolio level the metrics are rolled up into a stoplight status scorecard for each category (Green = Good, Yellow = Questionable, Red = Problem). Calculating a stoplight value is one approach; you could get more sophisticated and calculate a numerical score if you like. This is something the governance body would need to decide upon and then work with teams to implement.
Figure 2. Rolling up metrics categories (click on it for a larger version).
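This category rollup can be sketched in a few lines of code. The following is a minimal illustration, not part of DA itself: it assumes each team first normalizes its own metrics to 0.0–1.0 scores, and the threshold values and metric names are invented for the example.

```python
# Sketch: rolling a team's category metrics up into a stoplight value.
# Thresholds (0.6, 0.8) and metric names are illustrative assumptions;
# in practice the governance body and each team would agree on these.

def category_status(scores, yellow=0.6, green=0.8):
    """Average a team's normalized metric scores (0.0-1.0) into a stoplight."""
    if not scores:
        return "Red"  # no data in a required category is itself a problem
    avg = sum(scores.values()) / len(scores)
    if avg >= green:
        return "Green"
    if avg >= yellow:
        return "Yellow"
    return "Red"

# Different teams collect different metrics, but because each team
# normalizes its own scores, the statuses remain comparable.
dw_quality = {"data_quality": 0.9, "production_incidents": 0.85, "test_coverage": 0.7}
mobile_quality = {"code_quality": 0.55, "production_incidents": 0.6, "test_coverage": 0.65}

print(category_status(dw_quality))      # Green
print(category_status(mobile_quality))  # Yellow
```

The point of the sketch is that the rollup rule is consistent even though the inputs differ per team; a numerical score in place of the stoplight would simply return `avg` instead of a colour.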
The portfolio dashboard in Figure 2 also includes a heat map indicating the overall status of each team (using green, yellow, and red again) and the size of the effort (indicated by the size of the circle). Anyone looking at the portfolio dashboard should be able to click on one of the circles or team stoplights and be taken to the dashboard for that specific team. The status value for the heat map would be calculated consistently for each team based on the category statuses for that team – this is a calculation that the governance body would need to develop and then implement. The size of the effort would likely come from a financial reporting system or perhaps your people management systems.
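One simple, consistent rule the governance body could adopt for the heat-map rollup is "worst category wins." This is only one possible choice, sketched here for illustration:

```python
# Sketch: combining a team's category statuses into one heat-map status.
# The worst-case rule shown here is an assumption, not the only option;
# a governance body might instead define a weighted numerical score.

ORDER = {"Green": 0, "Yellow": 1, "Red": 2}

def team_status(category_statuses):
    """Worst-case rule: the team's overall status is its worst category."""
    return max(category_statuses, key=ORDER.__getitem__)

# e.g. quality, time to market, and stakeholder satisfaction statuses
dw_team = ["Green", "Green", "Yellow"]
print(team_status(dw_team))  # Yellow
```

A worst-case rule is deliberately conservative: a single Red category flags the whole team, which suits governance people who want problems surfaced early rather than averaged away.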
How Do You Roll Up Team Metrics When Some Teams Are Still Traditional?
With a consistent categories approach it doesn’t really matter what paradigm the team is following. You simply allow them to collect whatever metrics are appropriate for their situation within each category and require them to develop the calculation to roll the metrics up accordingly. If they can’t come up with a reasonable calculation then the worst case would be for the Team Lead (or Project Manager in the case of a traditional team) to manually indicate/enter the status value for each category.
For the consistent categories strategy to work the governance people need to be able to look at the dashboard for a team, which will have a unique collection of widgets on it, and be able to understand what the dashboard indicates. This will require some knowledge and sophistication from our governance people, which isn’t unreasonable to ask for in our opinion. Effective leaders know that metrics only provide insight but that they shouldn’t manage by the numbers. Instead they should follow the lean concept of “gemba” and go see what is happening in the team, collaborating with them to help the team understand and overcome any challenges they may face.
In Time Tracking and Agile Software Development we overviewed why teams should consider tracking their time. Primary reasons include:
A secondary reason to track time is that the team wants to measure where they are spending time so as to target potential areas to improve. This is more of a side benefit than anything else – if this were your only reason to track time you’d be better off simply discussing these sorts of issues in your retrospectives. But if you are already tracking time, then running a quick report to provide the team with intel likely makes sense for you.
So what are your options for recording time? Potential strategies, which are compared in the following table, include:
Table: Comparing options for tracking time.
This blog posting was motivated by a conversation that I had with Stacey Vetzal on Twitter.
One of the interesting trends that we’re seeing within organizations taking a disciplined agile approach to solution delivery is the preference for stable teams. Stable teams, also called stable product teams or simply long-term teams, are exactly as they sound – they remain (reasonably) stable over time, lasting longer than the life of a single project. This blog explores the differences between project teams and stable teams and then overviews the advantages and disadvantages of the stable team approach. We also explore the issue of how stable teams evolve over time.
As you can see in the following diagram, with a project team approach we say that we bring the team to the work. What we mean by that is that we first identify the need to run a project, perhaps to build the new release of an existing solution or to build the initial release of a new solution, and then we build a team to do the work. Once the work is done, in this case the solution is successfully released into production, the team is disbanded and its team members move on to other things.
The stable team approach is a bit different. In this case we first build an effective team then we continuously bring valuable work to the team to accomplish. In this situation the work never really ends, but instead we replenish the team’s work item list (or work item pool depending on the lifecycle being followed) regularly. The team stays together and continues to produce value for your organization over time.
Of course the term “stable team” is a bit of a misnomer as they do evolve over time. For example, many people like to stay on a team for a couple of years and then move on to another team to gain new skills and perspectives. This is good for them and good for your organization as it helps to keep your teams fresh. Sometimes you will want to grow or shrink a team. Sometimes you will discover that two people aren’t working well together and you need to split them up. The point is that there are very good reasons for your stable teams to evolve over time.
We wouldn’t be disciplined if we didn’t explore the trade-offs involved with stable teams.
The Advantages of Stable Teams
There are several advantages to stable teams:
The Disadvantages of Stable Teams
There are several disadvantages to stable teams:
Evolving Stable Teams Over Time
Stable doesn’t mean stagnant. Of course you still have basic people management issues such as people wanting to expand their skill set by working on something new by rotating to another team, people leaving the organization, and new people joining the organization. So the team itself may go on for many years even though the membership of the team evolves over time. Ideally these membership changes are not too disruptive: It’s not too bad adding a new person every month or so, or losing people at a similar rate, but gaining or losing several people in a short period of time can be painful.
Start experimenting with stable teams if you’re not already doing so. For most organizations the advantages clearly outweigh the disadvantages. In fact, you can see this in the Longevity decision point of the Form Initial Team goal diagram below.
The basic idea with rolling wave planning is that you plan things that are near in time to you in detail and things that are distant in time at a higher level. The thinking is that the further away in time something is, the greater the chance that it will change during that time, so any investment in thinking through the details now is likely to be wasted. You still want to plan at a high level, both to guide your current decisions and to set people’s expectations as to what is likely to come.
Rolling wave planning is implemented in several places of the DA toolkit. First, as you can see in Figure 1 below, it is an option of the Level of Detail decision point of the Develop Initial Release Plan process goal. A rolling wave approach to release planning has the advantages of more accurate and flexible planning, although it can be a bit disconcerting to traditional managers who are used to annual planning strategies.
Figure 1. The Develop Initial Release Plan goal diagram.
The Portfolio Management process blade supports rolling wave budgeting as an option for its Manage the Budget decision point. This is depicted in Figure 2. The advantages are greater flexibility and greater likelihood of investing your IT funding more effectively, albeit at the loss of the false predictability provided by an annual budgeting strategy.
Figure 2. The goal diagram for the Portfolio Management process blade.
The Program Management process blade supports rolling wave planning of a program itself, as you can see in Figure 3. Planning and coordination are critical on a large program, and rolling wave planning offers the advantages of greater flexibility, the ability to think important cross-team issues through, and the ability to react to changing stakeholder needs. The primary disadvantage is that it can be disconcerting for traditionalists who are used to thinking everything through from the beginning.
Figure 3. The goal diagram for the Program Management process blade.
As you can see in Figure 4, rolling wave strategies can be applied in Product Management to evolve the business vision/roadmap. A continuous, rolling wave approach is critical to your success because the marketplace changes so quickly – these days, few organizations can tolerate an annual approach to business planning, and in the case of companies with external customers an ad-hoc approach can prove to be too unpredictable for them.
Figure 4. The goal diagram for the Product Management process blade.
Previously we saw that rolling wave strategies can be applied to evolve your technology roadmap, as indicated in the goal diagram for Enterprise Architecture in Figure 5. The advantage of this approach is that your roadmap evolves in sync with both changes in technology and your organization’s rate of experimentation and learning. The main disadvantage is that your technology roadmap is effectively a moving target.
Figure 5. The goal diagram for the Enterprise Architecture process blade.
As you can see, rolling wave strategies are an integral part of the Disciplined Agile (DA) toolkit. In fact, in most situations they prove to be the most effective and flexible strategies available to you. The advantages of rolling wave planning tend to greatly outweigh the disadvantages. More on this next time.