This episode features Troy Lightfoot, a Business Agility Coach and Consultant as well as a Professional Kanban Trainer. The interview starts with a discussion of the basic differences between Scrum and Kanban and then digs into four of the metrics recommended in the Kanban Guide: WIP, Throughput, Work Item Age, and Cycle Time. We talk through what each of these metrics is, the value it provides, and why these measures are so much more valuable than simply looking at something like velocity. We also discuss how these metrics can help you better predict when work is likely to finish, and how they can help you and your team identify and address the things that are holding you back from delivering value for your client.
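To make the four metrics above concrete, here is a minimal sketch of how each one could be computed from work item start/finish dates. The item data and field names are invented for illustration, and the "count the start day as day 1" convention is one common reading of elapsed time; check the Kanban Guide and the episode for the full definitions.

```python
from datetime import date

# Hypothetical work items with made-up start/finish dates.
items = [
    {"id": "A", "started": date(2023, 3, 1), "finished": date(2023, 3, 6)},
    {"id": "B", "started": date(2023, 3, 2), "finished": date(2023, 3, 9)},
    {"id": "C", "started": date(2023, 3, 4), "finished": None},  # still in progress
]

today = date(2023, 3, 10)

# WIP: items that have started but not finished
wip = sum(1 for i in items if i["finished"] is None)

# Throughput: items finished in the measurement window (here, the whole list)
throughput = sum(1 for i in items if i["finished"] is not None)

# Cycle Time: elapsed days from start to finish, counting the start day as day 1
cycle_times = [(i["finished"] - i["started"]).days + 1
               for i in items if i["finished"] is not None]

# Work Item Age: elapsed time so far for items still in progress
ages = [(today - i["started"]).days + 1
        for i in items if i["finished"] is None]

print(wip, throughput, cycle_times, ages)  # 1 2 [6, 8] [7]
```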
Troy also has a few ProKanban Certification classes coming up. In the back half of the interview, he explains what to expect if you sign up for a Professional Kanban 1 (PK1) Certification class or his Applying Metrics for Predictability (AMP) Certification class.
Troy’s Upcoming Classes
Links from the Podcast
What do you do when they start asking for cost per point?
This issue often arrives wrapped in requests that are pure in intent and seem perfectly reasonable coming from the business…
How much are we spending each month and how many points are we delivering for that spend?
Since we are now estimating work in User Story Points, we need to be able to determine how much to charge for the work that clients are asking for. So how much does a point cost us?
We need to evaluate the change requests so we can decide which ones to move forward with and which ones to reject. We’re estimating them in User Story Points, which gives us a relative idea of risk, complexity, and effort, but not cost. We need to be able to translate points to dollars so we can understand if the value we’d receive from the change is worth the cost.
I had a student recently who was getting requests like this from the business, so I asked Agile Coach Troy Lightfoot to join me for a podcast where we could unpack the issues that often come with the cost-per-point question, the pros and cons of tracking it, and some things to take into account when you formulate your response to the request.
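One reason the cost-per-point question is trickier than it looks: even the naive calculation (spend divided by points completed) tends to swing wildly from month to month. The figures below are invented purely to illustrate that instability; listen to the episode before deciding whether to report a number like this at all.

```python
# A naive cost-per-point calculation, with made-up numbers.
monthly_spend = 100_000  # fully loaded team cost per month (assumed)

# Completed story points over four hypothetical months
points_per_month = [42, 23, 61, 35]

costs = [monthly_spend / p for p in points_per_month]
print([round(c) for c in costs])  # the "cost of a point" varies by more than 2x
```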
Links from the Podcast
Trying to figure out when you will be ready to ship is incredibly challenging. Many Scrum teams track historic velocity (story points completed per Sprint) and then use the average number of points completed per Sprint to make an educated guess about when they can expect to deliver a certain number of story points in the future. Many, however, feel that this approach is no better than making a completely random guess, and there is evidence to support the value of taking a different approach.
In this episode of The Reluctant Agilist, Troy Lightfoot explains his approach to Probabilistic Forecasting, what it is, why it matters, and how it is a better way of planning than using a more traditional approach.
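As a rough illustration of the probabilistic idea discussed in the episode, here is a minimal Monte Carlo throughput simulation. This is not Troy's tooling, and all the numbers are invented; the point is simply that replaying historical throughput many times yields a range of outcomes with percentiles, rather than a single average-based date.

```python
import random

# Hypothetical weekly throughput samples (items finished per week),
# as might be drawn from a team's history. Numbers are illustrative.
history = [3, 5, 2, 6, 4, 3, 5, 4]

BACKLOG = 40       # items remaining to deliver (assumed)
TRIALS = 10_000    # number of simulated futures

def weeks_to_finish(backlog, samples):
    """Randomly replay past throughput until the backlog is empty."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(samples)
        weeks += 1
    return weeks

results = sorted(weeks_to_finish(BACKLOG, history) for _ in range(TRIALS))

# Read off percentiles: "85% of simulated futures finished by week N."
p50 = results[int(TRIALS * 0.50)]
p85 = results[int(TRIALS * 0.85)]
print(f"50th percentile: {p50} weeks, 85th percentile: {p85} weeks")
```

Instead of a single date, you get a statement like "there is an 85% chance we finish within N weeks," which is the kind of forecast Troy argues is far more honest than a velocity average.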
Books Recommended In the Podcast
Tools mentioned in the podcast
Troy Lightfoot joins Dave Prior to respond to a recurring student question: “How can I track the performance of a ScrumMaster using metrics which are different from the ones I use to track the performance of the team?”
Using the LeanAgile Intelligence tool he co-authored, Troy walks Dave through a few options that can be used to collect data that could provide clarity on performance of an individual ScrumMaster.
For more information on LeanAgile Intelligence: https://www.leanagileintelligence.com/
You can follow Troy Lightfoot on Twitter at https://twitter.com/g4stroy
You can follow Dave Prior on Twitter at https://twitter.com/mrsungo
Summary: You don’t have to be a developer to use Test Driven Development and Mob Programming. Last week on Twitch, Amitai Schlier & Troy Lightfoot led Dave Prior and Rachel Gertz (neither of whom can program) through an exercise in remote pairing with TDD.
If you come from a PM background, you’ve probably heard developers talk about Test Driven Development and you may even get the basic idea behind it - build the test to prove something works, then build the thing that passes the test.
You may also have heard about Mob Programming - the set of practices put together by Woody Zuill that takes the idea of pairing and extends it to the whole team. In mobbing, an entire team builds everything together. They share one keyboard and rotate the person typing at timed intervals. This allows them to develop cross-functionality, to learn from each other and, basically, QA as they go.
These are both topics I’ve been interested in for a while, but I’ve never had an opportunity arise that gave me a chance to actually try them.
But last week I had the opportunity to participate in a unique experiment that not only let me learn more about each of these sets of practices, but gave me a chance to try them firsthand.
The entire experience was a blast, and I’ve developed a newfound appreciation for the thought process and discipline that go into using Test Driven Development and trying to mob with a team.
I’d encourage you to check out the video on your own or with your team, and maybe even try to replicate the experiment. I think this would also work great as a team-building exercise. Most of the time I felt like I was playing a board game with a bunch of friends.