# Project Management Central

How can we relate complexity estimates to the time work actually takes?
Dear PM community,

I'm looking for a shared perspective on the following:

In a Scrum framework, where estimates use the Fibonacci sequence to measure the complexity of each task/issue, how can we develop a sense of how complexity compares with the time work actually takes?

It is easy to find tasks estimated at 3 (low complexity) that take two months to fully complete, as well as others estimated at 13 that are somehow finished in one or two sprints.
Is there any way to establish a time tracker / time comparison that helps a team develop a stronger sense of commitment to delivery and a better notion of how to estimate properly?

Also, I'm now learning more about software development metrics for Kanban and Scrum. Would you mind sharing your experiences?

I'm looking forward to reading from you.
Reply by Jesus Martheyn Berbesi, Dec 26, 2018 12:19 PM:
Hi Ana,

Well, the first thing you must do as Scrum Master is:

1) Define with the team the two ends of the estimation scale: a reference for the lowest and for the highest value.

For example: What's the biggest animal in the world? A whale. And what's the smallest? An ant. So, in your planning poker session, 1 is the ant and 40 is the whale; now, when you estimate how big a horse is, your team has points of comparison.

2) When the team reveals its cards, the members with the lowest and highest values must explain why they estimated that way, and then the whole team votes again.

I recommend reading about planning poker and wideband Delphi, and then choosing whichever method you think will work better for your team.
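The two rules above could be sketched in code; a hypothetical illustration, with invented voter names and values:

```python
# Hypothetical sketch of the rules above: after each round, the holders of the
# lowest and highest cards explain their reasoning, then the whole team re-votes.
# Voter names and values are invented for illustration.

FIBONACCI_DECK = [1, 2, 3, 5, 8, 13, 21, 40]

def outliers(votes: dict[str, int]) -> tuple[list[str], list[str]]:
    """Return the voters holding the lowest and the highest cards."""
    lo, hi = min(votes.values()), max(votes.values())
    return ([n for n, v in votes.items() if v == lo],
            [n for n, v in votes.items() if v == hi])

def consensus(votes: dict[str, int]) -> bool:
    """A round converges when everyone shows the same card."""
    return len(set(votes.values())) == 1

round_1 = {"dev_a": 3, "dev_b": 13, "dev_c": 5, "dev_d": 5}
low, high = outliers(round_1)   # dev_a (3) and dev_b (13) explain, then re-vote
```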
Reply by Ana Cláudia Santos, Dec 27, 2018 5:19 AM:
Thank you so much.

In fact, we already use that method.
The difficulty, at this stage, is that what actually happens in the sprints does not seem to correspond to the estimate given to each story.
Reply by John Farlik, Dec 26, 2018 2:59 PM:
Hi Ana,

Perhaps the estimates can be improved over time. If you are working in two-week sprints, ask the team to keep an accurate record of the actual time spent on the 1-, 2-, and 3-point Fibonacci stories. Over time, you'll get a really good idea of planned vs. actual on the "ants" that Jesus Martheyn Berbesi describes above. Then, as you gain confidence in these ants, move up to the next order of complexity (5, 8). After a number of sprints, the planned vs. actual data will help you refine the estimates there, and so on.
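A minimal sketch of that planned-vs-actual log (the sample numbers are invented for illustration):

```python
# Record the actual time spent on each completed story, then average by point
# value. Start with the small "ants" (1-3 points); extend upward once stable.
from collections import defaultdict
from statistics import mean

# (story points, actual hours) for completed stories -- invented sample data
completed = [(1, 3.0), (1, 4.5), (2, 7.0), (3, 10.0), (3, 14.0), (3, 9.0)]

def avg_hours_by_points(log):
    buckets = defaultdict(list)
    for points, hours in log:
        buckets[points].append(hours)
    return {pts: mean(hrs) for pts, hrs in sorted(buckets.items())}

avg_hours_by_points(completed)   # {1: 3.75, 2: 7.0, 3: 11.0}
```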

I think the key here regarding this type of estimation is in the expectation management with the project team, customer and other stakeholders. Try to explain that you're in a mode of discovery and that these estimates will be refined over time. I would be very intentional about sharing your results with EVERYONE. That way you'll be operating with integrity, and people will stay engaged with the effort. I hope that perspective helps.
Reply by Ana Cláudia Santos, Dec 27, 2018 5:20 AM:
Precisely! Thanks for your kind insight
Reply by Sergio Luis Conte, Dec 26, 2018 3:23 PM:
Ana Claudia, the first thing to do is to understand that you are not estimating tasks; you are estimating features. Obviously you can manage this another way, but if you will allow me, I must say that this is critical to being successful. Second, if you are trying to reduce the uncertainty in your estimates, then let me say that using story points is the worst thing to do, mainly in the first three iterations. And in the end, no matter which method you use to estimate, always take Barry Boehm's "Cone of Uncertainty" into account. I am writing this because I am in charge of this type of thing in my current workplace.
Reply by John Farlik, Dec 26, 2018 3:28 PM:
Sergio,

Good tip on the "Cone of Uncertainty". I'll look that up. Also, it is great that you have experience with this topic. I'd like to learn more--why is using story points the worst thing to do?

jtf
Reply by Sergio Luis Conte, Dec 26, 2018 3:36 PM:
Thank you very much for your question. I'd like to note that we do use story points in my current workplace. What I tried to say (English is not my first language) is this: story points are the "worst" method in the sense of the high amount of uncertainty they carry due to subjectivity. Estimation is defined as the best prediction based on the current information and the time available to estimate, so uncertainty is an inherent component of any estimate. Uncertainty is created mainly by lack of information. The Cone of Uncertainty demonstrates that no matter which estimation method you use, the uncertainty is there, and you have to take it into account when you get a number, because the uncertainty defines the level of accuracy. In fact, the Cone is used in fields other than software. Story point results can be adjusted using a combination of Use Case Points/Function Points; you can find papers about that.
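For a rough sense of what the Cone implies numerically, here is a sketch using the range multipliers commonly quoted for the Cone of Uncertainty; the phase labels and factors vary by source, so treat them as illustrative rather than authoritative:

```python
# Classic Cone of Uncertainty multipliers (as popularized by Boehm/McConnell);
# the phase names and factors below are approximate, not authoritative.
CONE = {
    "initial concept":       (0.25, 4.0),
    "approved definition":   (0.50, 2.0),
    "requirements complete": (0.67, 1.5),
    "design complete":       (0.80, 1.25),
    "detailed design":       (0.90, 1.10),
}

def estimate_range(nominal_days: float, phase: str) -> tuple[float, float]:
    """Plausible low/high bounds around a single-point estimate."""
    lo_mult, hi_mult = CONE[phase]
    return (round(nominal_days * lo_mult, 2), round(nominal_days * hi_mult, 2))

estimate_range(20, "initial concept")   # (5.0, 80.0): a "20 day" feature early on
estimate_range(20, "detailed design")   # (18.0, 22.0): the cone has narrowed
```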
Reply by Ana Cláudia Santos, Dec 27, 2018 5:28 AM:
Dear Sérgio,

I truly appreciate your comment, and I have just added the Cone of Uncertainty to my reading list to explore as soon as possible.
I must agree that story points are "the worst method from the point of view of the high amount of uncertainty due to subjectivity."
I'm new to this team, and this is the current setup/framework. We hold proper refinement meetings, yet I feel the estimates are not being done as well as the uncertainty allows, since we have no measure against which to compare the story points.

In my previous company, I worked remotely with 10 developers in India. Because of constant hiring, we also used our refinement sessions to estimate with story points. After a few weeks/months, we collectively realized that something was not right: we were estimating based on complexity rather than on the time a feature would take to develop.
Is it possible to somehow connect both, complexity + time?

With this previous team, we defined that:
- 3 pts: low-complexity feature, no dependencies on other disciplines, easily managed during the sprint (the ant)
- 5 pts: medium-complexity feature, no dependencies on other disciplines, easily managed during the sprint
- 8 pts: medium-complexity feature, with dependencies on other disciplines, managed during the sprint
- 13 pts: high-complexity feature, with dependencies on other disciplines, managed with difficulty during the sprint (the whale)
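The mapping above can be restated as a small lookup table; this is just a sketch of the rubric as written, using the labels from the list:

```python
# Ana's rubric as a lookup:
# (complexity band, cross-discipline dependencies) -> story points
POINTS = {
    ("low",    False): 3,   # the ant
    ("medium", False): 5,
    ("medium", True):  8,
    ("high",   True):  13,  # the whale
}

def story_points(complexity: str, has_dependencies: bool) -> int:
    return POINTS[(complexity, has_dependencies)]

story_points("medium", True)   # 8
```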

Does this make sense to you?

Reply by Keith Novak, Dec 26, 2018 4:31 PM:
I'm interested in how you define "complexity" in this situation. Is it the number of stakeholders, interfaces, or some other measure of the interacting pieces, or is it more subjective?
Reply by Ana Cláudia Santos, Dec 27, 2018 5:34 AM:
Dear Keith,

As I said, I'm new to this team.
Here, complexity is understood as the effort, in time and resources, needed to deliver a given feature during the sprint :) I'm open to feedback in order to make this work as well as possible.

Reply by Sergio Luis Conte, Dec 27, 2018 10:01 AM:
Is it possible? Yes. Is it advisable if you want to be successful? No. How can I understand the time it will take? Through the distribution of story points across sprints. That is the challenge, and one of the biggest shifts of mind when you use Agile with story points. It is also where people with real field experience take advantage. For example, because of misunderstandings about estimation in general, and story point estimation in particular, there are movements like #NoEstimates (I have interacted a lot with them from the beginning, and I have demonstrated to them: change the name, because what you are doing is estimating...).
About your points, let me say: there is a lot of ambiguity in the criteria, so in my personal opinion you will still not get a low rate of uncertainty (uncertainty is risk as a component of estimates).
So, what to do?
Agile is based on knowledge. That has been inside the definition of Agile since Agile was born (before software).
How do you get knowledge? Some people who work with Agile say "by trial and error, learning from the errors". That is not right, but it can be one way.
Story points are one of the areas (perhaps the only one) where points can only be assigned to features from knowledge, and that is why you need at least three sprints to get there. This is one of the points where "top management support" means they accept the situation, and one of the things to take into account when you decide to use Agile, because the company culture (and the whole architecture) has to be considered.
Take this into account: "a story is a placeholder for a conversation". Think about all the implications when you have a user story in hand and you need to estimate how long it takes to create something from it.
I hope I have helped you. If not, please continue with your comments. It helps me a lot.
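The "distribution of story points across sprints" idea above is essentially a velocity calculation, which is where calendar time falls out; a sketch with invented numbers:

```python
# After the first ~3 sprints you have a velocity, and time falls out of it.
# The sprint totals below are invented for illustration.
from math import ceil
from statistics import mean

points_per_sprint = [21, 18, 24]      # points completed in the last 3 sprints
velocity = mean(points_per_sprint)    # 21.0 points per sprint

def sprints_needed(backlog_points: int) -> int:
    """Whole sprints required to burn down the remaining backlog."""
    return ceil(backlog_points / velocity)

sprints_needed(100)   # 5 sprints, i.e. ~10 weeks on a two-week cadence
```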
Reply by Keith Novak, Dec 27, 2018 2:49 PM:
Ana,

I asked for a very specific reason. The term "complexity" is often misused as a technical term and confused with "complication"; the two are completely different things. Complexity describes the number of things interacting. By contrast, complication is a perceived level of difficulty. For example, a "housing complex" includes a number of houses, making it complex, yet it might not be complicated at all. A very unusual house can be very complicated while, by itself, not being complex.

I say that because I find the Fibonacci sequence often scales well with complexity, but not necessarily with the level of difficulty, that is, with how complicated a problem is. Part of the reason is that the level of difficulty can be very subjective if it is not defined in precise terms.

While that might sound like me just trying to sound smart, it is a very important distinction in this situation. You are trying to find a relation:
x = f(y), where x is the time a story takes to complete and y is the "complexity" of the problem.

If you are using a subjective degree of difficulty, the Fibonacci function may not actually be statistically meaningful, so your estimating ability may be hampered by using the wrong function. You might be better served by quantifying the level of difficulty more rigorously, or by finding a different function than the Fibonacci sequence for estimating time from difficulty.

This is where a coefficient of correlation is useful. If x does not correlate with y, you either have the wrong function, or you have too much variability in how you define y.