The Agile Enterprise

by
This blog will explore agility at the enterprise level, examining how agile principles can be implemented throughout the organization—and in departments other than IT.


Benchmarking in Agile

Categories: Metrics

Benchmarking is a hot topic in the "waterfall" world. I had the chance to work on a process improvement project with people who contributed to the ISO standards. It is a fascinating world: the world of Function Points. As in standard project management, there are two main groups, IFPUG and COSMIC. They are in healthy competition and, in my view, complementary rather than opposed.

I see questions about metrics in many Agile forums, and I sense a fear of good metrics. It looks like everybody wants to prove that Agile is great, that moving to Agile was a smart choice and that everything is now delivered faster and cheaper. I know two groups that claim that using their approach projects can deliver double the scope in half the time. Neither has proof that it can be done. From a Lean Six Sigma perspective, even a 10-15% improvement is hard to achieve unless what you did before was extremely inefficient. And even in that case, if the team was that inefficient and didn't know it, it is unlikely that adopting Agile will change the situation.

Teams try to compare "velocity" using Story Points, and that's how the gaming starts: splitting backlog items, changing the number of story points during the sprint, or inflating them at Sprint Planning.

I haven't yet heard of anyone proving that an adaptive approach is more efficient than a planned approach. Stories like "we failed the project first with waterfall, then we delivered the scope using Agile" need more detail; maybe the success of the second attempt was built on the lessons learned from the failure. I put a few projects back on track by moving from "Scrum" to PMBoK. It worked because the organisation and the project team were not ready for Agile, and the move to Agile was an excuse for not understanding their own business. They had heard that in Agile there is no need for documentation, that they can change their mind every month and so on, but they didn't know that all those "benefits" come at a cost and with the risk that after four months you can be at the same point as when you started.

Is benchmarking possible or relevant in Agile? I believe the answer is yes. Agile is (or should be) a process improvement initiative: "better ways". But we need metrics to prove that we are on the right track. If the organisation is really Agile there will be trust, and the project team won't game the metrics. That is rare, though, especially when you are under pressure to prove that Agile works.

My recommendation is that organisations should baseline their delivery process before jumping on the Agile train: metrics like the time from a request to actually starting work on it, and from the request to the production release. Organisations can also baseline the overhead required, and that's where things may get complicated :).
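
As a minimal sketch of such a baseline (the request log, field names and dates below are hypothetical, not from a real project), the two durations could be computed like this:

    from datetime import date
    from statistics import median

    # Hypothetical request log: the dates are illustrative, not real project data.
    requests = [
        {"requested": date(2019, 1, 7), "started": date(2019, 1, 21), "released": date(2019, 3, 4)},
        {"requested": date(2019, 1, 14), "started": date(2019, 2, 11), "released": date(2019, 3, 18)},
        {"requested": date(2019, 2, 4), "started": date(2019, 2, 18), "released": date(2019, 4, 1)},
    ]

    # Time from the request until someone actually starts working on it.
    wait_days = [(r["started"] - r["requested"]).days for r in requests]
    # Time from the request until the change reaches production.
    lead_days = [(r["released"] - r["requested"]).days for r in requests]

    print("Median wait before work starts:", median(wait_days), "days")
    print("Median request-to-release time:", median(lead_days), "days")

Measured once before the transition and again a few quarters in, these two numbers give a far more honest picture than any comparison of Story Point velocities.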

In a traditional project, called "waterfall", the overhead is the cost of the PM, the BA, testing and deployment. Can the Agile way of building cover the lack of detailed requirements? Can the team self-organise to the point where there is no need for a manager? Is less or no testing done? Is the deployment process faster and cheaper?

If the answers are predominantly "no", then the organisation may not be ready for Agile.

Posted on: April 05, 2019 02:25 AM | Permalink | Comments (5)

QA and Agile

Categories: Metrics

QA is one of the most misunderstood concepts in Agile. It is one of the relics of bad implementations of the planned approach. First of all, QA doesn't mean testing; that's QC, or quality control. The evolution of the testing concept in software development is very interesting. With all due respect for the good testers I have worked with in my career, the 'tester' role was initially created for developers who were not good at coding.

In the mid-80s there was no tester role in software development. The only role was Analyst Programmer, sometimes with the suffix Senior or Junior indicating the degree of experience rather than skills or salary. In the late 90s, once the PC became standard desk furniture, everybody wanted to write software. From a passionate coder writing software to help himself or his team, this activity became a team effort, and many wanted to become "programmers". Some of those who couldn't became testers, helping the team with mundane tasks like validating that the code did what it was supposed to do before the users were given permission to use it.

From that humble beginning, testing became a significant part of "waterfall", and QC started acting as a guardian of the galaxy, deciding when the developers had done their job and when they hadn't. Finding the smallest bug became a victory in itself, proving that the developers are not as good as they think and that testers can add value to a product.

Then people heard about Quality Assurance and, without understanding what it really is and that it should cover the whole SDLC, relabelled QC as QA, and the former unskilled developer "became" a QA analyst. Everything looked good in the Waterfall kingdom, but in 2001 a group of developers published the Manifesto for Agile Software Development. There is nothing about testing or testers in that document.

Then everybody started using Scrum, a framework with only one delivery role: Developer. No BA, no DBA, no QA/QC/Tester. That is what the real Agile revolution was. When you develop a product, each member of the team must contribute to the creation of the end product. Testing is an overhead, and when a "developer" says that a backlog item is "Done", there should be no need for testing.

What about QA in Agile? Is QA gone? Definitely not, but it becomes continuous improvement and the responsibility of the whole team, PO and SM included. Quality assurance starts with clarity of vision: the PO must have a vision of the product, and everything else will flow from that. Well-defined backlog items, a clear definition of done and technical excellence are all part of QA. In Agile, "near enough is good enough" should be the norm; severity 2, 3, 4, 5, 6... defects should be accepted upfront by everybody. Something that is annoying but doesn't stop the users will be fixed as soon as possible, probably in the next iteration (Sprint).

Posted on: April 05, 2019 01:06 AM | Permalink | Comments (2)

Will Scrum ever be a methodology?

Categories: History

I stumbled over the question "When did Scrum stop being a methodology?" while working on the webinar about the role of the PMO in the Agile Enterprise. It is a very interesting topic, because the role of the PMO is to define the project delivery processes for the organisation. My experience with Agile frameworks and PMOs is that Agile was usually tolerated by the PMO, rarely understood or supported. And even when Agile became a recognised approach in the organisation, the PMO would still act as if it would never last.

To be fair to PMO managers, in recent times they have been reduced to resource managers, hiring and firing PMs, or to a reporting team that consolidates the useless weekly RAG reports into something more appealing to the executives. I had the unrealistic expectation of getting support from the PMO in resolving escalated risks and issues. After all, the PMO manager had access to the layers above the project sponsor, and the duty and the tools to help with the escalation. The most support I ever got was a private agreement from the PMO manager that I was right, but that I wouldn't get any help.

I never considered Agile a methodology but rather a collection of good practices; in relation to the concept of methodology, it is an attribute rather than a methodology in itself.

It was reassuring to find out that even one of the co-creators of Scrum never considered Scrum a methodology:

"When Ken worked on the original paper on Scrum he was CEO of a Project Management Software company selling methodologies and that crept into the paper.

As we rolled Scrum out across the world it became clear that Scrum was a framework for inspecting and adapting to improve productivity, quality, and the work life of team members. It did not have the detailed practices in "methodologies" and was a framework for adopting additional practices that worked in various environments.

The Scrum Guide today calls it a framework. It is the minimal set of features that allow transparency and adaptability to drive performance that exceeds that of competitors." (Jeff Sutherland)

 

Posted on: April 04, 2019 07:38 PM | Permalink | Comments (6)

Velocity, better stop using it!

Categories: Metrics

I see many questions on forums about the "optimal" velocity for a Scrum team. Most teams use a metric that is very relative: Story Points. A "velocity" of 23 points for one team can therefore be worse than 15 points for another team, because there is no way that two teams, even in the same organisation, have the same understanding of what a Story Point is.

I wonder why none of the Agile trainers, coaches and consultants have tried to define what a Story Point is. Or maybe they tried and failed. It is, in my opinion, very interesting that the person credited with inventing them apologised for doing so.

In XP, story points made sense because everything revolved around user stories, but in Scrum, where the backlog contains everything that needs to be done, including technical tasks, defect fixing and so on, story points become a burden.

No wonder the newer frameworks are Lean-Agile, trying to combine Agile happiness with the efficiency of Lean, because at the end of the day we have to pay the bills.

Posted on: April 03, 2019 03:04 AM | Permalink | Comments (16)

Story Points, solution or problem?

Categories: Metrics

Recently I've seen quite a lot of posts describing story points as a mandatory component of the Scrum framework. Besides the fact that the Scrum Guide doesn't mention them, or indeed any approach to sizing backlog items, I think it is useful to highlight a few points regarding Story Points :)

First, the person credited with inventing Story Points, Ron Jeffries, clearly stated that they are an obfuscation of time: although they may encompass complexity, size, risk and so on, at the end of the day they are an estimation of the time needed to complete that story. The big difference is that 3 SP doesn't mean 3 hours or 3 days, but rather similarity to a backlog item (a user story in XP) that the team defined as its baseline for 3 SP.

Another interesting story about Story Points is that they were invented because of a manager/customer who needed a sense of control over the team's progress.

For those really interested in scope sizing, I strongly recommend the Function Points method. It has many flaws, but it has a few excellent advantages over Story Points:

  • It's a measure of size, not time.
  • It's consistent. It can be used to benchmark teams within the same organisation, but also across organisations and even technologies.
  • There is a lot of good-quality historical data for planned projects, wrongly called "waterfall". This data can be used to measure the benefits of Agile.
  • Many metrics derived from Function Point benchmarking can be applied to Agile: defects/FP, hrs/FP, project/iteration size in FP and so on (see the sketch below).
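
A rough sketch of how those derived metrics could be computed per iteration (the records, field names and numbers below are invented for illustration, not real benchmark data):

    # Hypothetical iteration records: size in Function Points, effort in hours.
    iterations = [
        {"name": "Sprint 1", "fp": 42, "effort_hrs": 510, "defects": 9},
        {"name": "Sprint 2", "fp": 38, "effort_hrs": 455, "defects": 5},
        {"name": "Sprint 3", "fp": 47, "effort_hrs": 560, "defects": 6},
    ]

    for it in iterations:
        hrs_per_fp = it["effort_hrs"] / it["fp"]    # delivery rate
        defects_per_fp = it["defects"] / it["fp"]   # defect density
        print(f'{it["name"]}: {it["fp"]} FP, '
              f'{hrs_per_fp:.1f} hrs/FP, {defects_per_fp:.2f} defects/FP')

Because FP is a size measure independent of who estimated it, the same two ratios can be compared against published industry data or against a "waterfall" baseline.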

One of the areas where Agile needs improvement is metrics. Many teams keep talking about velocity, measured using a relative metric (SP). If the metric is relative, the result will be just as relative.

Another issue with Story Points is that they can be very easily manipulated. Nothing stops a team from splitting a 3-point story into two 2-point stories, or from changing the original 3 into a 5. Added up at Sprint level, the difference can be +/- 1000%.
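
A toy illustration of that inflation, with a hypothetical backlog and made-up numbers:

    # Original estimates for a sprint's backlog items (in Story Points).
    original = [3, 3, 5, 2]                # total: 13 SP

    # The same work after "gaming": each 3 is split into two 2s, the 5 becomes an 8.
    gamed = [2, 2, 2, 2, 8, 2]             # total: 18 SP

    inflation = (sum(gamed) - sum(original)) / sum(original) * 100
    print(f"Same scope: {sum(original)} SP -> {sum(gamed)} SP ({inflation:.0f}% inflation)")

The scope has not changed at all, yet the reported "velocity" has grown, which is exactly why cross-sprint or cross-team comparisons based on SP are so easy to distort.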

But there are also many advantages if story points are used wisely. That takes time and maturity.

  • Keep the estimation (no quotes, because it is an estimation rather than a sizing) within the team. Using story points to impress the PO, SM or any other manager (PO and SM are managers) is wrong and will always lead to gaming.
  • Use the estimation process as a knowledge-sharing opportunity. If there are significant differences between team members, allow time for explanation.
  • Use the estimation process as a risk-identification opportunity. Significant differences are a clear indication that one of the team members didn't understand the item.
  • Use the results to measure estimation maturity and skills for the team, not for individual team members. If the time needed to deliver items of similar size is consistent, you are doing well as a team.
  • Use the results to improve the process by reducing variability. If the estimation is good, the variance between sprints should be minimal (see the sketch after this list).
  • Don't assign story points to defects; the user never asked for defects. You are better off quantifying the time spent fixing defects in each sprint than trying to measure defects/SP.
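
A minimal sketch of that variability check, assuming the team records actual hours against the Story Points delivered in each sprint (all numbers below are invented):

    from statistics import mean, pstdev

    # Hypothetical per-sprint records: Story Points delivered and actual hours spent.
    sprints = [
        {"sp": 21, "hours": 300},
        {"sp": 18, "hours": 270},
        {"sp": 24, "hours": 330},
        {"sp": 20, "hours": 290},
    ]

    hours_per_sp = [s["hours"] / s["sp"] for s in sprints]

    # A small spread relative to the average suggests consistent estimation sprint to sprint.
    print("hours/SP per sprint:", [round(x, 1) for x in hours_per_sp])
    print(f"coefficient of variation: {pstdev(hours_per_sp) / mean(hours_per_sp):.0%}")

If that ratio drifts wildly from sprint to sprint, the problem is usually the estimation process, not the team's speed.
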
Posted on: March 19, 2019 04:02 AM | Permalink | Comments (5)