
Disciplined Agile

by Scott Ambler, Glen Little, Mark Lines, Valentin Mocanu, Daniel Gagnon, Michael Richardson, Joshua Barnes, Kashmir Birk


Strategies for Capturing Quality Requirements


Quality requirements, also known as non-functional requirements (NFRs), quality of service (QoS) requirements, or technical requirements, address concerns such as reliability, availability, security, privacy, and many other aspects of quality.  The following diagram, which overviews architectural views and concerns, provides a great source of quality requirement types (the list of concerns).  Good sources for quality requirements include your enterprise architects and operations staff, although any stakeholder is a potential source.

Figure 1. Architectural views and concerns.

Architecture Views and Concerns

Why Are Quality Requirements Important?

Stakeholders may describe quality requirements at any time, but it’s particularly important to focus on them during your initial scoping efforts in Inception, as you can see in the goal diagram for Explore Scope below (Figure 2).  Considering quality requirements early in the lifecycle is important because:

  1. Quality requirements drive important architecture decisions. When you are identifying your technical strategy, you will often find that the NFRs are the primary drivers of your architecture.
  2. Quality requirements drive aspects of your test strategy. Because they tend to be cross-cutting, and because they drive important aspects of your architecture, they also drive important aspects of your test strategy.  For example, security requirements drive the need for security testing, performance requirements drive the need for stress and load testing, and so on. These testing needs in turn may drive aspects of your test environments and your testing tool choices.
  3. Quality requirements drive acceptance criteria for functional requirements (such as stories).  Quality requirements are typically system-wide, so they apply to many, and sometimes all, of your functional requirements.  Part of ensuring that your solution is potentially consumable each iteration is ensuring that it fulfills its overall quality goals, including applicable quality requirements.  This is particularly true for life-critical and mission-critical solutions.

Capturing Quality Requirements

Figure 2 depicts the goal diagram for Explore Scope.  As you can see, there are several strategies for exploring and potentially capturing quality requirements.

Figure 2. The goal diagram for Explore Scope (click to enlarge).

Explore Scope process goal

Let’s explore the three strategies, which can be combined, for capturing quality requirements:

  1. Technical stories.  A technical story is a documentation strategy in which the quality requirement is captured as a separate entity that is meant to be addressed in a single iteration.  Technical stories are, in effect, the quality-requirement equivalent of user stories. For example, “The system will be unavailable to end users no more than 30 seconds a week” and “Only the employee, their direct manager, and manager-level human resource people have access to salary information about said employee” are both technical stories.
  2. Acceptance criteria for individual functional requirements.  Part of the strategy of ensuring that a work item is done at the end of an iteration is verifying that it meets all of its acceptance criteria.  Many of these acceptance criteria will reflect quality requirements specific to an individual usage requirement, such as “Salary information is read-only accessible by the employee”, “Salary information is read-only accessible by their direct manager”, “Salary information is read/write accessible by HR managers”, and “Salary information is not accessible to anyone without specific access rights”.  So, in effect, quality requirements are implemented because they become part of your “done” criteria (a test-automation sketch of this approach appears below).
  3. Explicit list.  Capture quality requirements in a separate artifact, apart from your work item list.  This provides a reminder of the issues to consider when formulating acceptance criteria for your functional requirements.  In the Unified Process this artifact was called a supplementary specification.

Of course a fourth option would be to not capture quality requirements at all.  In theory this would work in very simple situations, but it clearly runs a significant risk of the team building a solution that doesn’t meet the operational needs of its stakeholders.  This is often a symptom of a team working with only a small subset of its stakeholder types (e.g. working with end users but not operations staff, senior managers, and so on).
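To show how the acceptance-criteria strategy (option 2 above) can turn quality requirements into part of your “done” criteria, here is a minimal sketch in Python using pytest, based on the salary-access example. The can_view_salary() helper and the role structure are hypothetical stand-ins for your own domain model and access-control API; only the access rules themselves come from the example above.

```python
import pytest

# Hypothetical stand-ins for the people in the salary example.
EMPLOYEE = {"id": 1, "manager_id": 2}
DIRECT_MANAGER = {"id": 2, "role": "manager"}
OTHER_MANAGER = {"id": 3, "role": "manager"}
HR_MANAGER = {"id": 4, "role": "hr_manager"}
COWORKER = {"id": 5, "role": "developer"}

def can_view_salary(viewer, employee):
    """Hypothetical access rule: only the employee, their direct manager,
    and manager-level HR staff may read the employee's salary information."""
    return (
        viewer["id"] == employee["id"]
        or viewer["id"] == employee.get("manager_id")
        or viewer.get("role") == "hr_manager"
    )

# Each row is one acceptance criterion; together they form part of the
# "done" criteria for any work item that touches salary information.
@pytest.mark.parametrize("viewer, allowed", [
    (EMPLOYEE, True),         # read access for the employee themselves
    (DIRECT_MANAGER, True),   # read access for their direct manager
    (HR_MANAGER, True),       # read access for manager-level HR staff
    (OTHER_MANAGER, False),   # no access for an unrelated manager
    (COWORKER, False),        # no access for anyone else
])
def test_salary_access_acceptance_criteria(viewer, allowed):
    assert can_view_salary(viewer, EMPLOYEE) is allowed
```

Because these criteria run as part of the regression suite, a work item cannot be declared done while it violates the quality requirement they encode.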


Posted by Scott Ambler on: January 23, 2018 01:17 PM | Permalink | Comments (0)

Managing Requirements Dependencies Between Agile/Lean Teams and Traditional/Waterfall Teams


Like it or not, functional dependencies occur between requirements.  This can happen for many reasons, as we discussed in Managing Requirements Dependencies Between Teams, and there are several strategies for resolving such dependencies.  In this blog posting we explore what happens when a functional dependency between two requirements exists AND one requirement is implemented by an agile/lean team and another by a traditional/waterfall team.

In our example, requirement X depends on requirement Y.  Neither requirement has been implemented yet (if requirement Y had already been implemented, and better yet deployed into production, the point would be moot).  When we refer to the “agile team”, that team may be following any one of the lifecycles supported by DAD (Basic/Agile, Advanced/Lean, Continuous Delivery, or Exploratory/Lean Startup).

 

Scenario 1: An Agile/Lean Team Depends on a Traditional Team

In this scenario X is being implemented by an agile team and Y is being implemented by a traditional team.  From the point of view of the agile team, this is very risky for the following reasons:

  1. The traditional team is likely working on a longer time frame.  Disciplined agile teams produce a potentially consumable solution (potentially shippable software in Scrum parlance) on a regular basis, at least every few weeks.  A traditional team typically delivers a working solution over a much longer time frame, often measured in quarters.  The implication is that because Y is being developed by a traditional team it may be many months until it is available, compared to several weeks if it was being developed by an agile team.  This potentially adds schedule risk to the agile team.
  2. The traditional team may not make their deadline.  According to the Standish Group’s Chaos Report, the average traditional team comes in at almost twice its original estimate (e.g. a project originally estimated at 6 months of work takes almost a year).  Similarly, the December 2010 State of the IT Union survey found that traditional teams were much more likely than agile teams to miss their deadlines.  By taking a dependency on the deliverable of a traditional team, an agile team effectively increases its schedule risk.
  3. The traditional team may struggle to deliver something that is “agile friendly”.  Agile teams routinely develop well-written, high-quality software that is supported by a robust regression test suite and, where needed, concise supporting documentation.  Although traditional teams can also choose to deliver similar artifacts, very often their code isn’t as well supported by regression tests and their documentation may be overly detailed (and thereby more likely to be out of date and difficult to maintain).  In other words, there is potential for quality risk to be injected into the agile team’s work.
  4. The traditional team may not deliver.  There is always the risk that the traditional team doesn’t implement Y at all (traditional teams often need to reduce the scope of their deliveries in order to meet their commitments), or that if they do implement Y it is done too late to be useful.

 

There are several strategies the agile team may decide to take:

  1. Negotiate a delivery date with the traditional team. Once the agile team has identified the dependency they should collaborate with the traditional team to determine the implementation schedule for Y.  The agile team now has a release/schedule dependency on the traditional team which is a risk and should be treated as such.  The agile team’s high-level release plan should show a dependency on the delivery of Y and their risk log (if they have one) should also capture this risk.  The agile team should stay in contact with the traditional team throughout construction to monitor the progress of the development of Y.  The agile team should also attempt to negotiate early delivery of Y so that they may integrate with it, and test appropriately, as soon as possible.
  2. Collaborate to develop Y.  One way for the agile team to make it attractive for the traditional team to implement Y earlier than they normally would is to pitch in and help to do the work.
  3. Rework X to remove the dependency.  One of the general strategies discussed in Managing Requirements Dependencies Between Teams was to rework X so that it no longer depended on Y.  This may mean that you reduce the scope of X or it may mean that you deliver part of X now and wait to deliver the rest of X once Y is available.
  4. Reschedule the implementation of X.  Another general strategy is to deprioritize X and implement it after Y is eventually deployed.  This is a realistic option if Y is about to be implemented soon, say in the next few months, but often unrealistic otherwise.
  5. Implement Y.  When the lead time is substantial, the agile team may choose to implement the functionality themselves.  This can be viable when the agile team has the skills, experience, and resources to do the work.  This strategy runs the risk of Y being implemented twice, once by each team, and potentially inconsistently.  To avoid this sort of waste the agile team will want to negotiate with the traditional team to take the work over from them.

 

Scenario 2: A Traditional Team Depends on an Agile/Lean Team

In this scenario X is being implemented by a traditional team and Y by an agile team.  From the point of view of the traditional team, this might be seen as risky for the following reasons:

  1. They may not understand how a disciplined agile team actually works. Many traditional teams are still concerned about the way that they believe agile teams work, often because they perceive agile to be undisciplined or ad hoc in nature, when the exact opposite is true.  The implication is that the agile team will need to explain to the traditional team how they work, why they work that way, and what types of deliverables they will produce.
  2. They may want traditional deliverables from the agile team.  Disciplined agile teams will produce high quality code, a regression test suite for that code, and concise supporting documentation.  Traditional teams may believe that they also want detailed requirements and design specifications, not realizing that the tests produced by the agile team can be considered as executable specifications for the production code.  The implication is that the two teams will need to negotiate what the exact deliverable(s) will be.
  3. They may struggle with any changes to the interface.  Agile teams are used to working in an evolutionary manner where the requirements, design, and implementation change over time.   Traditional teams, on the other hand, will often strive to define the requirements and design up front, baseline them, and then avoid or prevent change to them from that point onwards.  These different mindsets towards change can cause anxiety within the traditional team, the implication being that the agile team may need to be a bit more strict than they usually would be when it comes to embracing change.

The fact is that scenario 2, a traditional team relying on a disciplined agile team, is very likely an order of magnitude less risky than the opposite (scenario 1).   Either scenario will prove to be a learning experience for the two teams, particularly the one that relies on the other team.  Going into the situation with an open mind and a respectful strategy will greatly increase the chance that you’ll work together effectively.

 

Posted by Scott Ambler on: July 21, 2014 07:49 PM | Permalink | Comments (0)

Managing Requirements Dependencies Between Agile and Lean Teams


Sometimes functional dependencies occur between requirements that are being implemented by different teams.  For example, requirement X depends on requirement Y and X is being worked on by team A and Y is being worked on by team B.  This generally isn’t a problem when requirement Y is implemented before requirement X, is a bit of an annoyance if they’re being implemented in parallel (the two teams will need to coordinate their work), and an issue if X is being implemented before Y.  For the rest of this posting we will assume that X depends on Y, X is just about to be implemented, and Y has not yet been implemented.  Previously in Managing Dependencies in Agile Teams we discussed strategies for addressing such dependencies, including reordering the work or mocking out the functionality to be provided by Y.  In this posting we explore the implications of managing requirements dependencies between an agile team and a lean team.

Managing requirements dependencies between an agile team and a lean team is similar to managing dependencies between two agile teams, although there are important nuances.  These nuances stem from differences in the ways that agile and lean teams manage their work.  Figure 1 depicts how agile teams do so, organizing work items (including requirements) as a prioritized stack (called a product backlog in Scrum).  Work is pulled off the stack in batches that reflect the amount of work the team can do in a single iteration/sprint.  With agile teams the entire stack is prioritized using the same strategy: Scrum teams will prioritize by business value, whereas disciplined agile teams are more likely to consider a combination of business value and risk. Figure 2 shows that lean teams manage their work as an options pool, pulling one work item out of the pool at a time.  Lean teams will prioritize work items on a just-in-time (JIT) basis, determining which work is the highest priority at the point in time that they pull the work into their process.  As you can see in Figure 2, they will consider a variety of factors when determining what work is the most important right now.  A small code sketch contrasting the two structures follows Figure 2.

Figure 1. Agile work management strategy.

Work Item List

 

Figure 2. Lean work management strategy.

Work Item Pool
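To make the contrast concrete, here is a minimal sketch, in Python, of the two work-management structures described above. The class and field names are illustrative assumptions rather than DAD terminology; the point is simply that the agile list is ranked up front and consumed in iteration-sized batches, while the lean pool is ranked just-in-time and consumed one item at a time.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkItem:
    priority: float               # lower value = more important (e.g. a value/risk score)
    name: str = field(compare=False)

class AgileWorkItemList:
    """Prioritized stack (work item list / product backlog): ranked up front."""
    def __init__(self, items):
        self.items = sorted(items)                    # the whole stack is prioritized
    def pull_iteration_batch(self, capacity):
        batch, self.items = self.items[:capacity], self.items[capacity:]
        return batch                                  # an iteration's worth of work

class LeanWorkItemPool:
    """Options pool: items are ranked just-in-time, one pull at a time."""
    def __init__(self, items):
        self.items = list(items)
    def pull_next(self, scoring_fn):
        item = min(self.items, key=scoring_fn)        # decide priority at pull time
        self.items.remove(item)
        return item

items = [WorkItem(2.0, "story A"), WorkItem(1.0, "story B"), WorkItem(3.0, "defect C")]
print(AgileWorkItemList(items).pull_iteration_batch(2))          # a small batch of work
print(LeanWorkItemPool(items).pull_next(lambda i: i.priority))   # a single item, chosen JIT
```

The batch-versus-single-item difference is exactly what creates the coordination nuances discussed next.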

 

When an agile team depends on a lean team, the challenge is relatively straightforward.  Because lean teams take on work in very small batches, one item at a time, they have much more granular control over when they implement something.  As long as the agile team lets them know in a timely manner that the functionality needs to be implemented, it shouldn’t be a problem.  For example, if the agile team is disciplined enough to do look-ahead modelling (an aspect of Scrum’s backlog grooming efforts) then it should be able to identify, an iteration or two in advance, that it has a dependency on the lean team.  At that point the product owner of the agile team should talk with the appropriate person(s) on the lean team to let them know about the dependency so that the lean team can prioritize that work appropriately (perhaps treating it as something to be expedited).

When a lean team depends on an agile team it’s a bit harder, but not much, to address.  This time the challenge is the batch size of the work that the teams take in.  The lean team is taking in work in a very granular manner, one item at a time, whereas the agile team is taking in work in small batches (perhaps two weeks’ worth of work at a time).  From a lean point of view this injects wait time into the process; even if it is only a couple of weeks, that wait time is still considered waste (muda).  Once again the solution is for the lean team to identify the dependency ahead of time via look-ahead modelling and negotiate with the agile team.

To summarize, requirements dependencies do in fact occur.  There are strategies to minimize their impact, in particular implementing, and better yet deploying, the functionality that is being depended upon before the dependent functionality is implemented, but sometimes it just doesn’t work out that way.  So your team will need to be prepared to manage the requirements dependencies that it has on other teams, and similarly be prepared to support other teams that have dependencies on it.  In this series of blog postings we’ve seen how Agile<=>Agile and Agile<=>Lean dependencies can be managed; next up is Agile/Lean<=>Traditional.

Posted by Scott Ambler on: July 07, 2014 04:49 AM | Permalink | Comments (0)

Exploring Initial Scope on Disciplined Agile Teams

When a disciplined agile project or product team starts, one of the process goals it will likely need to address is Explore Initial Scope.  This is sometimes referred to as initially populating the backlog in the Scrum community, but as you’ll soon see there is far more to it than just that.  This is an important goal for several reasons.  First, your team needs at least a high-level understanding of what it is trying to achieve; it doesn’t just start coding.  Second, in the vast majority of organizations IT delivery teams are asked fundamental questions such as: What are you trying to achieve?  How long will it take?  How much will it cost?  Having an understanding of the scope of your effort is important input into answering those sorts of questions.

The process goal diagram for Explore Initial Scope is shown below.  The rounded rectangle indicates the goal, the squared rectangles indicate issues or process factors that you may need to consider, and the lists in the right-hand column represent potential strategies or practices that you may choose to adopt to address those issues.  The lists with an arrow to the left are ordered, indicating that in general the options at the top of the list are preferable, from an agile point of view, to the options towards the bottom.  The highlighted options (bolded and italicized) indicate default starting points for teams that want a good place to start but don’t want to invest a lot of time in process tailoring right now.  Each of these practices/strategies has advantages and disadvantages, and none is perfect in all situations, which is why it is important to understand the options available to you.

Explore Initial Scope Process Goal

Let’s consider each process issue:

  • Choose the level of detail.  How much effort, if any, should you put into capturing the requirements?  A small co-located team may find that capturing user stories on index cards is sufficient, whereas a team that is geographically distributed across several locations will find that it needs to capture point-form notes about each story using an electronic tool, and a team in a life-critical regulatory environment may need to capture even more detail, to the point of doing big requirements up front (BRUF).
  • Explore usage. Although much ado has been made of user stories, and they can be applied quite effectively in a range of situations, the fact is that they’re only one of several options for your team to explore usage of the solution (scenarios, personas, and use cases being other options).
  • Explore the domain. Some teams will choose to do some domain modeling via a data model or class diagram, as well as address other views as appropriate.
  • Explore the process. Many teams will discover that they need to explore the overall workflow, or business process, supported by their solution so as to help them better understand their usage requirements.
  • Explore user interface (UI) needs. Many agile teams will also choose to create user interface prototypes, either low-fidelity UI prototypes using paper or even high-fidelity UI prototypes using a prototyping tool or code, particularly when they face a complex domain.
  • Explore general requirements.  There are several types of functional requirements modeling techniques that can be valuable that don’t fit well into the previous categories.
  • Explore non-functional requirements.  How will non-functional requirements pertaining to availability, security, performance, and many other issues be addressed?  Teams in straightforward situations may find that capturing them as technical stories is sufficient.  Teams facing technical complexity, and sometimes even domain complexity, soon discover that they need a more sophisticated strategy.
  • Apply modeling strategy(ies).  How will your team go about working with stakeholders to elicit/discover their perceived needs?  Will they hold informal modeling sessions in an agile modeling space?  Will they hold formal modeling sessions, perhaps following a Joint Application Design (JAD) strategy?  Will they interview people one-on-one?  Combinations thereof?
  • Choose a work item management strategy.  Early in the project you will want to determine how you intend to address changing stakeholder needs throughout the project as this will affect how you address the other process issues in this list.  For example, do you intend to adopt Scrum’s value-driven product backlog strategy, DAD’s risk-value driven work item list, a lean work item pool strategy (as followed by DAD’s lean lifecycle), or even a formal approach?  A team in a strict regulatory environment may be required to have a more formal approach to change management than a team without this restriction.

I want to share two important observations about this goal.  First, this goal, along with Identify Initial Technical Strategy, Coordinate Activities, and Move Closer to a Deployable Release, seems to take the brunt of your process tailoring efforts when working at scale.  It really does seem to be one of those Pareto situations where 20% addresses 80% of the work; more on this in a future blog posting.  As you saw in the discussion of the process issues, the process tailoring decisions that you make regarding this goal will vary greatly based on the various scaling factors.  Second, as with all process goal diagrams, the one above doesn’t provide an exhaustive list of options, although it does provide a pretty good start.

I’m a firm believer that a team should tailor its strategy, including its team structure, its work environment, and its process, to reflect the situation it finds itself in.  When it comes to process tailoring, process goal diagrams not only help teams to identify the issues they need to consider, they also summarize the potential options available to them.  Agile teams with even a minimal amount of process guidance such as this are in a much better position to tailor their approach than teams that are trying to figure it out on their own.  The DA process decision framework provides this guidance.

Posted by Scott Ambler on: July 17, 2013 07:34 AM | Permalink | Comments (0)

Strategies for Verifying Non-Functional Requirements

Early in the lifecycle, during the Inception phase, disciplined agile teams will invest some time in initial requirements envisioning and initial architecture envisioning. One of the issues to consider as part of requirements envisioning is identifying non-functional requirements (NFRs), also called quality of service (QoS) or simply quality requirements. The NFRs will drive many of the technical decisions that you make when envisioning your initial architectural strategy. These NFRs should be captured somehow and implemented during Construction. It isn’t sufficient to simply implement the NFRs; you must also validate that you have done so appropriately. In this blog posting I overview a collection of agile strategies that you can apply to validate NFRs.

A mainstay of agile validation is the philosophy of whole team testing. The basic idea is that the team itself is responsible for validating its own work; team members don’t simply write some code and then throw it over the wall to testers to validate. For organizations new to agile this means that testers sit side-by-side with developers, working together and learning from one another in a collaborative manner. Eventually people become generalizing specialists, T-skilled people, who have sufficient testing skills (along with other skills).

Minimally, your developers should be performing regression testing to the best of their ability, adopting a continuous integration (CI) strategy in which the regression test suite(s) are run automatically many times a day.  Advanced agile teams will take a test-driven development (TDD) approach, where a single test is written just before the production code that fulfills it.  Regardless of when the development team writes its tests, either before or after writing the production code, some tests will validate functional requirements and some will validate non-functional requirements.
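As a concrete illustration, here is a minimal sketch of a developer-written test for a non-functional requirement, in Python with pytest, that could run in a CI regression suite alongside the functional tests. The search_catalog() function and the 200 ms threshold are illustrative assumptions, not something prescribed by DAD.

```python
import time

def search_catalog(term):
    """Hypothetical operation under test; replace with a call into your own system."""
    return [item for item in ("apple", "apricot", "banana") if term in item]

def test_search_meets_response_time_nfr():
    # NFR (illustrative): a catalogue search must complete within 200 ms.
    started = time.perf_counter()
    results = search_catalog("ap")
    elapsed_ms = (time.perf_counter() - started) * 1000
    assert results, "search should return matches for a known term"
    assert elapsed_ms < 200, f"search took {elapsed_ms:.1f} ms, exceeding the 200 ms NFR"
```

A test like this runs on every build, so a regression against the performance requirement is caught within hours rather than at the end of the lifecycle.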

Whole team testing is great in theory, and it is a strategy that I wholeheartedly recommend, but in some situations it proves insufficient.  It is wonderful to strive to have teams with sufficient skills to get the job done, but sometimes the situation is too complex to allow that.  There are some types of NFRs that require significant expertise to address properly: NFRs pertaining to security, usability, and reliability, for example.  To validate these types of requirements, worse yet even to identify them, requires skill and sometimes even specialized (read: expensive) tooling.  It would be a stretch to assume that all of your delivery teams will have this expertise and access to these tools.

Recognizing that whole team testing may not sufficiently address validating NFRs, many organizations will supplement their whole team testing efforts with parallel independent testing.  With this approach a delivery team makes its working builds available to a test team on a regular basis, minimally at the end of each iteration, and the testers perform the types of testing on it that the delivery team is either unable or unlikely to perform.  Knowing that some classes of NFRs may be missed by the team, independent test teams will look for those types of defects.  They will also perform pre-production system integration testing and exploratory testing, among other things.  Parallel independent testing is also common in regulatory compliance environments.

From a verification point of view, some agile teams will perform either formal or informal reviews.  Experienced agilists prefer to avoid reviews, due to their inherently long feedback cycle which increases the average cost of addressing the defects they find, in favor of non-solo development strategies such as pair programming and modeling with others.  The challenge with non-solo strategies is that managers unfamiliar with agile techniques, or perhaps still overly influenced by disproven traditional theories of yesteryear, believe that non-solo strategies reduce team productivity.  When done right, non-solo strategies increase overall productivity, but the political battle required to convince management to allow your team to succeed often isn’t worth the trouble.

Another strategy for validating NFRs is code analysis, both dynamic and static.  There is a range of analysis tools available to you that can address NFR types such as security, performance, and more.  These tools will not only identify potential problems with your code, many of them will also provide summaries of what they found: metrics that you can leverage in your automated project dashboards.   Leveraging tool-generated metrics in this way is a technique that IBM calls Development Intelligence, and it is highly suggested as an enabler of agile governance in DAD. Disciplined agile teams will invoke code analysis tools from their CI scripts to support continuous validation throughout the lifecycle.
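As one possible shape for this, here is a minimal sketch of a CI step, written in Python, that invokes two static-analysis tools and publishes their results as simple dashboard metrics. The choice of tools (flake8 and bandit) and the metrics-file layout are assumptions; substitute whatever analyzers and dashboard your organization actually uses.

```python
import json
import subprocess

def run_analysis(command):
    """Run one analysis tool and capture a few coarse metrics from its output."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "command": " ".join(command),
        "exit_code": result.returncode,
        "finding_lines": len(result.stdout.splitlines()),
    }

if __name__ == "__main__":
    metrics = [
        run_analysis(["flake8", "src"]),               # static code-quality analysis
        run_analysis(["bandit", "-r", "src", "-q"]),   # static security analysis
    ]
    # Publish a metrics file that an automated project dashboard can pick up.
    with open("analysis-metrics.json", "w") as fh:
        json.dump(metrics, fh, indent=2)
    # Fail the build if any analyzer reported problems, keeping validation continuous.
    if any(m["exit_code"] != 0 for m in metrics):
        raise SystemExit("code analysis reported issues; see analysis-metrics.json")
```

Because the script runs on every integration, the resulting metrics provide the kind of continuously refreshed evidence that lightweight agile governance relies on.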

Your least effective validation option is end-of-lifecycle testing; in the traditional development world this would be referred to as a testing phase.  The problem with this strategy is that you in effect push significant risk, and significant cost, to the end of the lifecycle.  It has been known for several decades now that the average cost of fixing defects rises the longer it takes you to identify them, motivating you to adopt the more agile forms of testing that I described earlier.  Having said that, I still run into organizations in the process of adopting agile techniques that haven’t really embraced agile and, as a result, still leave most of their testing effort to the least effective time to do such work.  If you find yourself in that situation you will need to validate NFRs in addition to functional requirements.

To summarize, you have many options for validating NFRs on agile delivery teams.  The secret is to pick the right one(s) for the situation that you find yourself in.  The DA toolkit helps to guide you through these important process decisions, describing your options and the trade-offs associated with each one.


 

Posted by Scott Ambler on: October 23, 2012 07:49 AM | Permalink | Comments (0)