Project Management

Disciplined Agile

by Scott Ambler, Glen Little, Mark Lines, Valentin Mocanu, Daniel Gagnon, Michael Richardson, Joshua Barnes, Kashmir Birk, Klaus Boedker, Mike Griffiths

This blog contains details about various aspects of PMI's Disciplined Agile (DA) tool kit, including new and upcoming topics.

Disciplined Agile Values for Data Management

Data Management Mindset

There are several values that are key to your success when transforming to a leaner, more agile approach to Data Management. Taking a cue from the Disciplined Agile Manifesto, we’ve captured these values in the form of X over Y. While both X and Y are important, X proves to be far more important than Y in practice. These values are:

  1. Evolution over definition. The ability to safely and quickly evolve an existing data source, either to extend it to support new functionality or to fix quality issues with it, is absolutely paramount in today’s hyper-competitive environment. Yes, having defined data models and metadata describing them is also important, but nowhere near as important as being able to react to new business opportunities. Luckily, agile database techniques that enable the safe evolution of production data stores have long been proven in practice.
  2. Holistic organization over Data Management. Earlier we said that data is the lifeblood of your organization. Yes, blood is important but so is your skeleton, your muscles, your organs, and many other body parts. We need to optimize the whole organizational body, not just the “data blood.” Traditional Data Management approaches often run aground because they locally optimize for data concerns, whereas a DA approach to Data Management recognizes that we must optimize the overall whole. This implies that sometimes we may need to sub-optimize our strategy from a data point of view, for the sake of organizational level optimization.
  3. Sufficiency over perfection. Data sources, like many other IT assets, need to be good enough for the task at hand. The old saw “perfect is the enemy of good” clearly applies in the data realm – too much time has been lost, and opportunities squandered, while development teams were forced to wait on Data Management teams to create (near) perfect models before being allowed to move forward. Traditional data professionals mistakenly assume that production databases are difficult to evolve, and as a result they strive to get their designs right the first time so as to avoid very painful database changes in the future. The Agile Data method has shown this assumption to be wrong: it is in fact straightforward, and desirable, to evolve production databases. A side effect of this revelation is that we no longer need to strive for perfect, detailed models up front. Instead we can do just enough up-front thinking to get going in the right direction and then evolve our implementation (including data sources) over time as our understanding of our stakeholders’ needs evolves.
  4. Collaboration over documentation. We’ve known for decades that the most effective way to communicate information is face-to-face around a shared sketching environment, and that the least effective is to provide detailed documentation to people. The implication is that we need to refocus our efforts to be more collaborative in nature. As data professionals we need to get actively involved with solution delivery teams: to share our knowledge and skills with those teams, and to enable them to become more effective in working with data. Yes, we will still need to develop and sustain data-related artifacts, but those artifacts should be lightweight and better yet executable in nature.
  5. Cross-functional people over specialized staff. Agilists have come to realize that people are more effective when they are cross-functional (also known as T-skilled or generalizing specialists). Although specialists are very skilled in a narrow aspect of the overall process, you need many of them to accomplish anything of value, and as a result the overall workflow tends to be error prone, slow, and expensive. The other extreme is the generalist, someone who knows a little bit about all aspects of the overall process. The challenge with generalists is that although they’re good at collaborating with others, they don’t have the skills to produce concrete value themselves. We need the best of both worlds – a generalizing specialist with one or more specialties so that they can add value AND a general knowledge so that they can collaborate effectively with others and streamline the overall effort.
  6. Automation over manual processes. The only way that we can respond quickly to marketplace opportunities is to automate as much of the bureaucracy as we possibly can. Instead of creating detailed models and documents and then reviewing potential changes against them we capture our detailed specifications in the form of executable tests. This is quickly becoming the norm for specifying both the requirements and designs of code, and the same test-driven techniques are now being applied to data sources. Continuous integration (CI) and continuous deployment (CD) are also being applied to data sources, contributing to improving overall data quality and decreasing the time to safely deploy database updates into production.

As you can see, we’re not talking about your grandfather’s approach to Data Management. Organizations are now shifting from the slow and documentation-heavy bureaucratic strategies of traditional Data Management towards the collaborative, streamlined, and quality-driven agile/lean strategies that focus on enabling others rather than controlling them.

Posted by Scott Ambler on: April 27, 2017 01:09 PM | Permalink | Comments (0)

User Stories For Data Warehouse/Business Intelligence: A Disciplined Agile Approach


For teams applying agile strategies to Data Warehouse (DW)/Business Intelligence (BI) development, it is fairly common to take a Disciplined Agile (DA) approach to DW/BI due to DA’s robustness. A common question that comes up is: how do you write user stories for DW/BI solutions? Here are our experiences.

First, user stories should focus on business value. In general, your stories should answer the question “What value is being provided to the user of this solution?” In the case of a DW/BI solution, they should identify what question the DW/BI solution could help someone to answer, or what business decision the solution could support. So “As a Bank Manager I would like the Customer Summary Report” and “Get Customer data from CustDB17 and TradeDB” would both be poorly written user stories because they’re not focused on business value. However, “As a Bank Manager I would like to know what services a given customer currently has with BigBankCo so that I can identify what to upsell them” is. The solution implementing that story may require you to create the Customer Summary Report (or there may be better ways of getting at that information) and it may require you to get data from CustDB17 and TradeDB (and perhaps other sources).

Here are some examples of user stories for a University DW/BI solution:

  • As a Professor I would like to analyze the current grades of my students so that I can adjust the difficulty of future tests and assignments
  • As a Student I would like to know the drop out rates by course and professor from previous years to determine the likely difficulty of my course choices
  • As a Registrar I would like to know the rate of enrollment within a class over time to determine its popularity
  • As a Student I would like to know the estimated travel time between back-to-back classes so that I can determine whether I can make it to class on time

Second, user stories on their own aren’t sufficient.  User stories/epics are only one view into your requirements, albeit an important one.  You’ll also want to explore the domain (e.g. do some data modelling), the user interface (e.g. explore what reports should look like), the business process (e.g. what are the overall business process(es) supported by your DW/BI solution), and technical views (e.g. how does the data flow through your solution architecture).

Third, data requirements are best addressed by domain modelling.  As you are exploring the requirements for your DW/BI solution you will hear about data-oriented requirements.  So capture them in your domain model as those sorts of details emerge over time.  Consider reading Agile/Evolutionary Data Modeling: From Domain Modeling to Physical Data Modeling for a detailed discussion of this topic.

Fourth, technology issues are best captured in architecture models.  You will also hear about existing legacy data sources and in general you will need to capture the architecture of your DW/BI solution.  This sort of information is best captured in architecture models, not in stories.

A few more thoughts:

  1. User stories are only one option for usage modelling.  There are several ways that we can explore usage; user stories/epics are just one of them.  You could also create light-weight use cases, usage scenarios, or personas, to name a few strategies.
  2. Take a usage-driven approach.  The primary modelling artifact on a Disciplined Agile DW/BI team is the usage model (e.g. your user stories) not the data model.  Data modelling is a secondary consideration compared with usage modelling, and this can be a difficult concept for experienced DW/BI professionals to come to terms with.
  3. Keep your initial modelling light-weight.  The agile rules still apply to DW/BI solutions – keep your modelling efforts sufficient for the task at hand and no more.
  4. Get trained in this.  This is a complex topic.  If you’re interested in training, we suggest that you consider DA 210: Disciplined Agile Data Warehouse (DW)/Business Intelligence (BI) Workshop.
Posted by Scott Ambler on: April 04, 2016 11:36 PM | Permalink | Comments (0)

Disciplined Agile Data Management: A Goal-Driven Approach

Categories: Data Management, DW/BI

This posting, the latest in a series focused on a disciplined agile approach to data management, overviews the activities that a disciplined agile data management team may perform. The Disciplined Agile (DA) toolkit promotes an adaptive, context-sensitive strategy for data management.  DA does this via its goal-driven approach, which indicates the process factors you need to consider, a range of techniques or strategies for addressing each process factor, and the advantages and disadvantages of each technique.  In this blog we present the goal diagram for the Data Management process blade and overview its process factors.

The following diagram overviews the potential activities associated with disciplined agile data management.

[Diagram: Goal - IT - Data Management]

The process factors that you need to consider for data management are:

  1. Improve data quality.  There is a range of strategies that you can adopt to ensure data quality.  The agile community has developed concrete quality techniques – in particular database testing, continuous database integration, and database refactoring – that prove more effective than traditional strategies.  Metadata management proves to be fragile in practice, as the overhead of collecting and maintaining the metadata proves far greater than the benefit of doing so.  Extract, transform, and load (ETL) strategies are commonplace for data warehouse (DW) efforts, but they are in effect band-aids that do nothing to fix data quality problems at the source.
  2. Evolve data assets.  Several categories of data prove to be true assets over the long term: test data that supports your testing efforts; reference data, also called lookup data, that describes relatively static entities such as states/provinces, product categories, or lines of business; master data that is critical to your business, such as customer or supplier data; and metadata, which is data about data.  Traditional data management tends to be reasonably good at this, although it can be heavy-handed at times and may not have the configuration management discipline that is common within the agile community.
  3. Ensure data security.  This is a very important aspect of security in general.  The fundamental issue is to ensure that people get access to only the information that they should and that information is not available to people who shouldn’t have it.  Data security must be addressed at both the virtual and physical levels.
  4. Specify data structures.  At the enterprise level your models should be high level – lean thinking is that the more complex something is, the less detailed your models should be to describe it.  This is why it is better to have a high-level conceptual model than a detailed enterprise data model (EDM) in most cases.  Detailed models, such as physical data models (PDMs), are often needed for specific legacy data sources by delivery teams.
  5. Refactor legacy data sources. Database refactoring is a key technique for safely improving the quality of your production databases.  While delivery teams perform the short-term work of implementing a refactoring, there is organizational work to be done to communicate the refactoring, monitor usage of the deprecated schema, and eventually remove the deprecated schema and any scaffolding required to implement the refactoring.
  6. Govern data.  Data, and the activities surrounding it, should be governed within your organization.  Data governance is part of your overall IT governance efforts.
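To make the database refactoring factor (point 5) concrete, here is a minimal sketch of a rename-column refactoring with a transition period, using Python's sqlite3 module. The table, columns, and the rename itself are hypothetical; the pattern is the one described above: introduce the new schema alongside the old, add scaffolding to keep both in sync, monitor usage of the deprecated schema, and eventually remove both it and the scaffolding.

```python
import sqlite3

# Hypothetical legacy schema: customer.phone is being renamed to
# customer.phone_number via a transition period, not a big-bang change.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, phone TEXT)")
conn.execute("INSERT INTO customer VALUES (1, '555-0100')")

# Step 1: introduce the new column alongside the deprecated one
# and backfill it from the existing data.
conn.execute("ALTER TABLE customer ADD COLUMN phone_number TEXT")
conn.execute("UPDATE customer SET phone_number = phone")

# Step 2: scaffolding - a trigger keeps the columns in sync so that
# applications writing the old column and applications reading the
# new one both keep working during the deprecation window.
conn.execute("""
    CREATE TRIGGER sync_phone AFTER UPDATE OF phone ON customer
    BEGIN
        UPDATE customer SET phone_number = NEW.phone WHERE id = NEW.id;
    END
""")

# An application still writing the deprecated column...
conn.execute("UPDATE customer SET phone = '555-0199' WHERE id = 1")
# ...is transparently reflected in the new column.
row = conn.execute(
    "SELECT phone, phone_number FROM customer WHERE id = 1").fetchone()
print(row)  # ('555-0199', '555-0199')

# Step 3 (after the deprecation window): once monitoring shows nothing
# reads 'phone' any more, drop the trigger and the old column.
```

The trigger is the "scaffolding" the text refers to; removing it, and the deprecated column, is the organizational-level work that outlives the delivery team's original change.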

Looking at the diagram above, traditional data management professionals may believe that some activities are missing.  These activities may include:

  • Enterprise data architecture.  This is addressed by the Enterprise Architecture process blade.  The DA philosophy is to optimize the whole.  When data architecture (or security architecture, or network architecture, or…) is split out from EA it often tends to be locally optimized and as a result does not fit well with the rest of the architectural vision.
  • Operational database administration.  This is addressed by the Operations process blade, once again to optimize the operational whole over locally optimizing the “data part.”

Future blog postings in this series will explore the workflow associated with data management.




Posted by Scott Ambler on: March 04, 2016 05:17 AM | Permalink | Comments (0)

Why Data Management?

Categories: Data Management, DevOps, DW/BI


According to the Data Management Body of Knowledge, data management is “the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.”  In our opinion this is a very good definition; unfortunately, the implementation of data management strategies tends to be challenged in practice due to the traditional, documentation-heavy mindset. This mindset tends to result in onerous, bureaucratic strategies that more often than not struggle to support the goals of your organization.

Having said that, data management is still very important to the success of your organization.  The Disciplined Agile (DA) toolkit promotes a pragmatic, streamlined approach to data management that fits into the rest of your IT processes – we need to optimize the entire workflow, not sub-optimize our data management strategy.  We need to support the overall needs of our organization, producing real value for our stakeholders. Disciplined agile data management does this in an evolutionary and collaborative manner, via concrete data management strategies that provide the right data at the right time to the right people.

There are several reasons why a disciplined agile approach to data management is important:

  1. Data is the lifeblood of your organization.  Without data, or more accurately information, you quickly find that you cannot run your business. Having said that, data is only one part of the overall picture.  Yes, blood is important but so is your skeleton, your muscles, your organs, and many other body parts.  We need to optimize the whole organizational body, not just the “data blood.”
  2. Data is a corporate asset and needs to be treated as such.  Unfortunately, the traditional approach to data management has resulted in data of sketchy quality: inconsistent, incomplete, and often not available in a timely manner.  Traditional strategies are too slow moving and heavyweight to address the needs of modern, lean enterprises.  To treat data like a real asset we must adopt concrete agile data quality techniques such as database regression testing to discover quality problems and database refactoring to fix them.  We also need to support delivery teams with lightweight agile data models and agile/lean data governance.
  3. People deserve to have appropriate access to data in a timely manner. People need access to the right data at the right time to make effective decisions.  The implication is that your organization must be able to provide the data that an individual should have access to in a streamlined and timely manner.
  4. Data management must be an enabler of DevOps.  As you can see in the following diagram, Data Management is an important part of our overall Disciplined DevOps strategy. A successful DevOps approach requires you to streamline the entire flow between delivery and operations, and part of that effort is to evolve existing production data sources to support new functionality.

[Diagram: Disciplined DevOps]
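Evolving existing production data sources in step with delivery, as point 4 calls for, is typically supported by scripting schema changes as ordered, replayable migrations that run automatically on every deployment. Here is a minimal sketch using Python's sqlite3 module; the migration names and SQL statements are hypothetical.

```python
import sqlite3

# Each schema change is a named, ordered migration script. The names
# and SQL below are hypothetical examples.
MIGRATIONS = [
    ("001_create_customer",
     "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"),
    ("002_add_customer_email",
     "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    # Track which migrations have already run, so the same script is
    # safe to execute against any environment on every deployment.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {r[0] for r in conn.execute("SELECT name FROM schema_version")}
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: re-running applies nothing new
cols = [r[1] for r in conn.execute("PRAGMA table_info(customer)")]
print(cols)  # ['id', 'name', 'email']
```

Because the migration runner is idempotent, the same pipeline can promote a database change from development through test to production, which is exactly the streamlined delivery-to-operations flow the DevOps discussion above describes.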

In future blog postings we will explore the goal diagram of the Data Management process blade and the associated workflow.


Posted by Scott Ambler on: March 02, 2016 11:46 AM | Permalink | Comments (0)
