
Disciplined Agile

by Scott Ambler, Glen Little, Mark Lines, Valentin Mocanu, Daniel Gagnon, Michael Richardson, Joshua Barnes, and Kashmir Birk


Information Security: You Have Choices

Security process blade

Security is one of the process blades of Disciplined DevOps. The focus of the Security process blade is to describe how to protect your organization from both information/virtual and physical threats. This includes procedures for security governance, identity and access management, vulnerability management, security policy management, and incident response. As you would expect, these policies will affect your organization’s strategies around change management, disaster recovery and business continuity, solution delivery, and vendor management. For security to be effective it has to be a fundamental aspect of your organizational culture.

The following process goal diagram overviews the potential activities associated with disciplined agile security. These activities are performed by, or at least supported by, your security team (often called an information security, or infosec, team).

Figure 1. The Security process goal diagram (click to enlarge).


The process factors that you need to consider for implementing effective security are:

  1. Ensure security readiness. How do you ensure that your environment has been built to withstand the evolving security threats that you face?  
  2. Enable security awareness. How do you help your staff to become knowledgeable about security threats, how to avoid attacks, and how to deal with them when they occur?
  3. Monitor security. How do you identify when you are under attack (for most organizations the answer is constantly) and, more importantly, how you’re being attacked?
  4. Respond to threats. When an attack occurs what will you do to address it?
  5. Secure physical assets. How will you protect physical assets such as buildings, vehicles, and equipment?  By implication, how will you ensure the security of your people?
  6. Secure IT perimeter. How will you secure access to your IT systems?
  7. Secure the network. How will you ensure the security of digital communications?
  8. Secure IT endpoints. How will you secure access to devices such as phones, workstations, and other I/O devices?
  9. Secure applications. How will you address security within the applications/systems of your organization?
  10. Secure data. How will you ensure the validity and privacy of the data within your organization?
  11. Govern security. How will you motivate, enable, and monitor security activities within your organization?


Posted by Scott Ambler on: August 07, 2019 06:03 AM | Permalink | Comments (0)

Database DevOps at Agile 2018

On Tuesday, August 7 I facilitated a workshop about Database DevOps at the Agile 2018 conference in San Diego.  I promised the group that I would write up the results here in the blog. This was an easy promise to make because I knew that we’d get some good information out of the participants and sure enough we did.  The workshop was organized into three major sections:

  1. Overview of Disciplined DevOps
  2. Challenges around Database DevOps
  3. Techniques Supporting Database DevOps

Overview of Disciplined DevOps

We started with a brief overview of Disciplined DevOps to set the foundation for the discussion. The workflow for Disciplined DevOps is shown below.  The main message was that we need to look at the overall DevOps picture to be successful in modern enterprises, that it is more than Dev+Ops.  Having said that, our focus was on Database DevOps.

The workflow of Disciplined DevOps

Challenges around Database DevOps

We then ran a From/To exercise where we asked people to identify what aspects of their current situation they’d like to move away from and what they’d like to move their organization towards.  The following two pictures (I’d like to thank Klaus Boedker for taking all of the following pics) show what we’d like to move from and to, respectively (click on them for a larger version).

Database DevOps Moving From

 

Database DevOps Moving To

I then shared my observations about the challenges with Database DevOps, in particular the cultural impedance mismatch between developers and data professionals, the quality challenges we face regarding data, the lack of testing culture and knowledge within the data community, and the mistaken belief that it’s difficult to evolve production data sources.

 

Techniques Supporting Database DevOps

The heart of the workshop was to explore technical techniques that support Database DevOps.  I gave an overview of several Agile Data techniques so as to give people an understanding of how Database DevOps works, then we ran an exercise.  In the exercise each table worked through one of six techniques (there are several supporting techniques that the groups didn’t work through), exploring:

  • The advantages/strengths of the technique
  • The disadvantages
  • How someone could learn about that technique
  • What tooling support (if any) is needed to support the technique.

Each team was limited to their top three answers to each of those questions, and each technique was covered by several teams.  Each of the following sections has a paragraph describing the technique, a picture of the Strategy Canvas the participants created, and my thoughts on what the group produced. It’s important to note that some of the answers in the canvases contradict each other because each canvas is the amalgam of work performed by a few teams, and each of the teams may have included people completely new to the practice/strategy they were working through.

 

Vertical Slicing

Just like you can vertically slice the functional aspects of what you’re building, and release those slices if appropriate, you can do the same for the data aspects of your solution.  Many traditional data professionals don’t know how to do this, in large part because traditional data techniques are based on waterfall-style development where they’ve been told to think everything through up front in detail.  The article Implementing a Data Warehouse via Vertical Slicing goes into this topic in detail.

Database DevOps - Vertical Slicing

The advantage of vertical slicing is that it enables you to get something built and into the hands of stakeholders quickly, thereby shortening the feedback cycle.  The challenge is that you can lose sight of the bigger picture (therefore you want to do some high-level modeling during Inception to get a handle on it). To be successful at vertically slicing your work, you need to be able to incrementally model, or better yet agilely model, and implement that functionality.

 

Agile Data Modeling

There’s nothing special about data modeling: you can perform it in an agile manner just like you can model other things in an agile manner.  Once again, this is a critical skill to learn and can be challenging for traditional data professionals due to their culture of heavy “big design up front (BDUF)”.  The article Agile Data Modelling goes into the details of, and more importantly provides an example of, how to do this.

Database DevOps - Agile Data Modeling
The advantage of this technique is that you can focus on what you need to produce now and adapt to changing requirements.  The disadvantage is that existing data professionals tend to be resistant to evolutionary strategies such as this, often because they prefer a heavy up-front approach.  To viably model in an agile manner, including data, you need to be able to easily evolve/refactor the thing that you’re modelling.

 

Database Refactoring

A database refactoring is a simple change to your database that improves the quality of its design without changing the semantics of the design (in a practical manner).  This is a key technique because it enables you to safely evolve your database schema, just like you can safely evolve your application code.  Many traditional data professionals believe that it is very difficult and risky to refactor a database, hence their penchant for heavy up-front modeling, but this isn’t actually true in practice.  To understand this, see the article The Process of Database Refactoring which summarizes material from the award-winning book Refactoring Databases.

Database DevOps - DB Refactoring
Database refactoring is what enables you to break the paradigm of “we can’t change the database” held by traditional data professionals.  This technique is what enables data professionals to rethink, and often abandon, most of their heavy up-front strategies from the 1970s.  DB refactoring does, however, require skill and tooling support.  Just like you need automated tests to safely refactor your code, you need an automated regression test suite to safely refactor your database.
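
To make the transition-period idea concrete, here is a small sketch of the classic "rename column" refactoring, using Python's built-in sqlite3 module. This is my own illustration rather than an example from the article, and the table and column names are hypothetical: the old and new columns coexist during the transition, kept in sync by a trigger, until every application has been updated and the old column can be dropped.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (CustomerID INTEGER PRIMARY KEY, FName TEXT);
    INSERT INTO Customer (FName) VALUES ('Ada'), ('Grace');
""")

# Step 1: introduce the new, better-named column alongside the old one.
conn.execute("ALTER TABLE Customer ADD COLUMN FirstName TEXT")

# Step 2: backfill the new column from the old one.
conn.execute("UPDATE Customer SET FirstName = FName")

# Step 3: during the transition period a trigger keeps the two columns
# in sync, so applications reading either column see the same data.
conn.executescript("""
    CREATE TRIGGER SyncFirstName AFTER UPDATE OF FName ON Customer
    BEGIN
        UPDATE Customer SET FirstName = NEW.FName
        WHERE CustomerID = NEW.CustomerID;
    END;
""")

# Old application code still writes to FName...
conn.execute("UPDATE Customer SET FName = 'Augusta' WHERE CustomerID = 1")
# ...and new code reading FirstName sees the change.
print(conn.execute(
    "SELECT FirstName FROM Customer WHERE CustomerID = 1").fetchone()[0])
```

Once the transition window ends and all applications have moved to the new column, you drop the trigger and the deprecated column, completing the refactoring.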

 

Automated Database Regression Testing

If data is a corporate asset then it should be treated as such.  Having an automated regression test suite for a data source helps to ensure that the functionality and the data within a database conforms to the shared business rules and semantics for it.  For more information, see the article Database Testing.

Database DevOps - DB Testing
An automated test suite enables your teams to safely evolve their work because if they break something the automated tests are likely to find the problem.  This is particularly important given that many data sources are resources shared across many applications.  Like automated testing for other things, it requires skill and tooling to implement. To effectively regression test your database in an automated manner you need to include those tests in your continuous integration (CI) approach.
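
As a minimal illustration of the idea (my own sketch, not part of the workshop materials, with a made-up table and rule), the following uses Python's built-in sqlite3 module to regression-test a business rule that lives in the database itself:

```python
import sqlite3

def make_test_db():
    """Build a throwaway database in a known state for each test run."""
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE Account (
            AccountID INTEGER PRIMARY KEY,
            Balance   NUMERIC NOT NULL CHECK (Balance >= 0)  -- shared business rule
        )""")
    conn.execute("INSERT INTO Account (Balance) VALUES (100)")
    return conn

def test_balance_may_not_go_negative():
    """The database itself must reject data that violates the shared rule."""
    conn = make_test_db()
    try:
        conn.execute("UPDATE Account SET Balance = -1 WHERE AccountID = 1")
        raise AssertionError("business rule was not enforced")
    except sqlite3.IntegrityError:
        pass  # the invalid update was rejected, as expected

test_balance_may_not_go_negative()
print("regression tests passed")
```

In practice tests like this run in your CI pipeline against every schema change, precisely because a shared data source is touched by many applications and a regression in it affects all of them.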

 

Continuous Database Integration

Database changes, just like application code changes, should be brought into your continuous integration (CI) strategy. It is a bit harder to include a data source because of the data.  The issue is side effects from tests – in theory a database test should put the database into a known state, do something, check to see whether you get the expected results, then put the database back into its original state.  It’s that last part that’s the problem, because all it takes is one test forgetting to do so and there’s the potential for side effects across tests. So a common strategy is to rebuild (or restore, or a combination thereof) your development and test databases every so often to decrease the chance of this.  You might choose to do this in your nightly CI run, for example. For more information, see the book Recipes for Continuous Database Integration.

Database DevOps - Continuous DB Integration 
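
One common way to get a database back into a known state is to rebuild it from scratch by replaying versioned migration scripts. The sketch below illustrates that general idea with made-up table names, using Python's built-in sqlite3 module; a real nightly CI job would drop and recreate a server-hosted database and apply migration files kept under version control:

```python
import sqlite3

# Ordered migration scripts; in a real project these live in version
# control alongside the application code and are applied by your CI job.
MIGRATIONS = [
    "CREATE TABLE SchemaVersion (Version INTEGER NOT NULL)",
    "CREATE TABLE Customer (CustomerID INTEGER PRIMARY KEY, Name TEXT)",
    "ALTER TABLE Customer ADD COLUMN Email TEXT",
]

def rebuild_database():
    """Recreate the test database from scratch so every CI run starts
    from a known state, with no side effects left over from prior tests."""
    conn = sqlite3.connect(":memory:")
    for script in MIGRATIONS:
        conn.execute(script)
    # Record which migration we rebuilt up to.
    conn.execute("INSERT INTO SchemaVersion (Version) VALUES (?)",
                 (len(MIGRATIONS),))
    return conn

conn = rebuild_database()
print(conn.execute("SELECT Version FROM SchemaVersion").fetchone()[0])
```

Because the rebuild replays every migration in order, it also continuously verifies that your migration scripts themselves still work end to end.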

Operational Data Monitoring

An important part of Operations is to monitor the running infrastructure, including databases.  This information can and should be available via real-time dashboards as well as through ad-hoc reporting.  Sadly, I still need to write an article on this, but if you poke around the web you’ll find a fair bit of information.

Database DevOps - Operational Monitoring
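
To give a flavour of what this involves, here is a minimal poller that collects simple health metrics from a database, written with Python's built-in sqlite3 module. This is my own illustrative sketch, not a pattern from the post; real deployments use the database's native monitoring facilities and forward the numbers to a dashboard or alerting system.

```python
import sqlite3
import time

def collect_db_metrics(conn):
    """Collect a few crude health metrics; a real monitor would push
    these to a dashboard or alerting system on a schedule."""
    metrics = {"timestamp": time.time()}
    # Per-table row counts are a simple growth/health indicator.
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # Table names come from the catalog, not user input, so the
        # string interpolation here is safe for this sketch.
        count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        metrics[f"rows:{table}"] = count
    return metrics

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO Orders DEFAULT VALUES")
print(collect_db_metrics(conn)["rows:Orders"])
```
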

 

Concluding Thoughts

This was a really interesting workshop.  We did it in 75 minutes but it really should have been done in a half day to allow for more detailed discussions about each of the techniques.  Having said that, I had several very good conversations with people following the workshop about how valuable and enlightening they found it.

This workshop, plus other training and service offerings around agile database and agile data warehousing skills, is something that we can provide to your organization.  Feel free to reach out to us.

Posted by Scott Ambler on: August 09, 2018 08:54 AM | Permalink | Comments (0)

Building Your IT Support Environment

Categories: DevOps, Support

An important aspect of Support that is easily forgotten is the need to build out your infrastructure to enable your support efforts.  This may include:

  • Creating a support knowledgebase so that your Support Engineers can capture solutions to the problems they solve.
  • Providing end users access to the support knowledgebase to enable self-service.  This access is often limited for privacy reasons – end users should have access to solutions to common problems but not the details of specific incidents.
  • Building a support environment in which to simulate problems.  In some cases, such as an online trading system perhaps, you don’t want your Support Engineers trying to diagnose end-user problems on the live system itself due to the potential side effects of doing so.
  • Installing communication systems such as chat software and a phone/call-in system.
  • Deploying automated support systems such as integrated voice response (IVR) and artificial intelligence (AI)/bots.

Figure 1. High-level architecture for a Support environment (click on it for a larger version).

Posted by Scott Ambler on: November 19, 2017 11:01 AM | Permalink | Comments (0)

The Lean IT Operations Mindset

Mindset

The Disciplined Agile (DA) toolkit describes strategies for how an organization’s IT group can support a lean enterprise.  An important part of this is to have an effective IT operations strategy, and to do that the people involved need to have what we call a “lean IT operations mindset.”  The philosophies behind such a mindset include:

  1. Run a trustworthy IT ecosystem.  At a high level the goal is to “keep the lights on.”  At a detailed level anyone responsible for IT operations wants to run an IT ecosystem that is sufficiently secure, resilient, available, performant, usable, and environmentally friendly.  Part of running a trustworthy ecosystem is monitoring running services so as to identify and hopefully avoid potential problems before they occur.  For some systems, and perhaps for your IT ecosystem as a whole, you may have service level agreements (SLAs) in place with your end users that guarantee a minimum level of trustworthiness.
  2. Focus on the strategic (long-term) over the tactical (short-term).  Anyone responsible for IT operations needs to have a very good understanding of the long-term implications of a decision versus its short-term conveniences.  A classic example of this right now is the preference of people building micro-services to use what they believe to be the best technologies for each service.  This makes a lot of sense from the narrow viewpoint of that service and it often proves to be incredibly convenient, and fun, for the developers because they often get to work with new technologies.  However, from an operational point of view you end up with a mishmash of technologies that must be operated and evolved over time, resulting in a potential maintenance nightmare.  Yes, you will still make some short-term decisions but you should do so intelligently.  Too great a focus on the long term results in a stagnant IT ecosystem; too great a focus on short-term decisions results in operations teams who spend all their time fighting fires.  The long-term technical vision for your organization is developed by your Enterprise Architecture efforts and the long-term business vision comes from your Product Management activities.
  3. Streamline the overall flow of work.  This arguably should be part of everyone’s mindset, but it is particularly important for people doing IT operations work.  IT operations has traditionally been a bottleneck in many organizations, often as a result of the need to run a trustworthy ecosystem and to focus on long-term considerations, hence the need to streamline.  But it isn’t just operational work that we need to streamline: it’s the overall flow of work into, within, and out of IT operations.  In this case we need a disciplined approach to DevOps that takes all aspects of the development-operations lifecycle into account, including the support of multiple development lifecycles (not just continuous delivery), the release management process, and the operational aspects of data management.  Of course, streamlining the flow of work goes beyond development-operations and is an important goal of your organization’s continuous improvement strategy.
  4. Help end-users succeed.  An important goal of people performing operations activities is to ensure that your end users are successfully using your IT systems.  It doesn’t matter how well your systems are built, or how trustworthy they are, if your end users are unable or unwilling to use them effectively.  End users are going to need help – you need to be prepared to provide a support function.
  5. Standardization without stagnation.  The more standardized your IT ecosystem is the easier it will be to run, to release new functionality into, and to find and fix problems if they should arise.  However, too much standardization can lead to stagnation where it becomes very difficult to evolve your ecosystem.  You will need to work very closely with people performing enterprise architecture and product management activities to ensure that you understand the long term vision and are working towards it.
  6. Regulate releases into production.   Most DevOps strategies reflect the viewpoint of a single product team.  But what about the viewpoint of your overall IT ecosystem, which may comprise hundreds of products?  An interesting question to ask is what is the WIP limit for releases across your overall ecosystem?  In other words, what rate of change can your infrastructure, and your stakeholder community, bear?  In the Disciplined Agile (DA) toolkit this philosophy is an important driver of the Release Management process blade.  Furthermore, some regulatory compliance regimes call out a separation of concerns pertaining to release management – the people building a product are not allowed to release the product into production, someone else must make that decision and do the work (even if “the work” is merely pressing a button to run a script).
  7. Sufficient documentation.  Yes, there will be some documentation maintained about your IT ecosystem.  Hopefully this documentation is concise, accurate, and high-level.  Common documentation includes an overview(s) of your infrastructure, release procedures (even if fully automated, there’s still some overview documentation and training), and high-level views of critical aspects of your infrastructure including security, data architecture, and network architecture.  Organizations that operate in regulated industries will of course need to comply with the documentation requirements of the appropriate regulations.  When infrastructure components are discoverable and self-documenting there is less need for external documentation, but there is still a need.  Any documentation that you do create should be maintained under configuration management (CM) control.

Future blog postings in this series about IT operations and support will explore topics such as why you need IT operations and support, what activities you perform, and the workflow of doing such.

Posted by Scott Ambler on: June 01, 2016 10:13 AM | Permalink | Comments (0)