Architecture and Design Practices for Agile Project Management
A major principle within lean and agile methods is “No Big Design Up Front (BDUF)”. Instead, agile teams promote the notion of evolutionary, emergent and iterative design. Proponents of traditional methods believe a comprehensive top-down architecture and design are essential to system quality and project success. However, the lean and agile community has since learned that large architectures and designs are a form of “waste.” That is, traditional teams over-specify the system architecture and design, gold-plating it with unnecessary features that undermine system quality and project success.
How do agile teams perform architecture and design? Is there such a thing? What are its associated practices? When is it performed? How much effort is applied? Try to remember some of the characteristics of agile projects. Their goal is to rapidly produce a precious few system capabilities and customer requirements that have the biggest bang for the buck. That is, produce something with high business value or great return on investment. Furthermore, system quality, project success and customer satisfaction are ultimately achieved by significantly reducing the scope of the system architecture and design.
Less is more for agile projects, which address a smaller scope and have fewer requirements and shorter time horizons. They focus on a few critical customer needs, business functions and capabilities. They are optimized for project success, system quality, customer satisfaction and business value. Conversely, traditional methods are based on the theory of comprehensive, all-encompassing architectures and designs that anticipate every conceivable customer need. However, “wisdom is proved right by her children”: traditional project failure rates were too high, and agile methods emerged in response.
Given these assumptions, the scope of an agile project is limited to a near-term release plan spanning nine, 12 or 15 months (give or take a few). Therefore, an agile architecture and design should be right-sized to fit the scope of the release plan and no more. This is true whether the architecture is created in a traditional top-down or agile bottom-up style. This means that an agile architecture and design can be visualized within the initial release planning phase, when a lightweight plan is created and the most business value to a customer is achievable. Here are some of the practices for agile architecture and design:
Prior Experience (Prior Experience/Architecture Reference Designs/No Big Up-Front Design Necessary): Agile, like traditional methods, assumes developers have prior, pre-defined domain expertise and knowledge of architecture and design solutions. Agile developers should also have a history of past performance, working knowledge and experience with similar solutions (i.e., “I’ve already built data warehouses before, so I intuitively know what to do”). Therefore, agile methods assume that a large, up-front architecture and design phase is not necessary. That is, a chief architect or designer can simply begin implementing customer requirements (user stories) to realize a preconceived architecture. (This is an important assumption. For instance, if you need to build a petabyte-scale solution, a domain expert will know that a terabyte-scale technology such as a relational database management system won’t work.)
- Case Study: On a recent agile project, prior experience with technologies, architectures and designs necessary to satisfy high-performance user stories proved invaluable. Developers were faced with creating a petabyte-scale solution. They knew typical terabyte-scale servers and technologies would not suffice. They devised a high-level architecture, purchased custom computers and obtained middleware necessary for petabyte-scale processing. They then assembled the system, connected it to the enterprise IT infrastructure, loaded it with data and rapidly created a prototype to demonstrate the ability to perform petabyte-scale processing. Their success came on the heels of several failed traditional projects that attempted to use terabyte-scale technologies for petabyte-scale user stories. The project was highly successful and demonstrated the importance of finding developers with the right experience for highly specialized requirements.
Release Planning (Visioning/Strategic Planning/Roadmapping/Capabilities Analysis): Release planning is a lean, lightweight and flexible form of project planning. A number of critical products are created during this stage, such as vision statements, project charters, scope statements and high-level customer requirements. Customer needs may be in the form of enterprise-level capabilities (i.e., very large requirements). These are often called epics or themes. User stories may be grouped into capabilities, epics and themes. Each release may then address one of these groupings. A release plan encompasses multiple groupings over a period of nine, 12 or 15 months (more or less). Multiple teams may be necessary for larger projects (i.e., customer teams, product planning teams and project-level teams). Frequently communicating vision statements is a critical success factor and release planning may be used to create high-level architectures (if necessary).
- Case Study: On a recent agile project, a vision statement, project charter, scope statement and system metaphor were created for developers. System examples were provided to developers, along with commercial-off-the-shelf components and web services. A small list of user stories related to the vision, product description and system metaphor was created and prioritized. The product owner described the product to be built, answered numerous questions and urged the team to “git-r-done” above all else (i.e., spare the bureaucratic practices associated with traditional methods). Developers suggested new user stories; they were reprioritized, and a release plan was created. Developers took it upon themselves to create iteration plans, perform day-to-day self-organization, and take responsibility for implementation details. The team successfully completed the product on time and on budget and cited the clear vision as the most important success factor.
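The “biggest bang for the buck” prioritization described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a tool from the article; the story names, epic groupings, value points and cost estimates are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    epic: str            # grouping: epic, theme or capability
    business_value: int  # relative value points from the product owner
    cost: int            # relative effort estimate from the team

# Hypothetical release-planning backlog
backlog = [
    UserStory("Order a book", "Storefront", business_value=8, cost=3),
    UserStory("Search catalog", "Storefront", business_value=5, cost=2),
    UserStory("Nightly sales report", "Reporting", business_value=3, cost=5),
]

# Rank stories by "bang for the buck" (value divided by cost); releases
# then pick up the top-ranked groupings first.
ranked = sorted(backlog, key=lambda s: s.business_value / s.cost, reverse=True)
for story in ranked:
    print(f"{story.epic}: {story.title} (ratio {story.business_value / story.cost:.2f})")
```

The point of the sketch is only that prioritization is explicit and cheap to redo, so the release plan can be re-ranked whenever the product owner's value estimates change.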
Iteration Zero (Early Architectural Iterations/Early Iterations to Explore Technological Alternatives): Agile teams may establish the IT infrastructure before iterations begin. They may stand up servers, operating systems, middleware, database services, GUIs, utilities and other development tools in advance. They do this in what is known as “iteration zero”, where the first iteration is consumed standing up the IT infrastructure upon which the application is built. Agile teams may even devote two or three iterations for standing up a more complex IT infrastructure. If the system architecture proves to be a sticking point, agile teams may even devote a few iterations to establishing an architecture (i.e., best alternatives for an anti-lock braking system, avionics architecture, network infrastructure, data warehouse, petabyte-scale solution, etc.). The first few iterations may be used to explore technological alternatives (i.e., which commercial web services to use?).
- Case Study: On one agile project, developers had the technical skills to stand up an IT infrastructure. A development environment, tools and technologies were suggested by the customer. The developers stood up servers, operating systems, web servers, middleware, databases and development tools. They also procured Wikis, agile project management tools, version control software and other collaboration tools. Much of this activity was performed during the release planning phase. However, the customer allowed an “iteration zero” as a means of finishing the IT infrastructure. The team rapidly prototyped a proof of concept to verify the IT infrastructure worked. Although this was a successful project, the team felt the customer should have stood up the IT infrastructure to minimize the intense activity prior to and during “iteration zero”. Common IT infrastructure is an oft-cited success factor, especially for distributed teams such as this one.
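The proof-of-concept verification at the end of an “iteration zero” is often just a smoke test: confirm that every piece of the IT infrastructure is reachable before the first real iteration begins. A minimal sketch follows; the service names, hosts and ports are placeholders, not anything from the case study:

```python
import socket

def check_service(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP service at host:port is accepting connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical iteration-zero checklist (hosts/ports are placeholders).
infrastructure = {
    "web server": ("127.0.0.1", 8080),  # e.g., the application server
    "database": ("127.0.0.1", 5432),    # e.g., PostgreSQL
    "wiki": ("127.0.0.1", 8090),        # e.g., the team collaboration tool
}

for name, (host, port) in infrastructure.items():
    status = "up" if check_service(host, port) else "DOWN"
    print(f"{name:10s} {host}:{port} -> {status}")
```

Running a checklist like this at the close of iteration zero gives the team and customer a shared, unambiguous definition of “the infrastructure is stood up.”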
Emergent Design (User-Story-to-User Story Design/Design Patterns/Refactoring/Generalizing): Emergent designs evolve user story to user story and iteration to iteration. A small set of prioritized user stories are implemented regardless of dependencies between them. Developers do not logically order user stories or iterations, or make other architectural assumptions. Emergent design has proven to work in many cases. User stories describe customer needs that encompass a vertical slice of functionality from the GUI to the database (i.e., “I want to order a book”). Therefore, agile developers create a GUI, middleware, database queries, database schema and infrastructure capabilities for each user story in isolation (i.e., autonomous user stories). A barely sufficient architecture is thus created, free from waste, gold-plating and unnecessary functions. This reduces costs, improves quality and maximizes customer satisfaction and business value.
- Case Study: A web developer was asked to build a non-enterprise, departmental-level system. His customer did not know what the requirements were in advance. His first action was to create a multi-year project schedule, which included long requirements, architecture and design phases. Instead, an agile coach advised him to build a rapid prototype of the system based on his best assumptions and demonstrate it to his customer once a week. His customer began feeding him requirements after each demo. The developer finished the system within 90 days to his customer’s satisfaction and began demonstrating it at the enterprise level. His customer was promoted and given a raise for the brilliant new system. The developer also received a substantial cash award for his efforts. He had no prior knowledge of agile methods. He knew how to stand-up an IT infrastructure to support his application, which was a critical success factor.
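A vertical slice like “I want to order a book” can be made concrete with a small sketch: one user story, implemented end to end, with only the schema and logic that story needs. The tables and logic below are invented for illustration (SQLite stands in for the database tier, and a GUI would call `order_book`):

```python
import sqlite3

def create_store() -> sqlite3.Connection:
    """Stand up just enough schema for the 'order a book' story."""
    db = sqlite3.connect(":memory:")
    # Only the tables this story needs -- no speculative enterprise schema.
    db.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, stock INTEGER)")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, book_id INTEGER)")
    return db

def order_book(db: sqlite3.Connection, book_id: int) -> bool:
    """Place an order if the book is in stock; the GUI tier would call this."""
    row = db.execute("SELECT stock FROM books WHERE id = ?", (book_id,)).fetchone()
    if not row or row[0] < 1:
        return False
    db.execute("UPDATE books SET stock = stock - 1 WHERE id = ?", (book_id,))
    db.execute("INSERT INTO orders (book_id) VALUES (?)", (book_id,))
    return True

db = create_store()
db.execute("INSERT INTO books VALUES (1, 'Planning Extreme Programming', 2)")
print(order_book(db, 1))  # True: one copy ordered
```

The next user story (say, “search the catalog”) would add its own slice, and refactoring would then generalize whatever the two slices share.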
Just-Enough Architecture (Lean/Lightweight/Just-in-Time/Lean and Agile Architecture and Design): Just-enough architecture may be needed based on the complexity, scale or scope of the system. A rough blueprint of enterprise or system architecture is created instead of using emergent design (i.e., high-bandwidth communication lines, extraction-transformation-and-load functions, database schema optimized for volume and performance, middleware for interfacing to applications, business objects for data mining and reporting, GUIs for end-user operations, etc.). However, this shouldn’t be months, years or decades (i.e., maybe a few hours, days or weeks). The goal is to establish an architecture, design or technological vision and platform for moving forward that addresses a near-term customer capability. This could be done during release planning, appear as an explicit architecture phase or be performed in a few early iterations.
- Case Study: Database developers may refuse to evolve schemas in an agile and iterative fashion. They want to identify all customer requirements in advance. Then a large schema is developed over a long period of time. Once the perfect schema is done, it will never be changed. Their goal is to establish enterprise-level schemas versus individual departmental databases. However, enterprise databases can be developed in an emergent, iterative, day-to-day style. It is possible to create a just-enough, just-in-time, lean, lightweight and agile enterprise schema that evolves from iteration to iteration. Enterprise repositories may evolve in less frequent cycles, while departmental databases evolve more frequently. Just-enough enterprise schemas yielding near-term business value are both feasible and desirable in today’s environment versus big, up-front, multi-year, multi-million-dollar schemas containing waste to anticipate low-priority customer needs.
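One common way to let a schema evolve iteration to iteration is an ordered list of migrations, each applied exactly once. The following is a minimal sketch of that idea (the tables and release boundaries are invented; real projects would use a migration tool rather than this hand-rolled runner):

```python
import sqlite3

# Each release adds only the schema it needs; migrations are ordered and
# recorded, so the schema grows iteration to iteration without rework.
MIGRATIONS = [
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",         # release 1
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)",  # release 2
    "ALTER TABLE customers ADD COLUMN email TEXT",                        # release 3
]

def migrate(db: sqlite3.Connection) -> int:
    """Apply any migrations not yet recorded; return how many were run."""
    db.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = db.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, statement in enumerate(MIGRATIONS[current:], start=current + 1):
        db.execute(statement)
        db.execute("INSERT INTO schema_version VALUES (?)", (version,))
    return len(MIGRATIONS) - current

db = sqlite3.connect(":memory:")
print(migrate(db))  # applies all pending migrations
print(migrate(db))  # idempotent: nothing left to apply
```

Because each migration is small and recorded, an enterprise repository can adopt the same mechanism on a slower cadence than the departmental databases it serves.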
Product Owner Involvement (Product Owner/Customer/Coach/Mentor/Scrum Master Collaboration): Product owner involvement is a critical success factor. They must create project visions, charters and scope statements. They develop prioritized user stories based on business value, risk and other factors. They frequently visit teams, reinforce vision statements and answer questions on an as-needed basis. They may serve as agile project managers who guide teams with a light touch and heavy doses of emotional intelligence (i.e., people skills). Some view them as a “single wringable neck,” but the better term is “project champion”. They must proactively monitor project performance, get the resources teams need, stave off ornery stakeholders, shield teams from politics, clarify user stories, review designs and evaluate demos. They institute other practices to ensure success and may serve as the chief architect to guide system design (when necessary).
- Case Study: On one agile project, developers spent about one-third to one-half of each iteration clarifying requirements and discussing design alternatives. Some illustrated wire frames, screen shots and other rapid prototypes and spikes. This activity preceded detailed design, implementation, testing and demonstrations. Early design involvement worked better than arguing with developers to overturn design decisions made without earlier interactions; i.e., they were more willing to make design changes than implementation changes. Two system demos were instituted for each iteration: one for the product owner and the other for the customer, to ensure demos went off without a hitch. The customer was hyper-sensitive and would have terminated the project if the demo had failed to satisfy their user stories. The developers’ success was the product owner’s success; likewise, developer failure was the product owner’s failure.
What’s the bottom line? Agile methods have just enough just-in-time emergent architecture and design practices for successfully creating products that satisfy customers and maximize business value. Emergent design minimizes effort, cost, waste, defects, schedules, poor morale, project failure and customer dissatisfaction. It is the antithesis of the traditional all-encompassing, top-down “big design up front (BDUF)” to prematurely anticipate unarticulated customer needs. Big upfront architecture and design is done to lower costs and increase system quality, although it has proven to have the opposite effect.
Emergent design is an efficient and waste-free practice for creating today’s complex systems. However, it is important to note that its success rests on other interrelated practices. Some of these include prior experience, release planning and iteration zero. Oftentimes, developers have inadequate experience, are unfamiliar with release planning or don’t realize the value of standing up an operational IT infrastructure. Sometimes, project leaders do not understand the importance of release planning, creating and communicating crystal-clear vision statements or the critical role they play in day-to-day design.
A few books that come to mind include Planning Extreme Programming by Kent Beck, Agile Estimating and Planning by Mike Cohn and Agile Project Management by Jim Highsmith. Kent’s book gives a 40,000-foot overview, Mike’s book provides step-by-step guidance and Jim’s book provides an overarching framework for adapting release planning to larger and more complex projects. Although there are a few textbooks available on emergent design, just-enough architecture and even lean architecture, the principles of “evolutionary design” as it applies to agile methods have yet to be fully articulated.
In spite of the promise of emergent design as it applies to agile methods, customers, project leaders, developers and other critical project stakeholders are not perfect. Essential non-functional requirements often go unidentified and undocumented. For instance, security, usability, performance, maintainability, scalability, reliability, availability, safety and other critical system characteristics often go undefined. Security requirements are essential for today’s petabyte-scale systems. Usability is at an all-time low, and user-experience design examines the ecosystem of useful services surrounding individual computers.
Customers and developers are so consumed by trying to determine whether a system “can” be built successfully that they rarely ask themselves whether they “should” develop the system. Just because we have the ability to store billions of photographs, credit card numbers, personally identifiable information records and other highly sensitive data doesn’t mean we should. The responsibility of creating complex, high-risk information processing systems has never been greater. Unreliable systems may anger customers, unsecure systems could cost billions of dollars and unsafe systems can cost lives.
Agile project management and its practice of emergent design are a powerful combination that enables developers and customers to successfully build and acquire complex systems that create business value for their enterprises. In the early days, traditional developers believed agile methods sped up development (i.e., shortened cycle time) at the expense of design, quality and long-term maintainability. Developers are now entering an era in the adoption lifecycle of agile methods where we have the evidence to show that we can achieve superior designs, ironclad quality and lower overall total lifecycle costs.
Dr. Rico has been a leader in support of major U.S. government agencies for 25 years. He’s led many Cloud, Lean, Agile, SOA, Web Services, Six Sigma, FOSS, ISO 9001, CMMI and SW-CMM projects. He specializes in IT investment analysis, portfolio valuation and organizational change. He’s been an international keynote speaker, presented at leading conferences, written six textbooks and published numerous articles. He’s also a frequent PMI, APLN, INCOSE, SPIN and conference speaker (http://davidfrico.com).
Yes, you're right, large scale change of enterprise level architecture and infrastructure presents a challenge, especially in today's networked world.
Information system and network architectures of large enterprises and organizations are amalgamations of incremental changes over long periods of time. They are very sensitive, fragile, and subject to breakage with the slightest perturbation.
There are numerous firms today who specialize in helping monolithic enterprises design large, scalable, available, and reliable networks. The problem is that most enterprises are burdened by their legacy infrastructures. Furthermore, they are unable or simply unwilling to change.
In other words, enterprises don't mind if you diagnose and optimize the performance of their existing information systems and networks. However, they don't have the resources or patience to design a new infrastructure from an innovative top-down blueprint.
I guess what we're talking about is the difference between radical vs. incremental change. Radical change is expensive, long, difficult, and even counterproductive. Michael Hammer urged us in the 1990s to "Obliterate the legacy system, rather than automate it!"
However, today, organizational psychologists realize that radical change is counterproductive. It is traumatic and results in good old resistance to change. Therefore, the modern change guru tells us that iterative and incremental change is better and more successful. Incremental change is easier, cheaper, less risky, faster, easier to validate, and involves less resistance to change.
Unfortunately, information system and network designers believe in designing the system right the first time to last for 100 years (at any cost). Then, of course, traditionalists believe the only remaining challenge is to resist any changes to the basic architecture, design, and implementation.
Traditional project management and systems engineering paradigms have evolved to support this low-cost-of-change paradigm. That is, lock down the scope, WBS, schedule, cost, etc. up front and then prevent changes to bring the project to completion within a 5% to 10% margin of error.
There are too many things wrong with the traditional project management and systems engineering paradigms. It's impossible to know 100% of the scope up front; much of what you do document is wrong; it results in too much waste and too many security vulnerabilities; and nothing is prioritized, so no business value is obtained.
Worse yet, when something new of genuine value is identified, it is rejected because it is out of scope. Therefore, the traditional paradigm is double trouble: no initial value and no late value are ever allowed.
There is a concept in agile project management and agile methods called "evolutionary systems engineering," "evolutionary architecture and design," or simply "refactoring." It is really not new; it comes from the old Plan-Do-Check-Act (PDCA) based TQM paradigms. We call it "Continuous Improvement," and it is well embedded into both the lean and agile paradigms.
(It even goes back to Japanese culture, Kaizen, and the Toyota Production System, where perfection is obtained from a never-ending process of perpetual improvement.)
Good Japanese manufacturers have known this for years. Process and product architectures are constantly reengineered and simplified with every product revision. Fewer moving parts in both the manufacturing process and product architectures do wonders for business success. This reduces cost, waste, and defects, and increases quality, reliability, speed, and customer satisfaction.
Plasma displays are an excellent example. In the 1980s, the processes were so complex that manufacturing lines had low yields, high defects, and very high per-unit costs. Now that the manufacturing processes and product architectures have been slowly reengineered with each successive generation, costs are low, yields are high, and reliability is off the scale.
(Cancer survival rates are higher in cities where doctors are more willing to try experimental treatments than in cities locked into 50-year-old cancer treatment regimens; i.e., the cure for cancer is not a singular miracle drug, but the willingness to change, try new things, and attempt a variety of treatments.)
In agile project management and agile methods, we call this "refactoring." One must constantly reengineer, change, and simplify the process and product architectures with each successive iteration. Nothing stays the same; nothing survives or is sacred from iteration to iteration. If it can be reworked, simplified, and made better, then it can, will, and must be changed. This is true even at the enterprise level.
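To make "refactoring" concrete, here is a tiny, contrived Python illustration (the shipping-cost rules are invented for the example): behavior is first pinned by tests, then the branches that accreted over several iterations are simplified without changing what the code does.

```python
# Before: duplicated, branchy code accreted over several iterations.
def shipping_cost_v1(weight, express, international):
    if international:
        if express:
            return weight * 4.0 + 25.0
        return weight * 4.0 + 10.0
    if express:
        return weight * 1.5 + 25.0
    return weight * 1.5 + 10.0

# After: the same behavior, refactored so each rate has exactly one home.
def shipping_cost_v2(weight, express, international):
    rate = 4.0 if international else 1.5      # per-kg rate by destination
    surcharge = 25.0 if express else 10.0     # flat fee by delivery speed
    return weight * rate + surcharge

# The refactoring is safe because tests pin behavior across the change.
for w in (0.0, 2.5, 10.0):
    for e in (False, True):
        for i in (False, True):
            assert shipping_cost_v1(w, e, i) == shipping_cost_v2(w, e, i)
```

The same discipline scales up: whether the "code" is a function, a schema, or an enterprise interface, pin the observable behavior first, then simplify relentlessly.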
So, we have the empirical data to show that big up-front design is impossible, expensive, and leads to failure. We also have the data to show that little or no change suffers from the same fate (i.e., adapt or die). Finally, we have the empirical evidence to show that not only is the cost of change lower with "refactoring," but agile methods also reduce overall costs, increase quality, and increase satisfaction with each successive iteration.
Perhaps the mantra of the 21st century is "adapt or die," "change or die," "iterate or die," etc., rather than the "prevent change or die" that the traditional project management and systems engineering paradigms have carved into our psyches over the last 50 years.
Yes, change is psychologically hard, organizational change for large enterprises is immensely difficult, and quite frankly, the human race is not very good at change (to date).
However, we must break out of the traditional belief that change is bad, and we must become good at change as individuals, teams, groups, business units, enterprises, industry sectors, nations, and as a world community.
Our knowledge as a project management and systems development discipline with respect to the principles of "Evolutionary Architecture and Design" is ridiculously meager. We're slowly coming to the realization that "Successful Architecture and Design is an Amalgamation of Renovations over Long Periods of Time."
We must design our enterprises to accept change, refinement, evolution, adaptation, etc. as a way of life, as both a psycho-sociological phenomenon as well as a technological one. That is, we need agile enterprises as well as agile information systems, infrastructures, and technologies.
NASA is finally learning this lesson as it uplinks new software to its space station every few minutes, rather than designing and hard coding it 30 years in advance. The Internet allows some of this flexibility as well, as Google changes the design of web services like Gmail every few minutes, without having to install any new software in an enterprise's infrastructure (i.e., thin-lightweight-agile-replaceable vs. thick-heavy-locked-in-stone-irreplaceable).
Google can change the architecture and design of its web services on a dime, while 40% to 60% of worldwide personal computers still use Windows XP, a 10-year-old operating system, because the cost of change is perceived to be too high, when it's probably cheaper to upgrade to Windows 7 or 8 today.
We're getting there, be patient, it will come ...
Neil, David, WOW! Lots of great information in this discussion, and it has my head spinning on what to comment on. Thanks!
David, I would like to read your agile security articles, as I believe you are correct that "establishing a minimalist framework" is ideal from an information security perspective, and I'd like to read more of your thoughts in this area. Thanks in advance!
With respect to the discussion around the security of informational resources, the Business Model for Information Security (BMIS), published by ISACA in 2010, found that the lack of an effective culture of security "that supports the protection of information while also supporting the broader aims of the enterprise" impacts all aspects of information security in both the public and private sectors today. The study goes on to point out that while a culture of security always exists in every enterprise, it takes a lot of work to build an "intentional culture of security." I believe this is where the agile methods David has listed can be used to implement and focus on establishing an intentional security culture that can deal with the change management challenges of implementing enterprise solutions, as Neil's comment alludes to. Defined as a pattern of behaviors, beliefs, assumptions, and ways of doing things that promotes security, "the culture of security does not create security, but true security cannot be created in the absence of a supportive culture." I believe security is intrinsic in nature within an enterprise, and security practitioners like myself can use agile methods to build a culture of security into development teams and enterprises from the bottom up to protect information in a sustainable, effective, value-adding way, especially as threats and vulnerabilities evolve over time. Building an understanding of security as an intrinsic part of the culture of an enterprise should be a strategic objective for long-term agility.
Thanks David. Comment as long as the original article - and would love to see more on this!
My current interest is introducing large-scale enterprise architectures to an operating business, with multiple vendor systems, custom systems, standards, and middleware in place. Our environment tends to favour BDUF architectural change, considering it the lesser of two evils. Why?
The current architecture (not well documented) incurs significant costs with each change; with multiple vendors, we (they) are in change control heaven. Also, with so many systems, changes take a long time to ripple across the enterprise, meaning we are often working with systems that have, or have not yet, implemented a change. (For example, recent changes to reporting required new data to be extracted for management information; data of the expected standard was sourced and merged from different systems, yet implementing the changes across all systems will take many months. Some changes are easy, while others have to wait for vendor release schedules a year out or more.)
So Agile, with its iterative approach, is thought to throw up too many change requests, too frequently, and therefore will only lead to runaway costs and increased risk that different vendors will implement changes incompatibly and then fight it out using the architecture as a football. We would be criticised as not knowing what we are doing because the architecture would be subject to constant flux (I know this is partly a communication issue; however, vendor feedback unduly influences management's perspective on how well the IT folk are doing).
I am inclined to favour agile approaches to architecture development - purely because of the desire to deliver value early, and to be able to converge iteratively towards a practical and robust architecture based on actual feedback.
I would be interested in seeing future articles on this problem - how to introduce a more agile enterprise architecture to a complex (and running) enterprise while keeping consequent costs under control. In particular, practical help in ways of thinking through the problems, reducing architectural complexity, or reducing knock on impacts as the architecture changes. I have some prejudices here, such as using information management / MDM as an implementation technique - however I'd appreciate a wider discussion on this, from experts, to widen my appreciation of what's possible.
This is particularly pertinent, as there have been a few high profile businesses (banks, telcos) recently who've had embarrassing and damaging system outages after upgrades, which I feel must have something to do with big-step changes to their architectures and complexity.
Security is a central issue for today's products and services (i.e., information systems). There are over 3 billion Internet users with four or five Internet-enabled devices each (or more). Users number in the billions for Google, Facebook, Yahoo, Amazon, banks, insurance companies, and just about every other enterprise. Gone are the days when we engineered a system for a few hundred users. Given this context, security has come to the forefront, yet few are educated or experienced in its practices, and few methodologies directly incorporate security engineering practices such as threat modeling, secure architectures, secure technologies, secure coding practices, vulnerability testing, etc. Few engineers currently use security engineering practices, vulnerabilities abound, and security breaches of major systems are rampant. It's a silent killer, because most enterprises do not divulge when their systems have been compromised, out of fear, shame, and irresponsibility (i.e., enterprises don't want to lose existing customer bases and highly sensitive market share).
To add insult to injury, some people want to "blame" agile methods for the current crisis and are asking us to fall back on 50 year old methodologies that are responsible for creating the current crisis in the first place (i.e., waterfall, military and government lifecycles, IEEE standards, ISO 9001, CMMI, PSP/TSP, etc.). One pervasive myth is that zero-defect methodologies result in zero-failure systems (i.e., 40 year old code inspections are the only security engineering practice one needs). If that were true, we wouldn't be having this discussion.
It's not just security. Usability and user friendliness are at an all-time low, reliability and availability are elusive and unattainable goals for most small to medium-sized enterprises, and good user experience design is almost unheard of (i.e., a system-of-systems perspective).
That being said, a lot of responsibility falls on the shoulders of today's developers, including those using agile project management and agile methods. That is, systems developers have to create products and services that are secure and user friendly, and that belong to a value-adding ecosystem greater than the sum of its individual parts.
Therefore, today's developers (especially those using agile methods) must have clear experience with security, usability, reliability and availability, and user experience design (among other functional and non-functional requirements). Clear requirements (i.e., epics, themes, features, user stories, etc.) must be established and prioritized. Product and service architectures and technologies that exhibit these properties (and more) must be selected. Methodologies, practices, and tools should be established and used to maximize these properties.
Over 90% of today's products and services are what are known as "software-intensive systems." That is, over 90% of system functions are performed by software, not hardware. Software engineers have been saying that since the 1960s, but it has never been truer (or more widely acknowledged) than it is today. Therefore, it's imperative to take an internal (software-centric) view of product and service architecture and development. That is, if the code modules have not been designed using security engineering practices and vulnerabilities abound, then the software is very likely to be the source of the next security incident.
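To make the software-centric point concrete, here is a minimal sketch of one secure coding practice (parameterized queries) that lives entirely inside a code module. The `users` table and the inputs are hypothetical illustrations, not anything from the article:

```python
import sqlite3

# Toy in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_input = "alice' OR '1'='1"  # a classic SQL injection attempt

# Vulnerable pattern: string concatenation lets the input rewrite the query.
vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
# Executing `vulnerable` would match every row in the table.

# Secure pattern: a bound parameter is treated strictly as data, never as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] : the injection string matches no user
```

The point is that this kind of defect is invisible at the architecture diagram level; it only shows up when you take the internal, code-level view the comment argues for.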
Given this context, all of the principles described in this article still apply:
• Prior Experience: Experience with architectures, technologies, and practices for security, usability, reliability, availability, user experience design, etc.
• Release Planning: Visions, epics, themes, features, and user stories for secure, usable, reliable, available, value-adding ecosystems, etc.
• Iteration Zero: Early exploration and establishment of architectures and technologies exhibiting properties of security, usability, reliability, availability, user experience, etc.
• Emergent Design: Implement only the system functions that are needed, in priority order, to avoid building in inferior security, usability, reliability, availability, user experience, etc.
• Just Enough Architecture: Establish just enough architecture, design, and technologies for security, usability, reliability, availability, user experience design, etc. (within the horizon of the near-term release plan).
• Product Owner Involvement: Utilize product owners, agile coaches, and scrum masters with experience designing systems that exhibit properties of security, usability, reliability, availability, user experience design, etc.
Remember, 80% of the value is in the first 20% of the requirements (and the lion's share of the security vulnerabilities are in the other 80% of lower-priority, unneeded requirements). We need to get out of the habit of gold-plating our architectures to last 100 years. This leads to waste, delay, cost overruns, project failure, inferior system performance, unsatisfied customers, loss of market share, etc. Recent statistics show that over 95% of system functions are never used at all (i.e., in Microsoft Office, Oracle, corporate intranets, etc.).
Building functions that will never be used increases the attack surface size, number of vulnerabilities, and, of course, number of security incidents. Microsoft's products have become increasingly secure over the last decade due to a variety of reasons. They install only the base features, they turn off unneeded features, and they create new products and services with fewer features. (Of course, they also employ security engineering practices such as threat modeling, secure coding, vulnerability testing, and engineering to the lowest level of security.)
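The "install only the base features, turn off the rest" idea can be sketched as an off-by-default feature registry, so each enabled feature is attack surface accepted knowingly. The `FeatureRegistry` class and the feature names are hypothetical, purely for illustration of the secure-by-default principle, not Microsoft's actual mechanism:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRegistry:
    """Hypothetical registry where every feature ships disabled."""
    _flags: dict = field(default_factory=dict)

    def register(self, name: str) -> None:
        self._flags[name] = False  # off by default: no silent attack surface

    def enable(self, name: str) -> None:
        if name not in self._flags:
            raise KeyError(f"unknown feature: {name}")
        self._flags[name] = True  # a deliberate opt-in, not a default

    def attack_surface(self) -> list:
        # Only enabled features contribute code paths an attacker can reach.
        return [name for name, on in self._flags.items() if on]

registry = FeatureRegistry()
for feature in ("core_editing", "macro_execution", "remote_templates"):
    registry.register(feature)
registry.enable("core_editing")  # opt in to exactly what's needed
print(registry.attack_surface())  # → ['core_editing']
```

The design choice mirrors the comment's argument: unused functions that are never built (or never enabled) cannot contribute vulnerabilities.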
Agile methods, combined with sound security engineering practices, establish a minimalist framework that is ideal for creating products and services exhibiting properties of security, usability, reliability, availability, user experience design, etc.
Just say "NO" to the historical paradigm of gold-plating, over-engineering, elongated requirements engineering phases, big up-front architectures and designs, 15- to 30-year project lifecycles, cost and schedule overruns, poor system performance, and, most of all, the development of a monolithic attack surface ripe and replete with ridiculously voluminous vulnerabilities leading to unmanageable levels of security incidents.
I guess I just wrote my next Gantthead article. I have dozens of articles on agile security engineering practices. Feel free to email me and I'll send them to you ...
How would you tackle pervasive architectural needs such as security? On its own, security is not a separate user story, yet it must be part of every function. And to be robust, security needs a clear architecture uniformly implemented across every function. Doesn't this need an iteration zero or early emergence?
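As an illustrative aside, one common way the cross-cutting concern described above shows up in code is a single, centrally defined check applied uniformly to every business function, so no individual user story implements security ad hoc. The decorator, role names, and functions below are hypothetical, a sketch of the pattern rather than a prescribed architecture:

```python
import functools

def require_role(role):
    """Wrap any business function with the same, centrally defined check."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            # One security policy, enforced identically across every function.
            if role not in user.get("roles", ()):
                raise PermissionError(f"{fn.__name__} requires role {role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("accounts")
def close_account(user, account_id):
    return f"closed {account_id}"

admin = {"name": "pat", "roles": ("accounts",)}
print(close_account(admin, "A-17"))  # → closed A-17
```

Establishing a mechanism like this is exactly the kind of decision an iteration zero is suited for: the check itself evolves, but its uniform placement is fixed early.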
I really appreciated the knowledge and detailed insights into these agile projects and the methodology used to generate success. I now have a clear view of how agile works for architecture and design and how I can contribute my prior experience and knowledge in this team context. I'll be studying this article as a reference for years to come! Thanks Dr. Rico!
George E Jones Jr, CISM CRISC CISSP