Process/Project DAD - Distributed Application Development Process
Description

The Distributed Application Development Process is the result of a commitment to continuous learning, refinement of experience, and improvement of process as new ideas have been developed and new technologies have been employed. It is a collection of experiences and best practices taken from real-world development engagements, providing development teams with access to shared experience and a proven, repeatable process. The Distributed Application Development Process encompasses modern design principles and proven practices to facilitate the development task and provide developers with a blueprint for building robust, correct distributed applications.
Technically, distributed application development is based on a multi-tiered development architecture. In its simplest form, with two tiers, a distributed application follows the client/server protocol: a set of rules that specifies the behavior of two collaborating processes. In a client/server relationship, one process (the client) initiates the interaction by issuing a request to a second process (the server). The server process awaits a request from the client and, on receipt of that request, performs a service and returns a response (or result) to the client. The server is capable of handling requests from multiple clients and is responsible for coordinating and synchronizing responses.
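The request/response relationship above can be sketched with Python's standard socketserver module. The service itself (upper-casing the request payload) is a hypothetical stand-in for any server-side work; the function names are illustrative, not part of any particular product.

```python
# A minimal sketch of the client/server protocol: the client initiates a
# request, the server awaits it, performs a service, and returns a response.
import socket
import socketserver
import threading

class RequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # The server awaits a request, performs a service, returns a result.
        request = self.rfile.readline().strip()
        response = request.upper()  # the "service": upper-case the payload
        self.wfile.write(response + b"\n")

def serve():
    # ThreadingTCPServer lets one server handle multiple concurrent clients.
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), RequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def client_call(port, text):
    # The client initiates the interaction by issuing a request.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(text.encode() + b"\n")
        return sock.makefile().readline().strip()

server = serve()
reply = client_call(server.server_address[1], "hello")
server.shutdown()
server.server_close()
```

The same two-role pattern holds whether the transport is raw sockets, HTTP, or a message queue; only the framing of requests and responses changes.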
Although the technical definition of the client/server protocol is stated in terms of a relationship between software processes, a more popular definition describes the technical architecture that supports the software. Client/server architecture provides an opportunity to distribute an application across two or more computers that can be used most effectively to deliver departmental and enterprise-wide systems solutions.
In a three-tiered architecture (a distributed architecture), an additional layer (the middle layer) is used to accept, process, and mediate requests from the client to the server. The middle layer offloads the processing of rules and decision logic from both the client and the server. This permits the construction of "thin clients" and the removal of processing logic from the server layer; the server layer can then behave as a source of raw information. Processes can be distributed to any one of the layers.
Distributed architecture is based on a network model in which processes can be distributed on any processor and any two individual nodes of the network are in a client/server relationship with any number of intervening middle layers. The heart of distributed architecture is based on the client/server pattern. The additional complexity comes from the design of the components for the middle layer, referred to as "middleware."
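The mediating role of the middle layer can be sketched in-process. The class names and the withdrawal rule below are illustrative assumptions, not features of any particular middleware product; the point is where the decision logic lives.

```python
# A sketch of three-tier mediation: a thin client asks the middle tier,
# which applies business rules against a data tier that holds only raw data.

class DataTier:
    """Server layer: a source of raw information, with no business rules."""
    def __init__(self):
        self._rows = {"A42": {"balance": 250.0}}

    def fetch(self, account_id):
        return self._rows.get(account_id)

class MiddleTier:
    """Middleware: accepts, processes, and mediates client requests,
    keeping rule and decision logic out of both client and server."""
    def __init__(self, data_tier):
        self._data = data_tier

    def can_withdraw(self, account_id, amount):
        row = self._data.fetch(account_id)
        if row is None:
            return False
        # The business rule lives here, not in the thin client or data tier.
        return amount <= row["balance"]

# The thin client handles presentation only; it simply asks the middle tier.
middle = MiddleTier(DataTier())
allowed = middle.can_withdraw("A42", 100.0)
denied = middle.can_withdraw("A42", 1000.0)
```

Because the rule is held in one place, changing it requires redeploying only the middle tier, not every client workstation.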
Distributed software technology has much in common with and is often served by object technology and software components. There are many points of synergy between these technologies, including a focus on real world modeling, as well as a focus on simplicity, reusability, extensibility, and productivity.
Distributed development is based on several key elements:
- Concurrent development of packages and components
- Reuse of software components (either built in-house or purchased)
- Cyclical and incremental development
- Release strategy
Distributed application development takes a concurrent rather than sequential development approach, using iteration to show progress and manage risk. This provides a basis for more rapid development, with smooth transitions from one stage of a project to the next and continuous delivery of staged results that are of value in the total solution. This approach implies iteration with a checkpoint at the conclusion of each stage to validate the quality of the stage results, as well as the scope of the next stage. In addition to concurrency across stages, a release-based strategy keeps the intervals between incremental releases of tangible results short, so the direction of the project can be adjusted dynamically to accommodate critical events and needs. Each of these elements supports the basic philosophy of distributed component development: small teams (ideally 4 to 6 people) delivering results quickly (4 to 6 months).
Distributed application development relies upon three distinct architectures:
- Two tier -- The client process runs on a workstation or personal computer that interacts with a server process which runs on a shared device that is accessed through a network.
- Three tier -- The client process runs on a client workstation that interacts with a server process which runs on a server device. The server device is connected to a host that provides services to the server device.
- N-tier -- The client process runs on any workstation; the server process runs on one or more distributed server devices. The middleware mediates all interactions between the various processes. Components and integration adapters allow access to various information sources.
Distributed implementations are based on multiple levels of complexity, all of which are characterized by the distribution of processing logic. The choice of hardware architecture, development tools, and the development approach is influenced by the distribution strategy that results from the business requirements and system characteristics. Distributed development strategies are defined in terms of distributing presentation, function, and data access logic across the client and the server. The presentation layer handles the user interface and formatting information for display. The business object layer executes the business logic and is based on core business objects. The data access layer handles data-related processing.
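The three logic layers named above might be separated as follows. The order-pricing domain, the discount rule, and all class and method names are illustrative assumptions, chosen only to show each layer's responsibility.

```python
# A sketch of presentation, business object, and data access layers.

class DataAccessLayer:
    """Handles data-related processing; here, an in-memory price lookup."""
    _PRICES = {"widget": 4.0, "gadget": 9.5}

    def unit_price(self, sku):
        return self._PRICES[sku]

class BusinessObjectLayer:
    """Executes the business logic, built on core business objects."""
    def __init__(self, dal):
        self._dal = dal

    def order_total(self, sku, qty):
        total = self._dal.unit_price(sku) * qty
        # A business rule: 10% discount on orders of 10 or more units.
        return total * 0.9 if qty >= 10 else total

class PresentationLayer:
    """Handles the user interface and formatting information for display."""
    def __init__(self, bol):
        self._bol = bol

    def render_total(self, sku, qty):
        return f"Total: ${self._bol.order_total(sku, qty):.2f}"

ui = PresentationLayer(BusinessObjectLayer(DataAccessLayer()))
line = ui.render_total("widget", 10)  # 4.0 * 10 * 0.9 = 36.0
```

In a real distributed deployment each layer could run on a different node; the distribution strategy decides which layers share a machine and which are remote.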
Distributed solutions rely, in part, on other technologies such as a graphical user interface (GUI) or Web-based User Interface. The user interface is the external view of the system and, by definition, is the basis for most users' judgment of the system. The presentation system, the development tools, and the skill of the designer define the functionality of the user interface.
Networking is a principal component of distributed solutions. Whether served by a local area network (LAN), wide area network (WAN), or Web-based global network, distributed architectures generally include a network that links workstations to servers and servers to hosts.
One or more databases will be present in almost all distributed systems. Increasingly, the use of intelligent databases that support some form of processing in the database itself is the norm. This intelligence takes the form of triggers or stored procedures that provide processing logic that is independent of the programs that access it. Data warehouses, data marts and intelligent databases augment those basic mechanisms.
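Database-resident logic of the kind described above can be illustrated with a trigger, using Python's built-in sqlite3 module. The schema and trigger are illustrative assumptions; production systems would use their RDBMS's own trigger or stored-procedure dialect.

```python
# A trigger fires inside the database itself, independent of the programs
# that access the data: every balance update is audited automatically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL);
CREATE TABLE audit_log (account_id TEXT, old_balance REAL, new_balance REAL);

CREATE TRIGGER log_balance_change
AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")
conn.execute("INSERT INTO accounts VALUES ('A1', 100.0)")
conn.execute("UPDATE accounts SET balance = 75.0 WHERE id = 'A1'")
entries = conn.execute("SELECT * FROM audit_log").fetchall()
```

No application program wrote to audit_log directly; the database's own processing logic did, which is exactly what makes such logic independent of the clients that use the data.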
The use of distributed database technology is also a common part of the design. Distributed database technology permits a single logical database to reside on different computers, linked by a network. This enables the database to behave as a single local entity, even though it is distributed, providing for economies in network traffic and improved performance.
Factors that measure the success of the implementation of distributed application technology are many and varied. They include:
- Usability. This includes improving user productivity and satisfaction with the system through easier access to and manipulation of data, consistency in the user interface, and access to data through end user tools.
- Reusability. This measures the capacity to reuse components developed for the application in other distributed applications with a common interface.
- Stability. Distributed technology is becoming more robust but is still relatively immature. Consequently, it is essential that the known weaknesses in this technology be acknowledged, understood, sought out and corrected through the design process to achieve stable applications.
- Consistency. The single greatest factor contributing to user satisfaction is consistency. Users expect that the same types of actions will occur in the same way each time they encounter them. The challenge is to provide a consistent work environment that ensures consistency across systems and tools.
- Quality. Quality takes on special meaning in distributed implementations, due to the complexities in distributing and upgrading applications across a network environment. Quality must be built in by applying a robust methodology and testing for quality at each checkpoint in the process.
Distributed technology is powerful; however, it is not a panacea for all situations. Some application characteristics suggest a distributed application solution, and some suggest that the risk is too high. Some applications play to the strengths of distributed technology; some do not.
Strengths of distributed technology include:
- Graphical presentation to the users
- Network distribution of applications
- Integration of legacy or heritage applications
- Integration of purchased packages and components
- Robust application based on real-world business objects
- Frameworks and patterns availability to reduce development time
- Access to shared business data
- Complex or long update transaction cycles
- Distributed transaction update
- Very large database support on servers