Understanding Performance (Part 3)
If you've read Part 1 and Part 2 of this article, you should understand the definitions of, and the interdependencies between, latency, throughput and resource utilization. But how do you measure these nonfunctional requirements? And why should you bother?
There are dozens of performance tests you can execute to gather performance metrics. Part 3 of this article, however, will focus on two testing strategies: load/stress testing and scalability testing.
The most common performance test executed against distributed systems is one where the workload is increased gradually until it reaches the peak load. This type of test, usually referred to as load testing or stress testing, generally requires the tester to first identify the minimum, or introductory, load. The tester then defines the maximum, or peak, load, which usually meets or exceeds the requirements captured in the SRS (Software Requirements Specification). Once both end points have been identified, the tester selects intermediary workloads that fall between the introductory and peak loads.
Since each workload represents a cycle in your load/stress test, it's important to select only between one and five intermediary workloads. Too few will not allow you to pinpoint bottlenecks; too many will increase the duration of your performance tests for little or no added value.
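The workload selection described above can be sketched as a small helper. This is a minimal illustration, not prescribed by the article: it assumes workloads are expressed as a single number (e.g. concurrent users or requests per second) and spaces the intermediary workloads evenly between the introductory and peak loads.

```python
def workload_levels(min_load, peak_load, intermediaries=3):
    """Return the introductory load, evenly spaced intermediary
    workloads, and the peak load as a list of test cycles."""
    if not 1 <= intermediaries <= 5:
        raise ValueError("use between 1 and 5 intermediary workloads")
    step = (peak_load - min_load) / (intermediaries + 1)
    return [round(min_load + i * step) for i in range(intermediaries + 2)]

# Example: a 100-user introductory load up to a 1,000-user peak
print(workload_levels(100, 1000, 3))  # [100, 325, 550, 775, 1000]
```

Evenly spaced levels are just a starting point; you might instead cluster the intermediary workloads near the region where you expect performance to degrade.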
Since the purpose of load/stress testing is to measure the impact of an increasing workload on your system, it's critical that you do NOT (a) install a new software load, (b) tweak your system's configuration or (c) modify your network topology until you've fully executed each cycle of your load/stress test. Keeping all those factors constant is the only way to conclude a cause-and-effect relationship between a workload increase and a change in the system's performance.
While load testing allows you to identify performance bottlenecks and consequently improve your system's performance, it also lets you identify hard-to-find bugs that only manifest themselves under heavy loads. Once you've completed a first round of load testing (i.e., you've executed your tests at each workload) and created a benchmark, you'll most likely want to fix the more important software defects, fine-tune your system, re-execute your load tests and compare your round 2 results with your benchmark.
After you've gone through two or three rounds of load testing, bug fixing and performance tuning, any additional rounds will result in little or no improvement on your system's performance. It's now time to start experimenting with your system's configuration.
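The round-over-round comparison described above can be sketched as follows. This is a hedged example, assuming you record one summary metric (here, mean latency in milliseconds) per workload per round; the figures shown are hypothetical.

```python
def compare_to_benchmark(benchmark, current_round):
    """Report the percentage change in mean latency at each workload.
    Each argument maps workload -> mean latency in milliseconds."""
    report = {}
    for load, baseline in benchmark.items():
        current = current_round[load]
        report[load] = round(100.0 * (current - baseline) / baseline, 1)
    return report

# Negative percentages indicate latency improved after tuning
benchmark = {100: 120.0, 550: 310.0, 1000: 900.0}
round2    = {100: 115.0, 550: 250.0, 1000: 600.0}
print(compare_to_benchmark(benchmark, round2))
```

The same comparison applies to throughput and utilization; once successive rounds show only marginal deltas, you've hit the point of diminishing returns mentioned above.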
While load/stress testing requires you to record latency, throughput and utilization metrics while increasing the workload, scalability testing requires you to measure the same metrics while keeping the load constant and modifying the configuration of one component at a time.
N-tier architectures usually make use of DBMS systems, LDAP directories, application servers, Web servers and various other components. Consequently, any one of these tiers may become a bottleneck in your overall solution. It's therefore important that you understand these technologies and know how to optimize their configuration to meet your specific needs.
Those of you responsible for configuring such subsystems already understand the performance impact of turning on indexing in an LDAP directory or setting the maximum number of threads in a Web server. Those of you who aren't should not take this tuning activity lightly.
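The one-parameter-at-a-time discipline described above can be expressed as a small test loop. This is a sketch only: `apply_config` and `run_load_test` are hypothetical stand-ins for your actual deployment tooling and load-test harness, not a real API.

```python
def scalability_test(base_config, parameter, candidate_values,
                     apply_config, run_load_test, constant_load=500):
    """Vary ONE configuration parameter at a time under a fixed load,
    recording the metrics returned by the harness for each value."""
    results = {}
    for value in candidate_values:
        # All other settings stay constant; only `parameter` changes
        config = dict(base_config, **{parameter: value})
        apply_config(config)
        results[value] = run_load_test(constant_load)
    return results
```

For example, you might sweep a Web server's maximum thread count over `[50, 100, 200]` while every other setting, the software load and the network topology stay frozen, so any metric change can be attributed to that one parameter.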
Vertical and Horizontal Scaling
Another form of scalability testing involves experimenting with scaling your system vertically or horizontally.
Vertical scaling means adding more power to each server in your N-tier architecture. Certain distributed applications respond very well if you simply upgrade each server's processors, especially when the application is CPU-bound. Other systems react better when you scale them horizontally, i.e. when you add extra servers to your existing clustered or load-balanced environment.
Determine which strategy works best for your specific architecture. This will be required for your capacity planning activity described later in this article.
You've done the load testing, experimented with different system configurations and tried scaling your system both vertically and horizontally. Now what?
As mentioned earlier, performance testing doesn't only let you improve your system's latency, throughput and resource utilization. It also allows you to discover critical defects that do not manifest themselves under an average load. For example, certain components throw exceptions or stop responding altogether when they exhaust their available resources. Identifying and fixing such defects should be your priority, even when executing performance tests.
That being said, your second priority should be increasing your system's performance.
The simplest way to improve the performance of a distributed system is to scale it horizontally or vertically. However, as I'm sure you already realize, this solution is not necessarily economical. It's therefore important that you fine-tune your application before attempting to throw more equipment at the problem.
One way to improve performance is to modify the application's logic. If you carefully investigate the bottlenecks, you could for example notice that your system is generating way too many get-and-set messages, which increases network latency and resource utilization. You could therefore combine some of the set messages, and cache some of the information that you frequently get and seldom change. You may also notice that some of your components are taking too long to respond. You might therefore want to take full advantage of indexes or stored procedures to accelerate queries and gets/sets.
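The caching idea above can be sketched as a thin wrapper around a remote client. This is a minimal illustration under stated assumptions: `remote` is a hypothetical client exposing `get(key)` and `set(key, value)`, and a simple time-to-live governs how long a frequently read, seldom-changed value is served locally instead of generating a network round trip.

```python
import time

class CachingClient:
    """Serve frequently read, rarely changed values from a local
    cache to cut down on get messages over the network."""

    def __init__(self, remote, ttl_seconds=30.0):
        self.remote = remote
        self.ttl = ttl_seconds
        self._cache = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        value, expiry = self._cache.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value                      # cache hit: no network call
        value = self.remote.get(key)          # cache miss: one round trip
        self._cache[key] = (value, time.monotonic() + self.ttl)
        return value

    def set(self, key, value):
        self.remote.set(key, value)           # write through to the remote
        self._cache[key] = (value, time.monotonic() + self.ttl)
```

Note that a TTL cache trades freshness for latency, which is only acceptable for data you "frequently get and seldom change"; combining multiple set messages into one batched call is a complementary technique on the write path.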
When performance tuning your system, keep in mind the law of diminishing returns and the related 80/20 rule (the Pareto principle): roughly 80 percent of the gains tend to come from 20 percent of the tuning effort. Otherwise, you might quickly end up wasting scarce resources.
Finally, once you've fixed all the major defects, fine-tuned your system and optimized its performance, it's time to carry out capacity planning. This activity, which consists of deciding how much computing capacity your customers need given their expected workload, is not actually part of performance testing. However, I find it wraps up performance testing nicely since it ties everything back to your customers. After all, isn't this whole performance testing activity for them?
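As a closing illustration, the capacity planning arithmetic can be reduced to a back-of-the-envelope estimate. This sketch assumes you express both the expected peak workload and per-server capacity in requests per second, with the per-server figure taken from your load-test results; the utilization headroom of 70 percent is an illustrative choice, not a rule from the article.

```python
import math

def servers_needed(expected_peak_rps, measured_rps_per_server,
                   target_utilization=0.7):
    """Estimate how many servers keep the cluster at or below the
    target utilization when serving the expected peak workload."""
    capacity_per_server = measured_rps_per_server * target_utilization
    return math.ceil(expected_peak_rps / capacity_per_server)

# Example: 1,200 req/s expected at peak, 250 req/s measured per
# server, keeping each server at <= 70% utilization
print(servers_needed(1200, 250))  # 7
```

Because the per-server number comes straight from your benchmarks, this is where the load-testing and scaling experiments pay off for your customers.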