
Tuesday, October 6, 2009

End to End Performance Test Approach - Part 2

During test planning, gather the performance test objectives, estimate the resources and project timeline, review the architecture with the individual team members, and determine the types of tests required for the application.

Capture the project-specific information for each project in the test plan as per the following template. This information helps correlate performance test results with system configurations, since results will vary as the configuration changes.


Project Name: XYZ
Application Background: XYZ is an online web-based application used by customers to place orders for various products. The system is currently being upgraded from Oracle 10g to Oracle 11i.
Type of Project: Siebel web application / SAP GUI
Application Technology: .NET, IIS web server, C#, VB
Hardware Platform and OS: App server - Windows XP, 2 GB RAM, 4 CPUs; DB server - Windows XP SP2, 3 GB RAM, 10 CPUs
Database: Oracle
Third-party tools:


Create a workload model that covers the list of scenarios identified for performance testing, along with the SLAs and the user load. The number of transactions and the number of concurrent users are derived from the volumetric analysis; a simple pacing calculation based on this sample workload is sketched after the table below.


S. No. | Transaction/Script | Online/Batch | No. of Concurrent Users | Response Time | No. of Txns.
1 | Scenario 1 | O | 9 | < 10 secs | 12
2 | Scenario 2 | O | 4 | < 1 sec | 12
3 | Scenario 3 | O | 15 | < 2 secs | 143
4 | Scenario 4 | O | 4 | < 13 secs | 20
5 | Scenario 5 | O | 2 | < 4 secs | 20
6 | Scenario 6 | B | 3 | < 5 secs | 3
7 | Scenario 7 | B/O | 1 | < 2 secs | 4
8 | Scenario 8 | O | 1 | < 4 secs | 5
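Per-user pacing for each script can be derived directly from this table. The sketch below is a minimal illustration, assuming a one-hour steady-state window; the scenario names and figures simply reuse the sample table and are not from any real project:

```python
# Minimal sketch: derive per-user pacing from the workload model.
# All figures are illustrative assumptions taken from the sample table above.
TEST_DURATION_SECS = 3600  # assumed 1-hour steady-state window

workload = [
    # (script, concurrent_users, transactions_per_window)
    ("Scenario 1", 9, 12),
    ("Scenario 3", 15, 143),
    ("Scenario 6", 3, 3),
]

for script, users, txns in workload:
    # Each user must complete txns/users iterations in the window,
    # so one iteration (including think time) gets this many seconds.
    pacing = users * TEST_DURATION_SECS / txns
    print(f"{script}: start a new iteration every {pacing:.0f} secs per user")
```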


Identify the different types of tests required for testing the application based on the requirement analysis.
In the load test, measure server response times to verify whether the application can sustain the expected maximum number of concurrent users and the expected maximum database size.
In the stress test, measure server response times at varying loads, starting from low load (a small number of concurrent users), through medium load (the average number of concurrent users), to high load (the expected maximum number of concurrent users and beyond, until response times become unacceptable), to validate the application's stability.
In the endurance test, run the application for longer durations at half the average system load to detect possible memory leaks in the system.
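One way to capture these three test types in a single place is a small load-profile structure. This is only a sketch; the user counts, step sizes, and durations below are assumptions for illustration, not requirements from the workload model:

```python
# Minimal sketch of load profiles for the three test types described above.
# User counts and durations are illustrative assumptions only.
profiles = {
    # Load test: hold the expected maximum concurrent users.
    "load":      [{"users": 39, "hold_mins": 60}],
    # Stress test: step from low through medium to high load and beyond.
    "stress":    [{"users": 10, "hold_mins": 15},
                  {"users": 25, "hold_mins": 15},
                  {"users": 39, "hold_mins": 15},
                  {"users": 60, "hold_mins": 15}],  # past the expected maximum
    # Endurance test: half the average load, held for a long duration.
    "endurance": [{"users": 20, "hold_mins": 480}],
}

for test_type, stages in profiles.items():
    total = sum(s["hold_mins"] for s in stages)
    print(f"{test_type}: {len(stages)} stage(s), {total} minutes total")
```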
A detailed test plan can be laid out using the information captured during the requirements-gathering phase; share it with the development/business team and take their inputs for final approval. The test plan should include the following (but is not limited to):
  • Scope
  • Test Approach
  • Test Objectives
  • Test Environment setup and requirements
  • Types of tests
  • Transaction mix
  • Workload Scenario
  • Identify Monitors
  • Scheduling (testing sequence, test cycles)
  • Data setup (data required by the test tool, not the application data)


Test Design & Execution
During the test design phase, validate the existing scripts and develop new ones based on the workload model. Validate and update the data required for the test environment identified during test planning, and analyze script failures with the intent of finding their root cause so that scripts can be debugged effectively. Any application-related errors found during script validation should be collected and shared with the development team.
During execution, first validate that the scripts point to the correct environment and that the performance metrics to be captured are properly configured. Also verify that the load generator machines are up and working.
Each script should be run individually several times to validate that the script has been developed correctly. These tests may reveal performance problems that will need to be addressed.
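A lightweight way to do these repeated single-script runs is to wrap the script invocation in a loop and compare the per-run response times. In this sketch, run_script and the five-run count are assumed placeholders, not part of any real tool's API:

```python
import statistics

def validate_script(run_script, runs=5):
    """Run one script several times and summarize its response times.

    run_script is an assumed callable (placeholder) that executes the
    script once and returns a list of response times in seconds."""
    per_run_means = []
    for i in range(runs):
        times = run_script()
        per_run_means.append(statistics.mean(times))
        print(f"run {i + 1}: mean {per_run_means[-1]:.2f}s, max {max(times):.2f}s")
    # A large spread between runs usually points to a scripting or data problem.
    print(f"spread across runs: {max(per_run_means) - min(per_run_means):.2f}s")
```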
A mixed load test can then be carried out across the identified scenarios, covering all transactions, online and batch, according to the workload mix discussed earlier. The load tests have to be run multiple times to ensure that the testing process is repeatable, and all performance metrics should be configured in the load testing tool before the test starts.
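For the mixed load test, each virtual user can pick its next transaction in proportion to the transaction counts in the workload table. This is only an illustrative sketch; the weights simply reuse the sample figures above:

```python
import random

# Transaction mix taken from the sample workload table above (illustrative only).
mix = {"Scenario 1": 12, "Scenario 2": 12, "Scenario 3": 143, "Scenario 4": 20,
       "Scenario 5": 20, "Scenario 6": 3, "Scenario 7": 4, "Scenario 8": 5}

scenarios = list(mix)
weights = list(mix.values())

def next_transaction():
    # Weighted random choice keeps the executed mix close to the planned mix.
    return random.choices(scenarios, weights=weights, k=1)[0]

# Quick check that the generated mix roughly matches the plan.
sample = [next_transaction() for _ in range(10_000)]
print({s: sample.count(s) for s in scenarios})
```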

Result Analysis and Reporting
Focus on analysis, monitoring, identifying bottlenecks, and providing recommendations, thus delivering an end-to-end performance solution for the complete application.
Send test reports for the various test runs with conclusions based on the results and the consolidated data that supports those conclusions, along with the analysis, comparisons, and details of how the results were obtained.
At the end of each run of the Performance Test, a report should be produced. The test report should have comprehensive data collected from various sources presented in a single document.
For each test case, report the following response-time statistics: arithmetic mean, standard deviation, 90th percentile, and other percentiles as necessary. In addition, for each test case report the total number of transactions executed, the time period over which they were executed, the number of errors, and the number of retries.
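A small sketch of how these statistics can be pulled out of a list of raw response times; the sample data is fabricated purely for illustration:

```python
import statistics

def summarize(response_times):
    """Report the response-time statistics listed above for one test case."""
    ordered = sorted(response_times)
    p90_index = int(0.9 * (len(ordered) - 1))  # simple nearest-rank style 90th percentile
    return {
        "count": len(ordered),
        "mean": statistics.mean(ordered),
        "std_dev": statistics.stdev(ordered),
        "p90": ordered[p90_index],
    }

# Illustrative data only.
print(summarize([0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 0.7, 1.9, 1.2, 3.1]))
```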
Collect a comprehensive set of system data and tabulate it in the test report for each run. The data collected should include CPU utilization, memory utilization (system-wide and per process), and DB statistics.
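System-side counters can be sampled alongside the test run. The sketch below assumes the third-party psutil package is available and covers only system-wide CPU and memory; per-process figures and DB statistics would come from other sources such as the database's own monitoring views:

```python
import psutil  # assumed available; third-party package

def sample_system(duration_secs=60, interval_secs=5):
    """Collect system-wide CPU and memory utilization at a fixed interval."""
    samples = []
    for _ in range(duration_secs // interval_secs):
        samples.append({
            "cpu_percent": psutil.cpu_percent(interval=interval_secs),
            "mem_percent": psutil.virtual_memory().percent,
        })
    return samples

# The returned samples can be tabulated into the test report for each run.
```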
