Performance Testing Overview

Performance Testing Process

The methodology adopted for performance testing can vary widely, but the objective of performance tests remains the same. It can help demonstrate that your software system meets certain pre-defined performance criteria. Or it can help compare the performance of two software systems. It can also help identify parts of your software system which degrade its performance.

Below is a generic process for performing performance testing:

  1. Identify your testing environment – Know your physical test environment, production environment and what testing tools are available. Understand the details of the hardware, software and network configurations used during testing before you begin the testing process. This helps testers create more efficient tests and identify possible challenges they may encounter during the performance testing procedure.
  2. Identify the performance acceptance criteria – This includes goals and constraints for throughput, response times and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals, because project specifications often do not include a wide enough variety of performance benchmarks, and sometimes include none at all. When possible, comparing against a similar application is a good way to set performance goals.
  3. Plan & design performance tests – Determine how usage is likely to vary amongst end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data and outline what metrics will be gathered.
  4. Configure the test environment – Prepare the testing environment before execution, and arrange the tools and other resources needed.
  5. Implement test design – Create the performance tests according to your test design (a minimal driver sketch follows this list).
  6. Run the tests – Execute and monitor the tests.
  7. Analyze, tune and retest – Consolidate, analyze and share test results. Then fine-tune and test again to see whether performance has improved or degraded. Since improvements generally grow smaller with each retest, stop when bottlenecking is caused by the CPU. At that point you may consider the option of increasing CPU power.
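
As a rough illustration of steps 5 and 6, here is a minimal load-test driver in Python. It is a sketch only: the target URL and scenario mix are hypothetical, it assumes the third-party `requests` library is installed, and a real project would typically use a dedicated load-testing tool instead.

```python
# Minimal load-test driver sketch (illustrative only). The base URL and
# scenario weights are hypothetical placeholders.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://example.com"          # hypothetical application under test
# Key scenarios and how often each is exercised (step 3 of the process).
SCENARIOS = [("/", 0.6), ("/search?q=test", 0.3), ("/checkout", 0.1)]
VIRTUAL_USERS = 25
REQUESTS_PER_USER = 20

def pick_path():
    """Choose a scenario path according to its usage weight."""
    paths, weights = zip(*SCENARIOS)
    return random.choices(paths, weights=weights)[0]

def user_session(_):
    """Simulate one virtual user issuing a series of timed requests."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(BASE_URL + pick_path(), timeout=10)
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        sessions = list(pool.map(user_session, range(VIRTUAL_USERS)))
    timings = [t for s in sessions for t in s]
    print(f"requests sent: {len(timings)}")
    print(f"mean response time: {statistics.mean(timings):.3f}s")
    print(f"max  response time: {max(timings):.3f}s")
```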

Performance Testing Metrics: Parameters Monitored

The basic parameters monitored during performance testing include the following (a short monitoring sketch appears after the list):

  • Processor usage – the amount of time the processor spends executing non-idle threads.
  • Memory use – the amount of physical memory available to processes on a computer.
  • Disk time – the amount of time the disk is busy executing a read or write request.
  • Bandwidth – the bits per second used by a network interface.
  • Private bytes – the number of bytes a process has allocated that can't be shared with other processes. These are used to measure memory leaks and usage.
  • Committed memory – the amount of virtual memory used.
  • Memory pages/second – the number of pages written to or read from disk in order to resolve hard page faults. Hard page faults occur when code not in the current working set must be retrieved from disk.
  • Page faults/second – the overall rate at which the processor handles page faults. This again occurs when a process requires code from outside its working set.
  • CPU interrupts per second – the average number of hardware interrupts the processor receives and processes each second.
  • Disk queue length – the average number of read and write requests queued for the selected disk during a sample interval.
  • Network output queue length – the length of the output packet queue, in packets. Anything more than two indicates a delay, and the bottleneck should be found and eliminated.
  • Network bytes total per second – the rate at which bytes are sent and received on the interface, including framing characters.
  • Response time – the time from when a user submits a request until the first character of the response is received.
  • Throughput – the rate at which a computer or network receives requests per second.
  • Amount of connection pooling – the number of user requests that are met by pooled connections. The more requests met by connections in the pool, the better the performance will be.
  • Maximum active sessions – the maximum number of sessions that can be active at once.
  • Hit ratios – the proportion of SQL statements that are handled by cached data instead of expensive I/O operations. This is a good place to start when resolving bottlenecking issues.
  • Hits per second – the number of hits on a web server during each second of a load test.
  • Rollback segment – the amount of data that can be rolled back at any point in time.
  • Database locks – the locking of tables and databases needs to be monitored and carefully tuned.
  • Top waits – monitored to determine which wait times can be cut down when tuning how quickly data is retrieved from memory.
  • Thread counts – an application's health can be measured by the number of threads that are running and currently active.
  • Garbage collection – the process of returning unused memory to the system; it needs to be monitored for efficiency.
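
Many of the OS-level counters above (processor usage, memory use, disk activity, network bytes total) can be sampled programmatically during a test run. Below is a minimal sketch in Python, assuming the third-party `psutil` library is installed; dedicated monitors such as Windows Performance Monitor expose the same counters natively.

```python
# Minimal resource-monitoring sketch using the third-party `psutil`
# library (pip install psutil). Samples a few of the counters listed
# above once per second during a test run.
import time

import psutil

def sample():
    cpu = psutil.cpu_percent(interval=None)   # processor usage (%)
    mem = psutil.virtual_memory().percent     # physical memory in use (%)
    disk = psutil.disk_io_counters()          # cumulative disk reads/writes
    net = psutil.net_io_counters()            # cumulative network traffic
    return (cpu, mem,
            disk.read_bytes + disk.write_bytes,
            net.bytes_sent + net.bytes_recv)

if __name__ == "__main__":
    for _ in range(10):                       # ten one-second samples
        cpu, mem, disk_bytes, net_bytes = sample()
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  "
              f"disk={disk_bytes}B  net={net_bytes}B")
        time.sleep(1)
```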

Example Performance Test Cases

  • Verify that the response time is no more than 4 seconds when 1,000 users access the website simultaneously.
  • Verify that the response time of the application under load is within an acceptable range when network connectivity is slow.
  • Check the maximum number of users that the application can handle before it crashes.
  • Check database execution time when 500 records are read/written simultaneously.
  • Check CPU and memory usage of the application and the database server under peak load conditions.
  • Verify the response time of the application under low, normal, moderate and heavy load conditions.

During the actual performance test execution, vague terms like acceptable range, heavy load, etc. are replaced by concrete numbers. Performance engineers set these numbers according to the business requirements and the technical landscape of the application.
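
For instance, the first test case above could be expressed as a concrete pass/fail check. The sketch below is illustrative only: the endpoint is hypothetical, the user count is scaled down, and measuring the 95th percentile (rather than the maximum) is an assumption, not something the test case specifies.

```python
# Illustrative pass/fail check for "response time <= 4 s under
# concurrent users". Endpoint and percentile choice are assumptions.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "http://example.com/"   # hypothetical endpoint
USERS = 100                          # scaled down from 1,000 for a sketch
MAX_RESPONSE_SECONDS = 4.0           # threshold from the test case above

def timed_get(_):
    start = time.perf_counter()
    requests.get(TARGET_URL, timeout=30)
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = list(pool.map(timed_get, range(USERS)))
    p95 = statistics.quantiles(timings, n=100)[94]   # 95th percentile
    print(f"p95 response time: {p95:.3f}s")
    assert p95 <= MAX_RESPONSE_SECONDS, "response-time criterion not met"
```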
