Before running a performance test, you need to model your production workload accurately (a practice known as workload modeling, or WLM), set up the test environment and equipment, and establish a benchmark baseline for your tests.
An inaccurate workload model can lead to misguided optimization efforts in your production system, delayed deployment, outright failures, and an inability to meet the system's service-level agreements (SLAs). Having the right workload model is crucial for the reliable deployment of any system intended to support a large number of users in a production environment.
To achieve a viable WLM you need to:
- Proactively monitor user and system activities and performance in your production environment.
- Identify symptoms of failure, including response times that exceed what a reasonable SLA for your system dictates, application errors and unhandled exceptions, and system crashes.
- Create an accurate simulation of all use cases for your system.
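As a concrete sketch of the monitoring step, the snippet below flags log records that show failure symptoms: responses slower than an assumed 3-second SLA, or server errors. The record format, threshold, and sample data are illustrative assumptions, not details from this article.

```python
# Hypothetical sketch: scan access-log records for failure symptoms.
# The SLA threshold and record layout are assumptions for illustration.
SLA_RESPONSE_TIME = 3.0  # seconds, assumed SLA

records = [
    {"path": "/search",   "time": 1.2, "status": 200},
    {"path": "/checkout", "time": 4.8, "status": 200},  # too slow
    {"path": "/login",    "time": 0.9, "status": 500},  # server error
]

def find_symptoms(records, sla=SLA_RESPONSE_TIME):
    """Return records showing failure symptoms: slow responses or errors."""
    return [r for r in records if r["time"] > sla or r["status"] >= 500]

for r in find_symptoms(records):
    print(r["path"], r["time"], r["status"])
```

In practice the records would come from production access logs or an APM tool rather than an in-memory list.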
Importance of WLM (Workload Modeling)
Your performance test results will be more accurate if you properly simulate your production system's parameters in your test. In the planning phase, the performance analyst makes sure that information about all the parameters of the application under test (AUT) has been acquired, so they can be simulated accurately in the performance test. Identifying the AUT's workload model is one of the most important parts of this planning activity. The workload model specifies which user actions will be tested under load, what the business scenarios will be for all the users, and how users will be distributed across each scenario. This information helps performance testing teams in many ways, such as:
- Performance Scenarios Identification: The fundamental purpose of the workload model is to understand the application and identify its performance scenarios.
- Performance Test SLAs: Performance testing teams translate the AUT's non-functional requirements into performance test SLAs through the workload model.
- Makes Communication Easier: The workload model makes it easy for performance testing teams to communicate the AUT's performance scenarios, and the distribution of users across them, to all application stakeholders.
- Test Data Preparation: The workload model helps identify the type and amount of test data, which is always required before work in the load testing tool begins.
- Required Number of Load Injectors: You need adequate infrastructure to conduct performance testing successfully; testing an application with inadequate infrastructure produces incorrect results. User load is normally simulated from multiple machines (load injectors) for accurate testing, and the number required is also identified from the workload model.
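To make the user-distribution and load-injector points concrete, here is a minimal Python sketch that turns a hypothetical workload mix into per-scenario virtual-user counts and an injector count. The percentages and per-injector capacity are assumptions for illustration, not figures from this article.

```python
import math

# Assumed figures for the sketch: total concurrent users, the fraction
# of users running each scenario, and one injector machine's capacity.
TOTAL_USERS = 1000
USERS_PER_INJECTOR = 250  # assumed capacity of one load-injector machine

workload_mix = {
    "browse_catalog": 0.40,
    "search_product": 0.25,
    "login":          0.15,
    "create_account": 0.10,
    "place_order":    0.10,
}

# Sanity check: the mix must account for all users.
assert abs(sum(workload_mix.values()) - 1.0) < 1e-9

users_per_scenario = {s: round(TOTAL_USERS * p) for s, p in workload_mix.items()}
injectors_needed = math.ceil(TOTAL_USERS / USERS_PER_INJECTOR)

print(users_per_scenario)
print("load injectors:", injectors_needed)
```

The same arithmetic is what a load testing tool's thread-group or user-class configuration ultimately encodes.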
Activities involved in WLM (Workload Modeling)
Performance testing is a complex activity consisting of various phases, and each phase has several activities of its own. Workload modeling is one of the most important parts of performance testing, and it is not simple by any means. Some of the activities necessary for identifying the performance test workload model are listed below:
1. Test Objectives Identification
In performance testing, as in any activity, the effort you put in should align with your objectives, and identifying your test objectives means determining concretely what the system must deliver and what you need to verify under load. So before formally starting work on any application's performance testing, the first step is to identify its test objectives in detail. For an e-commerce web application, the following are some examples of performance test objectives:
- Response Time: A product search should not take more than 3 seconds.
- Throughput: The application server should have the capacity to handle 500 transactions per second.
- Resource Utilization: All resources, such as processor and memory utilization, network I/O, and disk I/O, should stay below 70% of their maximum capacity.
- Maximum User Load: The system should be able to handle 1,000 concurrent users while meeting all of the above objectives.
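Objectives like these can be checked mechanically against measured results. The sketch below compares a measured 95th-percentile response time, throughput, and peak resource utilization against the stated targets; the sample measurements are fabricated for illustration.

```python
import math

# Targets taken from the objectives above.
SLAS = {"p95_response_s": 3.0, "throughput_tps": 500, "max_util_pct": 70}

# Fabricated sample of measured response times (seconds).
response_times = [0.8, 1.0, 1.1, 1.2, 1.4, 1.7, 2.2, 2.5, 2.9, 3.4]

def p95(samples):
    """Nearest-rank 95th percentile."""
    ordered = sorted(samples)
    idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[idx]

measured = {
    "p95_response_s": p95(response_times),
    "throughput_tps": 520,   # assumed measurement
    "max_util_pct": 65,      # assumed measurement
}

report = {
    "p95_response_s": measured["p95_response_s"] <= SLAS["p95_response_s"],
    "throughput_tps": measured["throughput_tps"] >= SLAS["throughput_tps"],
    "max_util_pct":   measured["max_util_pct"] < SLAS["max_util_pct"],
}
print(report)  # the response-time objective fails for this sample
```

Note the use of a percentile rather than an average: averages hide exactly the slow outliers an SLA is meant to catch.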
2. Application Understanding
A complete understanding of the AUT and all its features is the basic step in any testing activity. You can't thoroughly test an application unless you understand it completely, and performance testing is no exception. Performance testing starts with planning, and planning starts with application understanding. You explore the application from a performance perspective and try to answer the following questions:
- How many types of users are using this application?
- What are the business scenarios of every user?
- What are the AUT's current and predicted peak user loads for each user action over time?
- How is the user load expected to grow over time?
- How quickly will a specific user action reach its peak load?
- How long will the peak load be sustained?
- What is the application's architecture?
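The load-growth question can be answered with a simple compound-growth projection. A minimal sketch, assuming a current peak of 1,000 users and a hypothetical 10% month-over-month growth rate:

```python
# Compound-growth projection of peak user load. The growth rate and
# horizon are assumptions for illustration, not figures from the article.
current_peak = 1000     # current peak concurrent users
monthly_growth = 0.10   # assumed 10% growth per month
months = 12             # planning horizon

projected_peak = current_peak * (1 + monthly_growth) ** months
print(round(projected_peak))  # ≈ 3138 users after one year
```

A projection like this feeds directly back into the maximum-user-load objective: the test should target the predicted peak, not just today's.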
3. Key Scenarios Identification
It is neither common practice nor necessary to simulate all user actions in performance tests, given budget and time constraints. Performance testing teams always select a limited number of user actions that have the greatest performance impact on the application. The following are examples of scenario types that should be selected when conducting performance tests:
- Most Frequently Accessed Scenarios: Scenarios that users access most often when they browse through the application.
- Business Critical Scenarios: The application's core scenarios, which contain its business transactions.
- Resource Intensive Scenarios: User scenarios that consume more resources than typical scenarios.
- Time-Dependent Frequently Accessed Scenarios: Scenarios that are accessed only on specific occasions, but very frequently when they are.
- Stakeholder-Concern Scenarios: Application features the stakeholders are most concerned about, such as the AUT's newly integrated modules.
Some of the most desirable performance testing scenarios for an e-commerce application include:
- Browsing product catalog
- Creating a user account
- Searching for a product
- Logging in to the application
- Placing an order
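In a load testing tool such as JMeter or Locust, these scenarios would be scripted as weighted tasks so that virtual users exercise them in realistic proportions. The pure-Python sketch below imitates that weighting: each virtual-user iteration picks one of the five scenarios in proportion to an assumed weight. The weights and scenario bodies are placeholders, not figures from this article.

```python
import random

# Placeholder scenario bodies; in a real test each would script the
# corresponding user journey against the application.
def browse_catalog():  return "browse_catalog"
def search_product():  return "search_product"
def login():           return "login"
def create_account():  return "create_account"
def place_order():     return "place_order"

TASKS = [  # (scenario, weight) - weights are illustrative assumptions
    (browse_catalog, 40),
    (search_product, 25),
    (login,          15),
    (create_account, 10),
    (place_order,    10),
]

def run_iterations(n, seed=1):
    """Simulate n virtual-user iterations, weighted by scenario."""
    rng = random.Random(seed)
    funcs, weights = zip(*TASKS)
    counts = {f.__name__: 0 for f in funcs}
    for task in rng.choices(funcs, weights=weights, k=n):
        counts[task()] += 1
    return counts

print(run_iterations(1000))
```

Fixing the random seed makes a run reproducible, which is useful when comparing test iterations against a baseline.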
4. Determining Navigation Paths of Key Scenarios
Once you have identified all the AUT scenarios that should be included in a performance test, the next step is to figure out all the possible paths a user can take to complete each scenario successfully. An application's users will most likely have different levels of domain and technical expertise, so it is quite natural that they will follow different steps to complete a given scenario. Performance testing teams identify all the paths users could follow to complete each identified scenario, and also determine the frequency of each path to decide whether it should be included in the performance test. The application's response for the same scenario can vary greatly depending on the user's navigation path, so it is strongly advised to test all major paths of the selected scenarios under load. The following are a few guidelines that can help identify a scenario's navigation paths:
Once you have identified all the AUT scenarios which should be included in a performance test, next step is to figure out each scenario’s all possible paths which a user can opt to successfully complete it. Any application users most probably have different level of domain and technical expertise and it’s quite obvious that they will follow different steps to complete a specific scenario(s). Performance testing teams identify all possible paths which users could follow to successfully complete the identified scenario and also figure out the frequency of each path to decide whether it should be included in performance test or not? Application response for the same scenario can greatly vary depending upon user navigation path and it’s strongly advised to test the selected scenario’s all major paths under load. Following are the few guidelines which could be followed to identify any scenario’s navigation paths: