Monday, 09 March 2020
Student Record Systems (SRS) are the cornerstone of any Higher Education establishment’s IT infrastructure. With many more HE institutions offering their students increased online access to these systems for selecting course modules, viewing their personal information and obtaining their results, the user experience is more important than ever.
We've worked with many UK Universities to assist with specialist Performance Testing of their applications, including Online Registration, Module Registration and a simulation of results day, often before the launch of a new service.
This guide is based on our experience of testing SITS:Vision, an off-the-shelf package developed by Tribal Group and used by many UK Universities.
Here we focus on a test for Module Registration and outline the process we went through, from initial planning to delivery, and the methods we used to simulate expected usage on the application.
Step 1 – Planning
The first step is always planning. Our team worked with the project team to define the scenarios to test, as well as the expected numbers of concurrent users. For the Module Registration process, this involved identifying three typical combinations of courses and associated modules, making up diets – representing low, medium and high complexity.
Step 2 – Creating Scripts with Apache JMeter
Once agreed, we created three different scripts, one for each diet. Each needed test data; this was prepared by the university in the test lab after data obfuscation had been run, in order to protect sensitive personally identifiable information.
The following steps were scripted, using Apache JMeter:
- Navigate to Module Registration from the Homepage
- Select the Modules of choice for the course selected
- Confirm Module Selection
- Submit Choices
Step 3 – Parameterisation, Correlation, Assertion & Timers
Once the initial scripts had been recorded, the following activities were undertaken to enable the scripts to behave like real users:
- Parameterisation – replacing hard-coded values with variables so that each run can select different data items from a prepared data file.
- Correlation – identifying values generated dynamically by the server, such as session IDs, that must be captured and re-inserted into subsequent requests for the scripts to run successfully.
- Assertions – defining the success criteria for each request by matching returned values against expected values, to confirm the scripts are doing what is intended.
- Timers – inserting variable think times to more accurately simulate real user activity.
Replacing hard-coded, user-defined values with variables is known as parameterisation. Because each registration used a different student record from SITS, we parameterised all of the data that a student would enter in real time.
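As a sketch of what parameterisation achieves (the file name, column names and student IDs here are hypothetical, not the university's actual data), JMeter's CSV Data Set Config feeds each virtual user the next row of a data file, wrapping around when the data runs out:

```python
import csv
import io
from itertools import cycle

# Hypothetical sample of the obfuscated student data file. In JMeter this
# would be a CSV Data Set Config element pointing at a file on disk.
STUDENT_CSV = """student_id,course_code,diet
S1001,CS101,low
S1002,CS201,medium
S1003,CS301,high
"""

def student_records(csv_text):
    """Yield one student record per virtual-user iteration, wrapping
    around at end of file (JMeter's 'Recycle on EOF' behaviour)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return cycle(rows)

records = student_records(STUDENT_CSV)
for _ in range(4):  # four virtual-user iterations
    row = next(records)
    print(row["student_id"], row["diet"])
```

With three rows and four iterations, the fourth iteration re-uses the first record, which is exactly how JMeter keeps long-running tests supplied with data.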
Correlation is one of the most important concepts in performance scripting. It deals with values generated dynamically by the server, such as session IDs and request verification tokens, which were among the main values we correlated.
As well as these, we also correlated the Module Code, allowing each virtual user to submit a randomly selected module rather than always using the same value.
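The idea can be sketched as follows. The HTML fragment, field names and module codes below are invented for illustration (real SITS:Vision pages differ); in JMeter this is typically done with a Regular Expression Extractor attached to the previous response:

```python
import random
import re

# Hypothetical fragment of a server response page.
RESPONSE = """
<input type="hidden" name="SESSION_ID" value="ABC123XYZ">
<option value="MOD-CS1-001">Programming 1</option>
<option value="MOD-CS1-002">Databases</option>
<option value="MOD-CS1-003">Networks</option>
"""

# Equivalent of a Regular Expression Extractor: capture the dynamic
# session ID so it can be re-submitted with later requests.
session_id = re.search(r'name="SESSION_ID" value="([^"]+)"', RESPONSE).group(1)

# Capture every module code, then pick one at random so a virtual user
# does not always register the same module.
modules = re.findall(r'<option value="(MOD-[^"]+)"', RESPONSE)
chosen = random.choice(modules)

print(session_id)         # ABC123XYZ
print(chosen in modules)  # True
```

Without the first extraction the script would replay a stale session ID and fail; without the second, every virtual user would contend for the same module.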
JMeter includes a valuable feature called assertions. An assertion marks a request as passed or failed by matching the returned values against the expected values, confirming the script is doing what is intended.
For example, we checked for the text “Thank you for registering your modules”. When the scripts executed, a request was marked as ‘Pass’ only if that text was included in the response; otherwise the request failed, meaning the user had received an unexpected page that needed to be debugged based on the server response.
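The check itself amounts to a substring match on the response body, as in this minimal sketch (the helper function is hypothetical; in JMeter it is a Response Assertion element, and the expected text is the one quoted above):

```python
def response_assertion(body, expected="Thank you for registering your modules"):
    """Mimic a JMeter Response Assertion: the sample passes only if the
    expected text appears somewhere in the response body."""
    return "Pass" if expected in body else "Fail"

print(response_assertion("<h1>Thank you for registering your modules</h1>"))  # Pass
print(response_assertion("<h1>An unexpected error occurred</h1>"))            # Fail
```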
Timers allow the simulation of realistic behaviour by delaying requests to the server, since once the application is live its real users will not all perform actions at the same moment or at the same pace.
The Uniform Random Timer delays each request by a random interval within a given range. For example, before clicking Confirm Modules, the constant delay offset was set to 10,000 ms and the maximum random delay to 5,000 ms, meaning requests were delayed randomly by between 10 and 15 seconds.
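The timer's arithmetic is simple: total delay = constant offset + a uniformly distributed random component. A sketch with the figures quoted above (the function name is ours, not JMeter's):

```python
import random

def uniform_random_delay_ms(constant_offset_ms=10000, max_random_ms=5000):
    """JMeter Uniform Random Timer semantics: total delay is the constant
    offset plus a uniform random value in [0, max_random_ms]."""
    return constant_offset_ms + random.uniform(0, max_random_ms)

delay = uniform_random_delay_ms()
print(10000 <= delay <= 15000)  # True: always between 10 and 15 seconds
```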
Step 4 – Test Execution
Once the scripts were completed, test scenarios were designed to execute them with the required numbers of users. In this test we ran several simulations – a Debug Test, Normal Load Test, Peak Load Test, Stress Test and Soak Test – each with a different objective:
- Debug Test: to make sure that all systems and data are set up and working correctly including monitoring
- Normal Load Test: simulates the system working on any typical day/hour. It acts as a benchmark for response times when the system is operating under typical conditions
- Peak Load Test: to test the peak hour usage of the application
- Stress Test: to identify the limits of the system under test
- Soak Test: to see how the system copes over time, particularly in managing resources at normal operational volumes with spikes in use up to peak load levels
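In JMeter, each scenario maps onto a Thread Group, where the number of users and a ramp-up period control how load is applied: threads start evenly spread across the ramp-up window. A sketch of that scheduling (the figures are illustrative, not the scenario sizes we actually used):

```python
def thread_start_times(num_threads, ramp_up_s):
    """JMeter Thread Group semantics: with N threads and a ramp-up of R
    seconds, a new thread starts every R/N seconds."""
    interval = ramp_up_s / num_threads
    return [round(i * interval, 2) for i in range(num_threads)]

# Hypothetical example: 5 virtual users ramped up over 10 seconds.
print(thread_start_times(5, 10))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

A gentle ramp-up suits the Normal and Soak Tests; shortening it (or raising the thread count) pushes the system towards the Peak and Stress scenarios.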
We also normally recommend that some manual exploratory testing is conducted at a given point in the test. This focuses on features that were not scripted, particularly higher-risk areas such as running reports or other queries that may be database intensive.
Step 5 – Listeners
Listeners display the results of test execution in different formats, such as tree, table, graph or log file; both built-in and plug-in listeners are available in JMeter. For example, we used an Aggregate Report to show the response times and the error percentage from the test data. We found that the average response time of the first request was over 30 seconds, which was highlighted to the project team to confirm the SLA for that particular transaction.
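An aggregate report boils each labelled transaction down to summary statistics. A cut-down sketch of the calculation (the sample data below is invented for illustration, not the actual test results):

```python
import statistics

# Hypothetical sample results: (transaction label, elapsed_ms, success flag).
samples = [
    ("Homepage", 32000, True),
    ("Homepage", 31000, True),
    ("Select modules", 850, True),
    ("Select modules", 900, False),
    ("Submit choices", 1200, True),
]

def aggregate(samples):
    """Cut-down Aggregate Report: average response time and error %
    per transaction label."""
    report = {}
    for label in sorted({label for label, _, _ in samples}):
        times = [t for l, t, _ in samples if l == label]
        errors = [ok for l, _, ok in samples if l == label].count(False)
        report[label] = {
            "average_ms": statistics.mean(times),
            "error_pct": 100.0 * errors / len(times),
        }
    return report

for label, stats in aggregate(samples).items():
    print(label, stats["average_ms"], stats["error_pct"])
```

In the invented data above, the slow “Homepage” transaction averages over 30 seconds, which is the kind of figure that prompted the SLA discussion with the project team.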
Step 6 – Results
Once the tests were run, data analysis was carried out on the results. The images below show how the data is displayed in JMeter after test execution. The ‘Active threads over time’ graph shows the user load during the test for each of the scripts.
The ‘Response times’ graph shows how these varied over the test duration.
Based on the results:
- We identified specific transactions where response times were spiking under load, giving areas of the system for possible improvement
- We correlated response times against server and network resources to identify bottlenecks
- Several changes were made to the infrastructure, with tests re-run after each change. The modifications included hardware configuration and database indexes
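Spotting the spiking transactions amounts to comparing each transaction's worst observed times against an agreed threshold. A minimal sketch, with invented response times and an assumed 3-second SLA (the real SLAs were agreed with the project team):

```python
# Hypothetical response-time samples per transaction, in milliseconds.
results = {
    "Navigate to Module Registration": [400, 450, 5200, 6100, 480],
    "Confirm Module Selection": [300, 320, 310, 340, 330],
}
SLA_MS = 3000  # assumed 3-second service-level threshold

def spiking_transactions(results, sla_ms):
    """Flag transactions whose worst observed response time breaches the SLA."""
    return [name for name, times in results.items() if max(times) > sla_ms]

print(spiking_transactions(results, SLA_MS))
# ['Navigate to Module Registration']
```

Re-running this check after each infrastructure change gives a quick pass/fail view of whether a tuning step actually helped.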
Once the tuning had been completed, a final test proved the level of concurrency the system would be able to support, as well as the anticipated response times from each of the transactions tested.
Prolifics Testing is fast establishing itself as the ‘go to’ testing consultancy for Universities and Colleges, having tested systems for a number of prominent Higher Education institutions, including University of Leicester, Aberdeen, Bournemouth, Canterbury, City, De Montfort, Dublin, Kingston, Loughborough, Maynooth, Nottingham Trent and Westminster.