SITS e:Vision Services

Whether it’s devising a comprehensive test strategy or carrying out a straightforward performance test, we have extensive experience of testing new and existing implementations of SITS.

Our services

Typical services we provide for clients using SITS include:

  • Test Automation
  • Performance Testing
  • Security Testing
  • User Acceptance Testing (UAT)
  • Data Migration & ETL

These services are provided on demand, in a blended delivery model. Our offshore test labs across India and the USA give us genuinely global scale and the capability to achieve significant cost savings and efficiencies, while our onsite and remote consultancy helps you to cover peaks in demand, staff shortages or the need for specialist skills.

Step-by-step guide: Performance Testing for SITS

At Prolifics Testing, we understand the pressure that is placed on Strategic Information Technology Systems during spikes in usage. These spikes typically occur as a result of:

  • Initial Applications
  • Offer Management
  • Clearing
  • Student Enrolment
  • Student Record
  • Student Finance
  • Accommodation
  • Module Registration
  • Room Booking
  • Online Results
  • HR Onboarding

Our consultants generate meaningful cloud-based load to simulate these key periods, so that issues can be identified in advance and IT departments can be confident that their applications will meet the needs of the business. We often do this before the launch of a new service.

The below guide is based on our experience of testing SITS e:Vision, an off-the-shelf package developed by Tribal Group and used by many UK universities. The guide focuses on a test for Module Registration and outlines the process we went through, from initial planning to delivery.

Step 1 – Planning

The first step is always planning. Our team worked with the project team to define the scenarios to test, as well as the expected numbers of concurrent users. For the Module Registration process, this involved identifying three typical combinations of course and associated modules, making up module diets of low, medium and high complexity.

Step 2 – Creating Scripts with Apache JMeter

Once the scenarios were agreed, we created three scripts, one for each diet. Each script needed test data; this data preparation was carried out by the university in the test lab, after data obfuscation had been run to protect sensitive personally identifiable information.

The following steps were scripted using Apache JMeter (a simplified sketch of the resulting user journey follows the list):

  • Login
  • Navigate to Module Registration from the Homepage
  • Select the Modules of choice for the course selected
  • Confirm Module Selection
  • Submit Choices
  • Logout
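
In JMeter these steps are typically captured with the HTTP(S) Test Script Recorder and stored as HTTP samplers. Purely as an illustration of the journey, the short Python sketch below walks one simulated student through the same sequence; the URLs and form fields are hypothetical placeholders, not the real SITS e:Vision endpoints.

  import requests

  BASE = "https://evision.example.ac.uk"   # hypothetical host, not a real SITS e:Vision URL

  def module_registration_journey(username, password, module_codes):
      """Walk one simulated student through the scripted steps."""
      s = requests.Session()   # keeps cookies (e.g. session IDs) between steps

      # Login
      s.post(f"{BASE}/login", data={"user": username, "pass": password})

      # Navigate to Module Registration from the homepage
      s.get(f"{BASE}/module-registration")

      # Select the modules of choice for the course selected
      s.post(f"{BASE}/module-registration/select", data={"modules": module_codes})

      # Confirm module selection and submit choices
      s.post(f"{BASE}/module-registration/confirm")
      r = s.post(f"{BASE}/module-registration/submit")

      # Logout
      s.get(f"{BASE}/logout")
      return r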

Step 3 – Parameterisation, Correlation, Assertion & Timers

Once the initial scripts had been recorded, the following activities were undertaken to make the scripts behave like real users:

  1. Parameterisation – hooking the scripts into external data so that each run can select different data items from a data file.
  2. Correlation – identifying values generated dynamically by the server, such as session IDs, that must be captured and re-inserted into the running scripts for them to succeed.
  3. Assertion – defining the success criteria for each request by matching the returned values against the expected values.
  4. Timers – inserting variable think times to simulate user activity more accurately.

Parameterisation

Parameterisation is the process of replacing hard-coded, user-defined values with variables. As we used different student records from SITS during registration, we parameterised all of the data that a student would fill in, in real time.
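
In JMeter this is usually done with a CSV Data Set Config element, which feeds each virtual user a different row of test data. As a rough illustration of the idea outside the tool, the Python sketch below hands each simulated iteration its own row from a data file; the file name and column names are hypothetical.

  import csv
  import itertools

  # Hypothetical data file; in JMeter the equivalent is a CSV Data Set Config element.
  with open("students.csv", newline="") as f:
      rows = list(csv.DictReader(f))   # e.g. columns: username,password,course,module_codes

  students = itertools.cycle(rows)     # recycle rows if there are more iterations than rows

  def next_student():
      """Return the parameter set for the next iteration, mimicking one CSV row per virtual user."""
      row = next(students)
      return row["username"], row["password"], row["module_codes"].split(";")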

Correlation

Correlation is one of the most important concepts in performance scripting. It deals with dynamic values generated by the server, such as session IDs and request verification tokens. The main values we correlated were:

  • JsessionID
  • Nkey
  • Isscode

As well as these, we also correlated the Module Code, so that a random module could be passed on each iteration rather than always using the same value.
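
In JMeter, correlation is typically handled with a post-processor such as the Regular Expression Extractor, which captures a server-generated value from one response and feeds it into subsequent requests. The Python sketch below shows the same pattern for a hypothetical hidden 'nkey' field; the URL, field name and module code are illustrative only.

  import re
  import requests

  s = requests.Session()

  # The first response contains server-generated values that must be captured and re-used.
  page = s.get("https://evision.example.ac.uk/module-registration").text   # hypothetical URL

  # Capture a dynamic value (here a hypothetical 'nkey' hidden field) with a regular expression,
  # just as a Regular Expression Extractor would in JMeter.
  match = re.search(r'name="nkey"\s+value="([^"]+)"', page)
  nkey = match.group(1) if match else ""

  # Re-insert the correlated value into the next request so that the server accepts it.
  s.post("https://evision.example.ac.uk/module-registration/select",
         data={"nkey": nkey, "modules": ["ABC101"]})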

Assertions

JMeter includes a valuable feature called assertions. An assertion defines the success criteria for a particular request by matching the returned values against the expected values, confirming that the scripts are working correctly and doing what is expected.

For example, we checked for the text “Thank you for registering your modules”. When the scripts execute, a request is marked as ‘Pass’ only if that text is included in the response; otherwise the request fails, which means the user received an unexpected page that needs to be debugged based on the server response.
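
The check itself is straightforward, as the Python sketch below illustrates for the confirmation text used here; the request URL is a hypothetical placeholder, and in JMeter the equivalent is a Response Assertion attached to the sampler.

  import requests

  EXPECTED = "Thank you for registering your modules"

  r = requests.post("https://evision.example.ac.uk/module-registration/submit")   # hypothetical URL

  # Mark the request as passed only if the expected confirmation text is present in the response;
  # otherwise flag it for debugging, mirroring a JMeter Response Assertion.
  if EXPECTED in r.text:
      print("PASS")
  else:
      print(f"FAIL: unexpected page returned (HTTP {r.status_code}) - needs debugging")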

Timers

This element of JMeter simulates realistic behaviour by adding delays to the requests sent to the server, since real users of a live application do not all perform their actions at the same time or at the same pace.

The Uniform Random Timer delays each request by a random interval within a given range. For example, before clicking Confirm Modules, the constant delay offset was set to 10,000 ms and the random delay maximum to 5,000 ms, meaning requests were delayed by a random period of between 10 and 15 seconds.
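
The arithmetic behind this is simply a fixed offset plus a uniformly distributed random amount. A minimal Python equivalent of the 10 to 15 second think time described above:

  import random
  import time

  CONSTANT_DELAY_OFFSET_MS = 10_000   # fixed part of the delay, as configured in the timer
  RANDOM_DELAY_MAXIMUM_MS = 5_000     # random part, uniformly distributed between 0 and this value

  def think_time():
      """Pause for 10-15 seconds, mirroring the Uniform Random Timer settings described above."""
      delay_ms = CONSTANT_DELAY_OFFSET_MS + random.uniform(0, RANDOM_DELAY_MAXIMUM_MS)
      time.sleep(delay_ms / 1000)

  # e.g. call think_time() immediately before submitting the Confirm Modules request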

Step 4 – Test Execution

Once the scripts were completed, test scenarios were designed to execute them with the required numbers of users. In this test we executed several simulations, each with a different objective:

  • Debug Test: to make sure that all systems and data are set up and working correctly, including monitoring
  • Normal Load Test: simulates the system working on any typical day/hour. It acts as a benchmark for response times when the system is operating under typical conditions
  • Peak Load Test: to test the peak hour usage of the application
  • Stress Test: to identify the limits of the system under test
  • Soak Test: to see how the system copes over time, particularly in managing resources at normal operational volumes with spikes in use up to peak load levels
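
In JMeter, the number of virtual users, the ramp-up period and the duration are configured on the Thread Group for each scenario. As a rough, non-JMeter illustration of what executing the scripts with a given number of users involves, the Python sketch below ramps up a number of concurrent simulated users; the user counts and ramp-up values in the comments are illustrative, not the figures used in this test.

  import threading
  import time

  def run_scenario(user_journey, concurrent_users, ramp_up_seconds):
      """Start `concurrent_users` threads, spread evenly over the ramp-up period."""
      threads = []
      for i in range(concurrent_users):
          t = threading.Thread(target=user_journey, name=f"vuser-{i}")
          t.start()
          threads.append(t)
          time.sleep(ramp_up_seconds / concurrent_users)   # stagger the start of each virtual user
      for t in threads:
          t.join()

  # Illustrative shapes only, e.g. a normal load run followed by a peak load run:
  # run_scenario(one_student_journey, concurrent_users=100, ramp_up_seconds=300)
  # run_scenario(one_student_journey, concurrent_users=500, ramp_up_seconds=300)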

We also normally recommend that some manual exploratory testing is conducted at a given point during the test. This is oriented around features that were not scripted, and in particular around areas of higher risk, such as running reports or other queries that may be database intensive.

Step 5 – Listeners

Listeners help to view the results of the test execution in different formats, such as a tree, table, graph or log file; both built-in and plug-in listeners are available in JMeter. For example, we used an Aggregate Report to show the response times and the error percentage from the data obtained during the test. We found that the average response time of the first request was over 30 seconds, which was highlighted to the project team so that the SLA for that particular transaction could be confirmed.
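
The same figures that an Aggregate Report shows, average response time and error percentage per transaction, can also be derived directly from the results file JMeter writes. A small Python sketch, assuming a CSV-format .jtl results file with the default label, elapsed and success columns:

  import csv
  from collections import defaultdict

  stats = defaultdict(lambda: {"count": 0, "elapsed_total": 0, "errors": 0})

  # results.jtl: JMeter's CSV results file (one row per sampled request)
  with open("results.jtl", newline="") as f:
      for row in csv.DictReader(f):
          s = stats[row["label"]]                     # transaction name
          s["count"] += 1
          s["elapsed_total"] += int(row["elapsed"])   # response time in milliseconds
          if row["success"] != "true":
              s["errors"] += 1

  for label, s in sorted(stats.items()):
      avg_ms = s["elapsed_total"] / s["count"]
      error_pct = 100 * s["errors"] / s["count"]
      print(f"{label}: avg {avg_ms:.0f} ms, errors {error_pct:.1f}%")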

Step 6 – Results

Once the tests were run, data analysis was carried out on the results displayed by JMeter after test execution.

Based on the results:

  • We identified specific transactions whose response times spiked under load, highlighting areas of the system for possible improvement
  • We correlated response times against server and network resources to identify bottlenecks
  • Several changes were made to the infrastructure, with the tests re-run after each change; the modifications included changes to hardware configuration and database indexes

Once the tuning had been completed, a final test proved the level of concurrency the system would be able to support, as well as the anticipated response times from each of the transactions tested.

Find out more

If you would like to learn more about how our SITS testing services can benefit your university, or receive a free Proof of Concept exercise, please contact us below.

...Our experience was excellent, the team overcame challenges with data, using automation to generate large data sets to be used within both automation and UAT. A regression pack was developed, covering both the web interface and the SITS client to automate our core processes, including the vital Online Application Form using their Quality Fusion platform. The team worked flexibly and within our timescales to achieve their objectives and provided valuable support to the programme. We continue to work together, expanding our automation packs and customising Quality Fusion functions for SITS.

Suki Samra, Test Manager, Arden University
