Tuesday, 14 November 2023

Prolifics have been performance testing software applications, websites and mobile apps for over 20 years. Here are a few things we've picked up along the way: 


1. Good Outcomes Start with a Good Test Plan

Here’s how we approach planning, the most important bit to get right: 

  • Understanding Business Objectives: As a first step, it's vital to understand what the software aims to achieve from a business perspective. This involves communicating with stakeholders to establish performance goals and align them with business outcomes, including how the system will be used, who will use it and where the peaks in demand will occur.

  • Gathering System Requirements: In-depth knowledge of the system architecture, technology stack and infrastructure is needed to determine which toolset will drive the tests and how infrastructure monitoring will be managed.

  • User Behaviour Analysis: We conduct an analysis of user behaviour to identify the most common paths through the application (user journeys), along with peak usage volumes and expected growth in user numbers. This information is crucial for creating realistic load simulation models.

  • Defining Performance Criteria: We identify and define performance criteria based on the business objectives. Non-functional requirements (NFRs) may already be in place, but often they're not. These NFRs feed into the analysis and results phase, where it becomes clear whether each test has met its performance criteria.

  • Test Environment Configuration: The test environment should mirror the production environment as closely as possible to ensure accurate results. This involves specifying the right hardware, network configuration and any other attributes that bring the test environment in line with production.
     
  • Tool Selection: Choosing the appropriate tools is fundamental. We select tools that can simulate the expected load and provide detailed analytics. Typically we use JMeter for web applications, together with the Prolifics accelerators and pre-built environments. Where more complex applications and thick clients need to be tested, we typically use either OpenText LoadRunner or Tricentis NeoLoad.

  • Test Scenario Identification: With all this information in hand, we identify test scenarios that combine scripts and data to measure the behaviour of the system. Examples include normal-load, peak, stress and soak tests, each with a different objective; a simple scenario sketch follows this list.
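
To make this concrete, the sketch below shows one way the outputs of planning (test scenarios and NFR thresholds) could be captured as structured data that later drives scripting and results analysis. It is a minimal illustration in Python; the scenario names, user volumes and thresholds are hypothetical examples, not recommendations.

# Minimal sketch: planning outputs captured as data. All names, volumes and
# thresholds below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str             # e.g. normal-load, peak, stress, soak
    virtual_users: int    # concurrent virtual users to simulate
    ramp_up_minutes: int  # time taken to reach full load
    duration_minutes: int # how long full load is sustained

# Hypothetical NFR thresholds agreed with stakeholders during planning.
NFR_THRESHOLDS = {"p95_response_ms": 2000, "error_rate_pct": 1.0}

SCENARIOS = [
    Scenario("normal-load", virtual_users=200,  ramp_up_minutes=10, duration_minutes=60),
    Scenario("peak",        virtual_users=500,  ramp_up_minutes=15, duration_minutes=60),
    Scenario("stress",      virtual_users=1000, ramp_up_minutes=30, duration_minutes=90),
    Scenario("soak",        virtual_users=200,  ramp_up_minutes=10, duration_minutes=480),
]

for s in SCENARIOS:
    print(f"{s.name}: {s.virtual_users} virtual users for {s.duration_minutes} minutes")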

Through the planning phase, we lay the groundwork for a successful project. An important part of planning is gaining consensus with our customers on approach, scope and volumes.


2. Test Data: The Unsung Hero of Performance Testing

Test data is a critical, yet often underappreciated, component of performance testing. Many organisations we engage with are taken aback by the sheer volume and intricacy of data required to conduct meaningful performance tests. Here's why it's so pivotal: 

  • Realism Through Volume: To simulate real-world conditions accurately, it's not sufficient to use just a handful of user accounts. A unique account is needed for each virtual user to mirror the concurrent interactions that occur in production (see the data-preparation sketch after this list). This approach ensures that our tests genuinely reflect the varied user behaviours and interactions that the application will encounter.

  • Depth and Diversity of Data: Each script we develop to emulate user transactions is backed by data representing a wide range of possible inputs. We don't just need a record for each user interaction; we need distinct data sets for every iteration. Having a database stocked with a representative number of records also contributes to the accuracy of the tests.

  • The Challenge of Single-Use Data: Often, the data we use in testing can be single-use, meaning once a virtual user performs a transaction, the data cannot be reused in its existing state. To overcome this, we've employed functional automation tools to replenish or reset data, ensuring that each test is as authentic and informative as the first.

  • Data Management Strategies: Effective data management is central to our performance testing regime. We've honed the practice of backing up data when it's in the 'right state', enabling us to reset the testing environment quickly and efficiently for multiple test runs. This practice saves significant time and resources, allowing for repeated testing without the need to recreate test data from scratch.

  • Preserving Data Integrity: We treat data with the utmost care to maintain its integrity throughout the testing process. This involves establishing protocols for data handling, storage, and backup, ensuring that the test data remains a reliable asset for the duration of the testing activities. 
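
To illustrate the kind of data preparation involved, the sketch below generates a unique, hypothetical test account for each virtual user and writes it to a CSV file that a load testing tool can feed to its virtual users. It is a simplified example, not our production tooling; the field names, volume and file name are assumptions.

# Minimal sketch: one unique, hypothetical account per virtual user, written to CSV
# for the load test tool to consume. Field names, volume and file name are assumptions.
import csv

VIRTUAL_USERS = 500  # hypothetical concurrency target from the test plan

def generate_accounts(count):
    for i in range(1, count + 1):
        yield {
            "username": f"perfuser{i:05d}",           # unique login per virtual user
            "email": f"perfuser{i:05d}@example.test",
            "order_ref": f"ORD-{i:07d}",              # single-use data for one iteration
        }

with open("test_accounts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["username", "email", "order_ref"])
    writer.writeheader()
    writer.writerows(generate_accounts(VIRTUAL_USERS))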


3. The Importance of Correlation

Correlation in performance testing is the process of ensuring that dynamic values, such as session IDs and security tokens, are captured and correctly used throughout the test to mimic the behaviour of real users. This is fundamental for achieving accurate and meaningful test results, as it guarantees that each virtual user interacts with the application in a unique way, just as they would in a live environment. 

Without proper correlation, performance tests can yield misleading outcomes. For instance, an application might appear to handle load exceptionally well, but this could be due to all virtual users being unintentionally funnelled through a single session, thus not truly testing the application’s capacity to manage concurrent, independent interactions.  

We place significant emphasis on sophisticated correlation. By meticulously handling dynamic data, we ensure that each simulated user's journey is as close to reality as possible. This includes the correct passing of session-related information from one request to the next, mirroring the stateful nature of human interactions with the application. 
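
As a simplified illustration of the idea, the sketch below uses Python's requests library (rather than a dedicated load testing tool) to capture a session token from a login response and pass it on the next request, which is what a correlated script does for every virtual user. The URLs, field names and header are assumptions for illustration only.

# Simplified correlation sketch: capture a dynamic value (a session token) from one
# response and reuse it on the next request. URLs, JSON fields and headers are
# assumptions; in a real test each virtual user does this with its own credentials.
import requests

BASE_URL = "https://app.example.test"  # hypothetical system under test

session = requests.Session()

# Step 1: log in and capture the dynamic token from the response body.
login_resp = session.post(f"{BASE_URL}/api/login",
                          json={"username": "perfuser00001", "password": "secret"})
login_resp.raise_for_status()
token = login_resp.json()["token"]  # the dynamic value that must be correlated

# Step 2: pass the captured token on the next request, as a real user's browser would.
orders_resp = session.get(f"{BASE_URL}/api/orders",
                          headers={"Authorization": f"Bearer {token}"})
orders_resp.raise_for_status()
print(orders_resp.status_code)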

The attention to detail in correlation also extends to the adaptability of the test scripts. As applications evolve, so do the patterns of dynamic data. Our scripts are designed to be robust yet flexible, accommodating changes in application behaviour without compromising the integrity of the test. 

Correlation is not just a technical requirement; it's a commitment to authenticity in performance testing. By mastering this, we provide our clients with the confidence that the performance insights we deliver are both precise and applicable, ensuring that when an application goes live, it performs as expected, without surprises. 

4. Performance Engineering: Shift Left 

Performance Engineering is a proactive approach to ensuring software performance that goes beyond traditional testing to integrate performance considerations into every phase of the development lifecycle, especially within agile environments.  

Performance engineering isn't confined to testing; it's woven into the fabric of the development process. From design and architecture to coding and deployment, performance is a key consideration, ensuring that the application is robust and responsive from the ground up. By integrating performance engineering within agile development pipelines, we ensure continuous performance feedback and improvement. This integration allows performance metrics to influence design decisions in real time, fostering an environment where performance is given the same priority as functionality.

We use infrastructure as code (IaC) to set up and manage environments in a way that's repeatable and scalable. This practice ensures that our performance testing environments are consistent with production, leading to more reliable results. Within our CI/CD pipelines, we implement automated gates that assess performance. Code changes that do not meet our stringent performance benchmarks are automatically flagged, ensuring high standards are maintained. 
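
As an illustration of such a gate, the sketch below reads response times from a results file, computes the 95th percentile and exits with a non-zero status if the benchmark is breached, which is enough for most CI/CD tools to fail the build. The file format, column name and threshold are assumptions; in practice the gate parses the load tool's own output.

# Minimal performance-gate sketch for a CI/CD pipeline. The results file format,
# column name and threshold are assumptions for illustration.
import csv
import statistics
import sys

P95_THRESHOLD_MS = 2000  # hypothetical benchmark taken from the NFRs

with open("results.csv") as f:  # assumed format: one response time per row, column 'elapsed_ms'
    times = [float(row["elapsed_ms"]) for row in csv.DictReader(f)]

# statistics.quantiles with n=20 returns 19 cut points at 5% steps; index 18 is the 95th percentile.
p95 = statistics.quantiles(times, n=20)[18]
print(f"95th percentile: {p95:.0f} ms (threshold {P95_THRESHOLD_MS} ms)")

sys.exit(0 if p95 <= P95_THRESHOLD_MS else 1)  # a non-zero exit code fails the pipeline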

The shift-left strategy means performance testing is incorporated earlier in the development cycle. This approach helps to identify potential performance issues before they become costly to fix, reinforcing the efficiency of the development process. In line with agile principles, we establish continuous monitoring and feedback mechanisms. These provide ongoing insights into the application’s performance, enabling quick refinements and helping to avoid performance regressions. 

Performance engineering is a discipline that ensures software is designed for optimal performance. By embedding it into the agile development pipeline, we create applications that not only function as required but do so with the resilience and speed that modern users demand. 


5. Reporting and Analytics: Matching Results Against KPIs 

In performance testing, reporting and analytics are not merely about generating data; they're about delivering clarity and ensuring results align with key performance indicators (KPIs). Our reports are crafted to align the results of performance tests with predefined KPIs. These KPIs could range from page load time and transaction response times to concurrency levels and resource utilisation. Matching results against these benchmarks ensures we're not just collecting data but actively measuring success against business objectives. 

The 95th and 99th percentile measurements provide nuanced insights into application performance under stress beyond what average response times can show. By focusing on these percentiles in our KPIs, we're targeting the experiences of nearly all users, ensuring that the application meets performance standards even at its peak. Presenting these measures visually, with charts and graphs, makes the results easier to digest and supports faster decision-making.
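
As a small worked example of matching results against KPIs, the sketch below computes 95th and 99th percentile response times per transaction and flags whether each meets its target. The transaction names, sample timings and KPI targets are hypothetical.

# Sketch: compute p95/p99 per transaction and compare against KPI targets.
# Transaction names, sample timings and targets are hypothetical; real figures
# would be parsed from the load tool's results.
import statistics

results_ms = {
    "Login":    [420, 460, 510, 530, 800, 940, 990, 480, 470, 455, 600, 640],
    "Checkout": [900, 1100, 1250, 1400, 2100, 2600, 980, 1020, 1150, 1300, 1700, 1900],
}

kpi_p95_ms = {"Login": 1000, "Checkout": 2000}  # agreed KPI targets per transaction

for name, times in results_ms.items():
    cuts = statistics.quantiles(times, n=100, method="inclusive")  # 99 cut points at 1% steps
    p95, p99 = cuts[94], cuts[98]
    status = "PASS" if p95 <= kpi_p95_ms[name] else "FAIL"
    print(f"{name:9s} p95={p95:6.0f} ms  p99={p99:6.0f} ms  KPI p95<={kpi_p95_ms[name]} ms  {status}")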

Reporting and analytics in performance testing are about translating data into business intelligence. By ensuring our reporting is aligned with KPIs, we turn performance testing into a strategic asset, driving continuous improvement and operational success. 

We're passionate about performance testing and have an excellent UK team. Our clients are often repeat customers because there is real value in what we do: it's no exaggeration to say that every performance test we've run has picked something up that resulted in a better, faster and more resilient software application once the problems were fixed. Database indexes, licensing caps, load balancer configurations, non-optimised code, over-complicated reporting queries: we've seen it all.

Contact us for a no-obligation quotation or just some advice on what might be needed. 

Jonathan Binks - Head of Delivery
Prolifics Testing UK
