Background and Objectives
Queen Mary University of London (QMUL) identified the need for an independent review of its software testing processes, to align the testing function with industry best practice and optimise the effort spent by its end users, focusing it on acceptance testing and allowing earlier test levels to do the heavy lifting in finding functional problems. Prolifics Testing were selected to perform this piece of work, based on their experience in the UK Higher Education sector and past projects of a similar nature.
Our team carried out a testing health check, which entailed a comprehensive review of current practices, procedures, documentation and tools, including consulting with staff responsible for each of the major business applications in use within the university.
Investigation and Findings
Our investigation revealed that QMUL relied on a diverse array of complex, business-critical applications, primarily commercial off-the-shelf (COTS) systems with custom configurations and interfaces. However, there was no universally adopted standard for testing and no single team responsible for testing and QA. The lack of an overall test strategy meant different groups were performing testing at different levels, with no overall view of how test levels complement one another in a pyramid model.
- Varied approaches to unit testing, highlighting the need for a more standardised methodology
- Limited integration testing by development teams
- A need for better allocation of testing and QA resources
- A recognised need to enhance regression testing and test automation
- Insufficient performance testing of applications
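The pyramid model mentioned above can be illustrated with a minimal sketch: many fast, cheap unit tests at the base catch functional problems before slower integration and acceptance tests run. The function and module codes below are hypothetical examples, not drawn from QMUL's actual systems.

```python
# Sketch of the base of the test pyramid: fast unit tests on core logic.
# The enrolment-credit function below is a hypothetical example.

def calculate_module_credits(modules):
    """Sum the credit values of a student's chosen modules.

    `modules` is a list of (module_code, credits) pairs.
    """
    return sum(credits for _, credits in modules)

def test_calculate_module_credits():
    # Unit level: cheap, fast checks that run on every change,
    # so later test levels can focus on integration and acceptance.
    assert calculate_module_credits([("SE101", 15), ("DB102", 30)]) == 45
    assert calculate_module_credits([]) == 0

if __name__ == "__main__":
    test_calculate_module_credits()
    print("unit tests passed")
```

In a standardised setup these checks would run automatically on every commit, leaving acceptance testing by end users to confirm business fit rather than hunt for functional defects.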
Recommendations and Implementation
A primary recommendation was the establishment of a dedicated testing team, underpinned by a clear strategy and robust governance structure. This team would focus on:
- Implementing standardised testing processes
- Enhancing knowledge sharing across departments
- Developing a central repository for test assets
- Instituting regular performance testing, especially for student-facing applications
Further recommendations aimed to raise the profile of testing within the university and ensure it is carried out effectively and consistently, in line with project and programme goals. It was agreed that these points would be addressed by implementing an organisation-wide test strategy, bringing all testing best practice together.
As a follow-on piece of work, Prolifics Testing were asked to return and develop a test strategy for the institution, coinciding with the appointment of a new Test Manager to lead the testing function. Drawing on the interviews and documentation gathered during the health check, a test strategy was developed that encompassed all applications and took different release types into consideration. The aim was a standardised approach across all applications, referencing a risk matrix, with accompanying process, documentation and best practice for each test phase relevant to the type of change being introduced.
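The idea of a risk matrix keyed on release type can be sketched as a simple lookup that maps each type of change to the test phases it requires. The release types and phases below are illustrative assumptions, not QMUL's actual matrix.

```python
# Hypothetical risk matrix: each release type maps to the test phases
# mandated for it, giving every change a standardised, risk-appropriate path.

RISK_MATRIX = {
    "patch":         ["smoke", "targeted regression"],
    "minor release": ["unit", "integration", "regression"],
    "major release": ["unit", "integration", "regression",
                      "performance", "user acceptance"],
}

def required_phases(release_type):
    """Return the test phases mandated for a given release type."""
    return RISK_MATRIX[release_type]

if __name__ == "__main__":
    for release_type in RISK_MATRIX:
        print(release_type, "->", ", ".join(required_phases(release_type)))
```

In practice each entry would also reference the process, documentation and best-practice guidance for that phase, so teams consult one source rather than improvising per application.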
Outcomes and Benefits
The implementation of these recommendations led to significant improvements:
- Improved Software Quality: Standardised testing processes have reduced the incidence of defects reaching production
- Increased Efficiency: A dedicated testing team has streamlined the testing process, making it more efficient and effective
- Risk Mitigation: Introduction of risk-targeted regression testing has significantly mitigated potential risks associated with software changes
- Compliance with Industry Best Practices: Adoption of industry-standard practices has aligned QMUL with the highest standards in software testing
This health check and subsequent implementation of a test strategy not only improved the software testing landscape at QMUL but also set a benchmark for best practices in software testing within the academic sector. Prolifics' expertise and tailored approach have ensured that QMUL is well-equipped to handle current and future software testing challenges efficiently and effectively.