Friday, 14 April 2023
Digital transformation is a vital part of improving service and efficiency for customers in Higher Education. It goes hand in hand with migrating systems to the Cloud, system upgrades, and changes to integrated applications and middleware. The competitive forces around attracting the best students remain strong in the sector, with IT continuing to be a key enabler and an important first impression for potential customers.
When implementing a new student records system, or upgrading an existing one, there is a lot to consider. Top of the list are the customisations needed before the system can go live, so that the new software fits university business processes. This is often a major factor in time and cost estimates for projects, particularly in the Higher Education (HE) sector. We are seeing more institutions keen to adapt their processes to the system instead, to reduce up-front costs and make future upgrades easier. Software providers are approaching the same problem from the other side, delivering systems that more closely match common processes, with more configuration possible, reducing the need for customisation.
The first thing any test strategy should address is usually the customisations, which are more likely to harbour issues than core functionality from the software supplier, which is typically stable and already in use by multiple customers. In-scope customisations feed into the initial estimates for functional testing, followed closely by the configuration needed to fit the system into the organisation. It is also important to assess how much testing the supplier carries out and establish a level of confidence, which will influence how much functional testing is needed.
Location of infrastructure is an important factor: whether software is deployed on premise or in the Cloud. This decision drives the definition of a number of test phases, most notably integration, performance and security. To achieve maximum benefit, Student Information Systems require significant integration with other major business applications, including Finance, ERP, Virtual Learning Environments (VLE), Timetabling, Attendance, Security and more. Integration testing is vital, to ensure that the Student Information System can connect with other applications and that data can be exchanged and transformed between systems accurately and at the required volumes. Middleware and structured API messaging layers, such as MuleSoft, make these interactions easier and more standardised, but they still need careful testing. In our experience, integration testing of connected applications in the HE sector depends on the level of involvement of the supplier and / or in-house technical team.
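As a minimal sketch of what "accurate exchange and transformation" means in practice, the check below compares a source SIS record against the record received by a downstream Finance system, after applying the transformation the middleware is expected to perform. All field names and the transformation itself are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical sketch: verifying a record exchanged between a SIS and a
# downstream Finance system via a middleware layer. Field names are
# illustrative assumptions only.

def transform_for_finance(sis_record: dict) -> dict:
    """Apply the transformation the middleware is expected to perform."""
    return {
        "student_ref": sis_record["student_id"],
        "full_name": f"{sis_record['forename']} {sis_record['surname']}",
        "fee_status": sis_record["fee_status"].upper(),
    }

def verify_exchange(sis_record: dict, finance_record: dict) -> list[str]:
    """Return a list of mismatches between expected and actual target data."""
    expected = transform_for_finance(sis_record)
    return [
        f"{field}: expected {expected[field]!r}, got {finance_record.get(field)!r}"
        for field in expected
        if finance_record.get(field) != expected[field]
    ]
```

Run against a batch of records, a check like this turns "did the integration work?" into a concrete list of field-level discrepancies that can be triaged.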
Careful mapping of test data should be performed so that as many possibilities as practical are covered. Here at Prolifics we use our in-house tool, Effecta, to assist with this process, setting up source-to-target maps that allow large amounts of data to be exchanged between applications and verified. The availability of sufficient good-quality data in the right state is vital for most test phases, and preparing it can be highly time-consuming. There are various ways around this, including anonymising production data and generating test data, which can be done via test automation and performance tools. Once data has been generated, supporting procedures are needed around it, to ensure longevity and the ability to restore and roll back to known baselines before running additional cycles of testing.
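To illustrate the generation side, here is a hedged sketch of synthetic student data created from a fixed seed, so that a cohort can be regenerated identically for each cycle of testing — the repeatability that makes rolling back to a known baseline practical. The field choices are assumptions for illustration and not a description of Effecta.

```python
# Illustrative sketch: seeded synthetic student records, so the same cohort
# can be regenerated for every test cycle. Fields are assumptions only.
import random
import string

def make_student(seed: int) -> dict:
    rng = random.Random(seed)  # seeded, so the record is fully reproducible
    surname = rng.choice(string.ascii_uppercase) + "".join(
        rng.choices(string.ascii_lowercase, k=6)
    )
    return {
        "student_id": f"S{rng.randrange(100000, 999999)}",
        "surname": surname,
        "fee_status": rng.choice(["home", "overseas"]),
        "year_of_entry": rng.choice([2021, 2022, 2023]),
    }

def make_cohort(n: int, seed: int = 42) -> list[dict]:
    """Generate n students deterministically from a base seed."""
    return [make_student(seed + i) for i in range(n)]
```

Because every record derives from the seed, "restoring the baseline" for data created this way is simply a matter of regenerating it.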
Regression testing should be a core component of the test strategy. It should cover how manual tests are automated, use the right tools, and build automated coverage across a wide but shallow swathe of system functionality, to establish confidence in the system at an early stage.
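"Wide but shallow" can be pictured as one quick check per functional area rather than deep scenarios in a few areas. The sketch below, with placeholder checks and illustrative area names, runs every check and reports per-area results without stopping at the first failure, so a single run gives a breadth-first picture of system health.

```python
# Hedged sketch: a "wide but shallow" regression pack as a registry of quick
# checks, one per functional area. Area names and checks are placeholders.

def run_regression_pack(checks: dict) -> dict:
    """Run each named check, returning pass/fail per area without stopping."""
    results = {}
    for area, check in checks.items():
        try:
            results[area] = bool(check())
        except Exception:
            results[area] = False  # a crash counts as a failure, not a halt
    return results

# Placeholders standing in for real assertions against the system under test.
checks = {
    "enrolment": lambda: True,
    "assessment": lambda: True,
    "timetabling": lambda: 1 + 1 == 2,
}
```

In practice each lambda would be replaced by an automated UI or API check, and the registry grows one area at a time as manual tests are automated.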
Acceptance testing will always be part of any test strategy involving a SIS. To maximise independence and the use of business knowledge, it depends heavily on user involvement and effective representation from different groups, covering the right range of system functions and business processes. This part of the test pyramid should not identify as many defects as earlier phases, and relies on the business knowledge of the SMEs doing the testing.
Non-functional testing is an important part of any test strategy, and particularly so for Student Records Systems. They need to cope with the known peaks in the academic calendar, as well as ensuring the security of the significant amounts of PII (personally identifiable information) they hold. The non-functional elements of a test strategy are about mitigating these risks as far as possible, ideally in an environment similar in size and connectivity to production.
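A minimal sketch of what a peak-load check involves: fire a number of concurrent requests (as might happen on results day or at enrolment) and assert that a response-time percentile stays within a target. The user count, percentile, and target here are illustrative assumptions; a real performance test would use a dedicated tool and production-like volumes.

```python
# Illustrative sketch: checking a 95th-percentile response-time target under
# concurrent load. User count and target are assumptions for illustration.
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(fn) -> float:
    """Time a single call to the operation under test, in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def peak_load_check(fn, users: int = 50, p95_target_s: float = 1.0) -> bool:
    """Run fn concurrently for the given users; pass if p95 meets target."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = sorted(pool.map(lambda _: timed_call(fn), range(users)))
    p95 = timings[int(len(timings) * 0.95) - 1]
    return p95 <= p95_target_s
```

The same harness shape works for exercising a login page, an enrolment API, or a results-release endpoint: only `fn` changes.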
When considering a strategy, we take a top-down approach, working with stakeholders to identify risks and how they are to be mitigated by test phases. A test phase matrix of applications, plotted against release / change types (e.g. patch, change, upgrade), can then inform the organisation which test phases are needed and the parameters for each one — documentation, wiki links, tools and so on.
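In its simplest form, one row of such a matrix is just a mapping from change type to the phases it triggers. The selections below are illustrative, not prescriptive — a real matrix would be agreed per application with stakeholders.

```python
# Hedged sketch of one row of a test phase matrix: change types mapped to the
# test phases they trigger. Phase selections are illustrative assumptions.
TEST_PHASE_MATRIX = {
    "patch": ["smoke", "regression"],
    "change": ["functional", "integration", "regression"],
    "upgrade": [
        "functional", "integration", "regression",
        "performance", "security", "acceptance",
    ],
}

def phases_for(change_type: str) -> list[str]:
    """Look up the test phases required for a given release / change type."""
    return TEST_PHASE_MATRIX.get(change_type, [])
```

Each phase entry would then carry its parameters — documentation, wiki links, tools — alongside the phase name.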
It is important that any such test strategy is developed in tandem with customer teams, to ensure long term buy-in and agreement, so that value can be realised over time.
Please get in touch for tailored advice and help with testing your Student Records / Information System.
Jon Binks - Head of Delivery