Optimal Tester to Developer Ratios

Monday, 15 June 2020

The challenge on any software project is the need for quantifiable data on resources (how much, how many and how long) to be included in project plans in advance of the activities that utilise those resources. Estimating testing staff numbers from development staff numbers with a simple ratio appears easy and tantalising, but what are the pitfalls?

One Size Fits All?

Unlike industries such as construction, where quantity surveyors can look up information in tables and calculate costs accurately, estimation in software development can be more of an art than a science.

There is no universal formula that can provide an estimate for the number of testers simply based on the number of developers. A number of factors (too numerous to cover in this article) will directly impact the testing effort required on a project.

Here are a few examples:

Software Life Cycle

In the world of Agile, close working, Continuous Integration and relatively small team sizes may indicate a higher ratio of testers to developers compared to a larger V model or Waterfall-based project. For example, a typical Agile project may comprise five development staff and two testers, but it does not follow that this two-testers-per-five-developers ratio can be applied to a V model development of 20 developers to conclude that eight testers are required.
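As a minimal sketch (in Python, using the hypothetical team sizes above), that naive linear scaling looks like this; the remainder of this article explains why such arithmetic should not be trusted on its own:

    # Hypothetical illustration only: naively scaling one team's ratio to another project.
    agile_developers, agile_testers = 5, 2                       # typical small Agile team
    testers_per_developer = agile_testers / agile_developers     # 0.4

    v_model_developers = 20
    naive_testers = v_model_developers * testers_per_developer   # 8.0

    print(f"Naive estimate: {naive_testers:.0f} testers")
    # This extrapolation ignores life cycle, application type, risk, automation
    # and the other factors discussed below.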

Application Type

This directly impacts the testing effort. A safety-critical system in a regulated industry will require more extensive coverage. Avionics software, for example, must meet more stringent coverage requirements under a standard such as DO-178B (Software Considerations in Airborne Systems and Equipment Certification) than, say, a mobile app to book a hotel room. For these critical systems the developer effort involved in unit testing will be proportionally higher than for a non-safety-critical system, and the tester effort will also be higher.

Specialist Testing

Published tester-to-developer ratios will most likely be based on staffing numbers for the duration of a project. If the software requires specialist testing such as security, usability or some form of compliance testing, this is likely to be needed only intermittently, and the additional testing effort may not be factored into a simple tester-to-developer ratio.

Risk Management

Ideally, risks should be identified in advance, prioritised and agreed by the business, development leads and Test Managers, so that the risks to be addressed by testing are known. Two similar projects may have very different tester-to-developer ratios if they differ in the level of risk being addressed: the more risk that testing must address (or the lower the residual risk the business will accept), the greater the testing effort and hence the higher the tester-to-developer ratio.

Automation

Aside from the different skill set involved in automated testing, the level of automation will have a significant impact. This is particularly true if the test strategy calls for a regression-averse approach, where it matters whether the resulting regression tests are manually scripted and executed or automated. With automation there may be a lower ratio of testers to developers (less manual testing effort, but a more specialised tester skill set).

Requirements

The more complex the requirements, the more testing effort required. This may also mean a more complex development effort. It does not necessarily follow that a complex set of requirements requiring more development and therefore more testing will result in a meaningful tester-to-developer ratio that can be relied upon for estimation.

Again, two projects with identical, complex requirements may differ greatly if one is implemented poorly, with insufficient unit testing and more bugs in the software, compared with the same software better implemented, more thoroughly unit tested and with fewer bugs. The latter requires less testing effort, resulting in a lower tester-to-developer ratio (fewer testers than for the buggier software).

Type of Testing

The type of testing will significantly alter the ratio. If, say, the application is largely in maintenance mode, small changes implemented by developers result in a proportionally small amount of development compared to the testing effort, which includes regression testing.

Tester Tasks

Any ratio will also depend on what the testers are actually doing. A number of activities will push the ratio of testers to developers higher, including:

  • Customer support activities, including handling support calls
  • Customer site visits
  • Supervising beta testing
  • Assisting development team and business owners in defining acceptance criteria
  • Degree of participation in User Acceptance Testing
  • Requirements inspections
  • Writing product documentation
  • Deployment or product roll-out
  • Tool support

So What is the Ratio?

Whilst ratios may be viewed as an expedient and simple way to arrive at an estimate, there is a significant risk when one organisation takes another organisation’s ratios and applies them to its own project without regard to differences in technology, process maturity and skill levels.

Part of the problem with identifying a reasonably accurate ratio is not just the reasons above but also the lack of data collected by organisations on this information. Thus, for many, it is not an exact figure that is sought but an assurance that the testing estimates are reasonably correct and there is a degree of confidence that the testing can be undertaken within the project time frame with the staff and budget allocated to this function.

So when looking for actual numbers (ratios), it is possible to search the web and glean information; for example, according to the book ‘Microsoft Secrets’, Microsoft employs a 1-to-1 ratio of testers to developers.

A separate informal poll of participants from 29 organisations in a conference session found the most common ratio was one tester to three developers:

  • Minimum ratio was 0 testers to 1 developer
  • Maximum ratio was 1 tester to 30 developers
  • Most common ratio was 1 tester to 3 developers

My own industry experience of large, complex real-time systems was closer to 1 tester per 10 developers on V model and Waterfall developments (with team sizes of 10 to 30 developers). More recently, based on training courses I have run, the ratios articulated by testers on Agile development projects across a range of companies have been 1 to 2 testers per 5 or 6 developers (with maximum team sizes of 8).

So What Do We Do?

Much of the above discussion focuses on the pitfalls of using a tester-to-developer ratio as a crude tool to estimate the number of testers needed. So how might the testing effort be estimated?

One way forward is to start collecting data for your own organisation; this could be for projects, releases or, in an Agile world, sprints.

Provided your organisation is not under constant change, the software lifecycle and associated processes are consistent, and the software developed per project or release remains consistent in application type, size and complexity, then a number of factors may be constant from project to project. There is value in collecting information on the quantity of, and the absolute time or percentage of a tester's time spent on:

  • Requirements (or user stories) analysed
  • Test cases written manually or scripted
  • Test cases executed
  • Bugs found and severity of bugs
  • Writing bug reports and re-testing bug fixes

This data can be reviewed (say, in project and release retrospective meetings) to identify variances from estimates, and the estimates refined for the next release. Close engagement with the development manager is also required to establish whether the development effort has remained consistent from one release to the next. This approach may then provide a rough estimation ratio for the testing effort within a particular organisation.
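To illustrate, here is a minimal sketch of how such collected data might be aggregated into a localised ratio; the record structure, field names and figures are hypothetical and would be replaced by your own organisation's data:

    # Hypothetical per-release effort records; substitute real collected data.
    releases = [
        {"name": "R1", "developer_days": 400, "tester_days": 120},
        {"name": "R2", "developer_days": 350, "tester_days": 110},
        {"name": "R3", "developer_days": 420, "tester_days": 140},
    ]

    total_dev_days = sum(r["developer_days"] for r in releases)
    total_test_days = sum(r["tester_days"] for r in releases)

    # A localised ratio, meaningful only while process, application type,
    # size and complexity remain broadly consistent from release to release.
    ratio = total_test_days / total_dev_days
    print(f"Observed testing effort: {ratio:.2f} tester-days per developer-day")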

Implications For Annual Budget Cycles

The reality is that testing team numbers may well have been decided before projects are known in any detail. A flexible approach may be needed whereby historical data collected over the previous year is used to estimate the next financial year (with adjustments based on the business view of the year ahead). In addition, any peaks or specialist testing may have to be addressed by outsourcing some of the testing, which should be provisioned for in the planned expense budget for the next financial year.
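As a rough sketch of that budgeting approach (all figures and adjustment factors here are hypothetical assumptions, not recommendations), the carried-forward ratio and a provision for specialist peaks might be combined like this:

    # Hypothetical budget-cycle estimate using last year's observed ratio.
    last_year_ratio = 0.30            # tester-days per developer-day, from collected data
    planned_dev_days = 2500           # development effort forecast for the next financial year
    business_adjustment = 1.10        # e.g. a riskier roadmap expected: +10%

    core_test_days = planned_dev_days * last_year_ratio * business_adjustment
    specialist_provision_days = 60    # outsourced security/usability peaks, budgeted separately

    print(f"Budgeted testing effort: {core_test_days + specialist_provision_days:.0f} tester-days")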

Conclusion

In essence there is no silver bullet for estimating testing effort from ratios of testing staff to development staff, and any organisation offered such a solution should question its validity.

Having said that, an awareness of the pitfalls (some of which I have described) and a systematic approach to collecting information may enable what I would describe as a very localised tester-to-developer ratio to be used: one pertinent and optimal only to the specific organisation whose own data produced it.

Steve Helsby,
Senior Trainer and Consultant
