Optimal Tester to Developer Ratios

Thursday, 05 October 2023

The challenge on any software project is the need for quantifiable data on resources (how much, how many and how long) to be included in project plans in advance of the activities that use those resources. Estimating testing staff numbers from development staff numbers with a simple ratio-based model appears tantalisingly easy, but what are the pitfalls?

One Size Fits All?

Unlike other industries, such as construction, in which quantity surveyors can look up information in tables and accurately calculate costs, estimation in software development can be more of an art than a science.

There is no universal formula that can provide an estimate for the number of testers simply based on the number of developers. A number of factors (too numerous to cover in this article) will directly impact the testing effort required on a project.

Another major consideration is that the IT industry is constantly changing, with some major shifts currently underway in AI that are starting to affect how we answer this question.

Here are a few examples of factors that can affect the ratio:

Software Life Cycle

In the world of Agile, close working, Continuous Integration and relatively small team sizes may indicate a higher ratio of testers to developers compared to a more traditional, larger-scale sequential or V-model project. This ratio does not scale across models: a typical Agile project may comprise five developers and two testers, but applying that same ratio to a V-model project of 20 developers would imply eight testers, which is probably too many.

Application Type

This directly impacts the testing effort. A safety-critical system in a specific industry will require more extensive coverage. Avionics software, for example, must meet a standard such as DO-178B (Software Considerations in Airborne Systems and Equipment Certification), requiring far more stringent coverage than, say, a mobile app for booking a hotel room. For these critical systems the developer effort involved in unit testing will be proportionally higher than for a non-safety-critical system, and the tester effort will also be higher.

Specialist Testing

Where figures for tester-to-developer ratios are concerned, these will most likely be based on staffing numbers for the duration of a project. If the software requires specialist testing such as security, performance or some form of compliance testing, this is likely to be required intermittently. This additional testing effort may not be factored into a simple tester-to-developer ratio.

Risk Management

Ideally, risks should be identified in advance, prioritised and agreed by the business, development leads and test team, so that the risks to be addressed by testing are known. Two similar projects may have very different tester-to-developer ratios if they differ in the level of risk being addressed: the higher the risks being addressed, the greater the testing effort and hence the higher the tester-to-developer ratio.


Test Automation

Aside from the different skill set involved in automated testing, the level of automation will have a significant impact. The ratio is directly affected if test automation is employed by the development team at the unit level, following a shift-left approach of doing more testing early in the lifecycle, for example using a Software Development Engineer in Test (SDET) model, which is a popular and beneficial approach.

It also applies to tests at the system level, performed by the test team. Automation has come a long way: it is accessible to more people and is increasingly enhanced with AI features that make automation easier, including visual navigation, seamless object recognition and self-healing scripts. All of these areas have traditionally consumed a significant amount of the test automation effort, none more so than maintaining scripts in line with the system under test.

There seems little doubt that improvements in testing tools using AI and Machine Learning will increase the efficiency of the testing team and may eventually mean fewer testers are needed.

It could also be argued that AI will improve the quality of developed code at source, so the number of defects reaching the test team will be lower anyway, but we are still a little way from that.


Requirements Complexity

The more complex the requirements, the more testing effort required. This may also mean a more complex development effort. It does not necessarily follow, however, that a complex set of requirements demanding more development and therefore more testing will yield a meaningful tester-to-developer ratio that can be relied upon for estimation.

Again, two projects with identical complex requirements may differ greatly if one is implemented poorly, with insufficient unit testing and a higher number of bugs, compared to the same software better implemented, more thoroughly unit tested and with fewer bugs. The latter requires less testing effort, resulting in a lower tester-to-developer ratio (fewer testers than for the buggier software).


Maintenance Testing

The type of testing will also significantly alter the ratio. If the application is in production and being maintained, small changes implemented by developers involve proportionally little development effort compared to the testing effort needed to verify both the changes made and the overall functionality of the system via regression testing, though again this could be addressed with more test automation.

Tester tasks

Any ratio will also depend on what the testers are actually doing - in many organisations testers have other responsibilities too! A number of activities will push the ratio of testers to developers higher, including:

  • Customer support activities, including handling support calls
  • Customer site visits
  • Supervising beta testing
  • Assisting development team and business owners in defining acceptance criteria
  • Degree of participation in User Acceptance Testing
  • Requirements inspections
  • Writing product documentation
  • Deployment or product roll-out
  • Tool support

So, what is the ratio?

Whilst ratios may be viewed as an expedient and simple way to arrive at an estimate, there is a significant risk when one organisation takes another organisation's ratios and applies them to its own project without regard to differences in technology, process maturity and skill levels.

Part of the problem with identifying a reasonably accurate ratio is not just the factors above but also the lack of data collected by organisations. Thus, for many, it is not an exact figure that is sought but an assurance that the testing estimates are reasonably correct, and a degree of confidence that the testing can be undertaken within the project time frame with the staff and budget allocated to it.

So when looking for actual numbers (ratios), it is possible to search the web and glean information; for example according to the book ‘Microsoft Secrets’, Microsoft employs a 1-to-1 ratio of testers to developers.

A separate informal poll of participants from 29 organisations in a conference session found the most common ratio was one tester to three developers:

  • Minimum ratio was 0 testers to 1 developer
  • Maximum ratio was 1 tester to 30 developers
  • Most common ratio was 1 tester to 3 developers

Our own experience, from Prolifics projects as well as those of our customers and people we have trained, is 1-2 testers for around 5-6 developers, up to maximum team sizes of about 8. For larger teams, the ratio tends to come down, as discussed above.

So, what do we do?

Much of the above discussion focuses on the pitfalls of using a tester-to-developer ratio as a crude tool to estimate the number of testers needed. So how might the testing effort be estimated?

One way forward is to start collecting data for your own organisation; this could be on projects, releases or sprints. Gathering good metrics is almost always one of the recommendations we make when doing process improvement work – knowing where you are is a vital ingredient in making testing great (again).

Provided your organisation is not under constant change, the software lifecycle and associated processes are consistent, and the software being developed per project or release remains consistent in application type, size and complexity, then a number of factors may be stable from project to project. There is value in collecting information on the quantity of, and the absolute time or percentage of a tester's time spent on:

  • User stories tested per sprint, per release
  • Test cases per sprint, both formal and exploratory
  • Test cases executed, and how many found bugs
  • Test cases automated (can also be used to calculate RoI of automation)
  • Bugs found and severity of bugs
  • Writing bug reports and re-testing bug fixes
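As a minimal sketch of how such per-release data might be aggregated into a localised ratio, the following uses a hypothetical record type and made-up figures (the field names and numbers are illustrative, not from any real project):

```python
from dataclasses import dataclass

@dataclass
class ReleaseMetrics:
    """Per-release figures of the kind listed above; all values are hypothetical."""
    name: str
    developers: int
    testers: int
    test_cases_executed: int
    test_cases_finding_bugs: int

def local_tester_dev_ratio(releases: list) -> float:
    """Average testers-per-developer observed across past releases."""
    return sum(r.testers / r.developers for r in releases) / len(releases)

def detection_rate(r: ReleaseMetrics) -> float:
    """Share of executed test cases that found at least one bug."""
    return r.test_cases_finding_bugs / r.test_cases_executed

history = [
    ReleaseMetrics("R1", developers=6, testers=2,
                   test_cases_executed=400, test_cases_finding_bugs=60),
    ReleaseMetrics("R2", developers=5, testers=2,
                   test_cases_executed=350, test_cases_finding_bugs=42),
]

print(f"Local ratio: {local_tester_dev_ratio(history):.2f} testers per developer")
print(f"R1 detection rate: {detection_rate(history[0]):.0%}")
```

The point is not the specific formulas but that the ratio is derived from your own history rather than borrowed from another organisation.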

This data can be reviewed (say, in project and release retrospective meetings) to identify variances from estimates, and the estimates refined for the next release. Close engagement with the development manager is also needed to clarify whether development is consistent from one release to the next. This approach may then provide a rough estimation ratio for the testing effort within a particular organisation, as well as useful information on trends and, when changes are made, the impact of those changes.

Implications For Annual Budget Cycles

The reality is that in many organisations, testing team numbers may well have been decided before project details are available. A flexible approach may be needed, whereby the historical perspective based on data collected in the previous year is used to estimate the next financial year (with adjustments based on a business view of the year ahead). In addition, any peaks or specialist testing may have to be addressed by outsourcing some of the testing, which should be provisioned for in the budgets and aligned to projects.
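That budgeting arithmetic can be sketched as a one-line model; the ratio, headcount and adjustment factor below are all hypothetical placeholders for an organisation's own figures:

```python
import math

def budgeted_testers(observed_ratio: float, planned_developers: int,
                     adjustment: float = 1.0) -> int:
    """Next year's tester headcount from last year's observed
    testers-per-developer ratio, with a business adjustment factor."""
    return math.ceil(observed_ratio * planned_developers * adjustment)

# Hypothetical: last year we observed 0.35 testers per developer, next year
# plans 24 developers, and the business expects roughly 10% more testing demand.
print(budgeted_testers(0.35, planned_developers=24, adjustment=1.1))
```

Rounding up is deliberate: a fractional tester is usually budgeted as a whole head or covered by outsourcing the peak.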


Conclusion

There is no silver bullet for estimating testing based on ratios of testing to development staff, and any organisation offered such a solution should question its validity. A simple ratio may also fail to account for the up-front effort in activities that provide later benefit, for example developing test automation frameworks for regression testing and non-functional testing. The AI revolution will increasingly drive further change, as technology in both software testing and development advances.

Having said all that, an awareness of the pitfalls (some of which I have described) and a systematic approach to collecting information should enable what I would describe as a very localised tester-to-developer ratio: one pertinent and optimal to that specific organisation, application type and industry, and based on data collected within that organisation.

Jonathan Binks
Head of Delivery and ISTQB Trainer
