
Are there any substantiated recommendations how many testers a team should have per programmer? I'm more interested in opinions referring to an old-school development approach (no agile stuff) and larger projects. Sources are welcome.

h0b0
  • `"an old-school development approach"` do you mean Waterfall methodology? – StuperUser Jan 05 '12 at 16:11
  • Please don't ask for opinions - that leads to discussion, which is not what Stack Exchange does well. Instead, ask for proper documented research. – ChrisF Jan 05 '12 at 16:11
  • In the Agile projects I have been on, the average was 3:1. In every Waterfall project I have been on, it was probably 2:4:1:10 (2 project managers to 4 business analysts who never get held accountable for incomplete requirements, to 1 stressed-out, overworked developer who never stays at the company longer than a year, to 10 whiny, completely untechnical manual testers who constantly bombard the developers with inane questions) – maple_shaft Jan 05 '12 at 16:16
  • @maple_shaft, so a team of 10 developers would have 20 project managers and 100 testers? – Malfist Jan 05 '12 at 16:37
  • @Malfist Clearly I am exaggerating to make a point, but I was on one with 2 developers, 3 project managers, 6 BA's, and 8 testers. – maple_shaft Jan 05 '12 at 16:41
  • I guess it depends on what you mean by testers. GMail was in beta for a long time and has therefore had a tester:programmer ratio in the millions. – blueberryfields Jan 05 '12 at 17:19

2 Answers


From *Software Estimation* by Steve McConnell (ch. 21.1, pp. 237-238):

  • Common business systems (internal intranet, management information systems, etc.) - 3:1 to 20:1 (often no test specialists at all)
  • Common commercial systems (public internet, shrink-wrap, etc.) - 1:1 to 5:1
  • Scientific and engineering projects - 5:1 to 20:1 (often no test specialists at all)
  • Common systems projects - 1:1 to 5:1
  • Safety-critical systems - 5:1 to 1:2
  • Microsoft Windows 2000 - 1:2
  • NASA Space Shuttle Flight Control Software - 1:10

The data in here is based on observations of organizations that my company and I have worked with in the past 10 years.

As you can see from the data, ratios vary significantly even within specific kinds of software. This is appropriate, because the ratio that will work the best for a specific company or specific project will depend on the developmental style of the project, the complexity of the software being tested, the ratio of legacy code to new code, the skill of the testers compared to the skill of developers, the degree of test automation, and numerous other factors.

Péter Török
  • +1 Just to clarify. These are *observations* of typical ratios, rather than *recommendations* of optimum ratios that have somehow been measured to be effective? – MarkJ Sep 28 '12 at 11:54
  • @MarkJ, yes, these are observations. And from the explanation quoted, I understand that general recommendations would not make much sense, as the ratio varies significantly even between projects of the same type. There is just too much difference between companies, teams and projects. – Péter Török Sep 28 '12 at 12:26

My opinion would be that it depends very much on the nature of the project, because some projects lend themselves to more automated testing. For example, a hosted accounting package does not need to contend with a large number of environments, and the tests it needs tend to be complicated scripted scenarios provided by the business side.

On the other hand, if you're developing an Android app, you'll probably want people to physically test on a wide variety of common phones, which isn't easy to automate.

I've worked at two places that I felt got the tester ratio right. My current company has no testers -- everything is covered by automated tests (JUnit, RSpec, Selenium, or Capybara). This works for our processes and culture.
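For what it's worth, the "no testers, everything automated" setup mostly boils down to large suites of small checks like the one below. This is a hypothetical sketch in Python's `unittest` rather than the JUnit/RSpec stack named above, and `apply_discount` is a made-up business rule, purely for illustration:

```python
import unittest

def apply_discount(total_cents: int, percent: int) -> int:
    """Hypothetical business rule: flat percentage discount, rounded down."""
    return total_cents - (total_cents * percent) // 100

class ApplyDiscountTest(unittest.TestCase):
    # The kind of regression net a team without dedicated testers relies on.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(1000, 10), 900)

    def test_zero_percent_is_identity(self):
        self.assertEqual(apply_discount(1234, 0), 1234)
```

Run with `python -m unittest` in CI; the point is that checks like these replace a manual test pass, which is what makes a zero-tester ratio workable at all.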

A previous company had about 1 tester per 4 engineers writing code. This worked well because scheduling allowed some of the testers to float between projects depending on what part of the cycle each project was in, so we'd end up with 1 tester per 2-3 coders while stabilizing the code.

Some of the QA was in India, which was also nice: we'd finish work at the end of the day, and when we came in the next morning we'd have fresh feedback from QA.

Kevin Peterson