What’s the Right Ratio Between QA Testers and Developers?

I get asked this question all the time.  And I think that the answer is both obvious and not at the same time.  In any case, it’s not possible to answer what the ratio of developers to QA testers should be.  Here’s why.

Let’s take a look at a flowchart of how software development really occurs.

I know that there are differences in this diagram based on whether we are using “waterfall”, “Scrum”, “Kanban”, and so forth.  But the differences usually amount to not explicitly acknowledging the stage of the process we are in, or subsuming it in another process step.  For example, in eXtreme Programming we tend to design, code, and functionally test all in one step: the unit tests of Test Driven Development serve as the functional test in isolation, and the “refactor” step in “red/green/refactor” is how we accomplish “design a solution”.

Where are the testers in that diagram?

That is by no means a trivial question, and it does vary based on the operating model we use for software development.  In traditional waterfall development, testing usually occurs by role.  Developers are typically assigned the responsibility to functionally test what they code, usually called “unit testing”.  In fact, most of what I’ve seen in the field is that those “unit tests” are simplified functional “spot tests” that never make it into any regression suite, rather than the eXtreme Programming style of unit tests used in Test Driven Development.  These spot tests are typically fast, throwaway, and unrepeatable.  The people in QA are sometimes called upon to do this testing.  The best that can be said is that some version of the code worked for its intended purpose at the point at which the test passed.

Traditional waterfall-style testing does employ QA testers at the “non-functional and regression tests” stage.  QA will typically write long test plans intended to ensure that new code not only delivers the functionality desired, but also does not adversely impact previously developed functionality or non-functional aspects of the software, such as speed, capacity, and so on.

In a traditional world, market tests are typically done by neither developers nor QA personnel.  That testing occurs only once the product has been released to the marketplace, and it is performed by the customers of the product.  Unhappily, the results show up as missed market expectations, arriving after any realistic hope of fixing the software has passed, because the product is already in the marketplace.

If we are looking for better quality, both in terms of assuring that the software is written correctly and that it is the right code to solve the problems customers need solved, you’re going to have to do two things.  You’re going to have to push testing forward, so the feedback loops close sooner.  And you’re going to have to automate as much of the testing as possible, so that you can attack the problem iteratively rather than spend huge amounts of labor fixing a product that has drifted from customer expectations.  I can’t count the number of times I’ve seen product teams cut the testing done after the code was developed so the product could ship on a previously promised date, because the labor and time needed to exhaustively regression test the code didn’t fit the committed timelines.  Delivering buggy code to your marketplace is an excellent way to help your competitors capture your market.  So is delivering the wrong solution.  This means that shortening the cycle time in this diagram is critical!

This diagram is way too simplistic

Absolutely!  In Lean terms, it is a single stream flow of one piece of required functionality within a product that a businessperson knows must be presented to the marketplace as a “minimum viable product”, a “minimum marketable feature”, or even something smaller.  Depending on the process you use for software development, you may be tempted to have as many of these functionality implementations going on at once as possible.  But that increase in WIP (work in process) causes its own set of problems.  If one piece of functionality being developed depends on another piece of functionality’s code, then dependencies start to rear their ugly heads.  Having one piece of functionality take longer to develop than originally estimated can have compounding and confounding ill effects on the code that depends on it.  The same is true of a solution that doesn’t “meet the mark” on the non-functional and integrated regression testing aspects.  Note that by waiting until “we are ready to release” before we start this type of testing, we potentially grow an enormous WIP of unreleased code.  Any failure at this stage has a lot more code to fix than if we were able to stay with an ideal single stream flow.
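One way to make the cost of growing WIP concrete is Little’s Law from queueing theory, which Lean practitioners often use: average lead time equals average WIP divided by average throughput. The sketch below assumes a hypothetical team that completes 5 work items per week (the numbers are illustrative, not from the article):

```python
# Little's Law: average lead time = average WIP / average throughput.
# Assumed figure for a hypothetical team; not from the article.
throughput = 5  # work items completed per week

for wip in (5, 15, 50):
    lead_time = wip / throughput  # weeks each item spends in the system, on average
    print(f"WIP={wip:3d} items -> average lead time {lead_time:.1f} weeks")
```

Letting fifty items pile up unreleased means each one waits, on average, ten weeks for feedback instead of one; the same arithmetic explains why a failure found late has so much more code behind it.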

Flow and limiting WIP are both important aspects of Lean business. If you’re still uncertain how Agile and Lean complement each other, read our white paper “When You’re Agile, You Get Lean.”


“Rules of thumb” are meaningless

You can find many rules of thumb for the ratio of QA to developers if you do a Google search on the words in the title of this blog entry.  You will find people talking about 10 developers to 1 QA tester, 3 to 1, 1 to 1, and many others.  My feeling is that none of these can possibly be correct.  They can’t be right because they don’t take into account the abilities of both the developers and the testers.  Highly capable developers may be 10 or more times quicker at producing the same code than less capable team members, and the same holds true for QA testers.  I had a conversation with Fred George a little over a year ago on this topic, and he recounted an assignment where he observed that 6 testers were needed to absorb the work of one highly productive developer.

Rather than going down that rule-of-thumb route, I would urge you to consider getting closer to a single stream flow of the individual things the software needs to do, and employing the “Three Amigos” model that George Dinwiddie explains in Better Software, November/December 2011.  Here, the BAs, QAs, and developers collaborate at the start to write automated tests that serve as the functional requirements for the work to be done.  If we keep the rate of production of these collaboratively developed tests in line with the rate at which code is produced to satisfy them, we never have to fear that something is out of balance.  If we find, perhaps through a Kanban analysis, that we can’t produce enough tests to keep our developers supplied with work, we may find that we don’t have enough BA types or QA types available, and can adjust accordingly.
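To make the “tests as functional requirements” idea concrete, here is a minimal sketch of what the three amigos might write together before any production code exists. The feature, function name, and business rule are all illustrative assumptions, not something from the article:

```python
# Hypothetical "Three Amigos" artifact: an executable requirement written
# before the implementation. The shipping rule below is an assumed example.

def shipping_cost(order_total):
    """Implementation written afterward to satisfy the agreed tests."""
    # Rule agreed up front by BA, QA, and developer:
    # orders of $50 or more ship free; otherwise a flat $5 fee applies.
    return 0.0 if order_total >= 50 else 5.0

# The acceptance tests ARE the functional requirement:
assert shipping_cost(49.99) == 5.0   # just under the threshold pays shipping
assert shipping_cost(50.00) == 0.0   # at the threshold, shipping is free
assert shipping_cost(120.0) == 0.0   # well above the threshold, still free
print("acceptance tests pass")
```

Because the tests exist before the code, they feed straight into the automated regression suite instead of being throwaway spot checks, and the rate at which such tests are produced is something a Kanban board can measure.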

And, yes, there will always be a place for QA exploratory testing on integrated code.  But its findings should be captured in regression suites that are automated and repeatable.
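As a small illustration of capturing an exploratory finding, suppose a tester poking at integrated code discovers that an empty username crashes account lookup. The function and scenario here are invented for the example; the point is that the discovery becomes a permanent, repeatable test rather than a one-off manual check:

```python
# Hypothetical scenario: exploratory testing found that an empty username
# crashed account lookup. The names below are illustrative assumptions.

def find_account(username):
    # The fix guards the edge case the exploratory tester discovered.
    if not username:
        return None
    return {"username": username}

# The finding, encoded as an automated regression test that runs forever after:
assert find_account("") is None               # the bug can never silently return
assert find_account("ada")["username"] == "ada"  # normal lookup still works
print("regression suite passes")
```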

Flow is the most important thing to a business

You may be asking yourself at this point, “Yes, I get it.  I need to reduce WIP, and not worry so much about rules of thumb to get software done.  But what about that test in the market?  How do we get better at that?”  The answer there is easy: release more often!  That will take your continuous integration solution to a whole new vista, continuous delivery, and engender a whole new set of problems that are wonderful to have, such as “how quickly can my market absorb new features, and how can I get them to accept things in a more laminar-flow fashion?”

Businesses that produce software do it for a purpose.  Usually a pecuniary purpose.  Understanding why excessive WIP is so dangerous may put the onus on the business to incorporate smaller product tests, in the form of more frequent releases to the marketplace.

So, the right answer is?

Since there is no right answer to the question of “what’s the right ratio”, let’s invoke a Kobayashi Maru sort of solution.  For those of you who never saw, or have forgotten, Star Trek II: The Wrath of Khan, the Kobayashi Maru was a simulation exercise that Starfleet put all of its officers through to test whether they could save the civilians trapped in a disabled ship in the Klingon Neutral Zone.  Because of the constraints involved, no cadet at Starfleet Academy had ever passed the test.  Even the legendary James T. Kirk failed the test twice, only passing it the third time by reprogramming the simulator.

We can’t win this battle for correct ratios between QA and developers with simple rules of thumb.  But we can fall back on the values and principles of Agility, just as Kirk fell back on changing the rules of a game that guaranteed failure.  We need to focus on the team’s people and interactions (specifically QAs, BAs, and developers) doing as much work as possible up front to increase quality (“the most efficient and effective method of conveying information to and within a development team is face-to-face conversation”).  Keep WIP sizes small (“working software is our primary measure of progress”).  Test our code not just during development (“continuous attention to technical excellence and good design enhances agility”), but get it into the hands of customers quickly and often (customer collaboration).

Let’s change the conversation from one asking for rule-of-thumb ratios into one that asks for collaborative development, better quality, and faster market realization of smaller and smaller chunks of valuable software.  We can measure where WIP is causing resource constraints, and apply the traditional five-step Theory of Constraints process to fix things.  In other words, let’s turn a faulty question, whose answer is bound to fail in practice, into a new quest to deliver, measure, and deliver more of what works.
