There's been a disturbing trend in the gaming industry lately. More and more publishers are scaling back their test organizations to a handful of test leads, then hiring waves of contingent testers in the final eight weeks of a project. This is a bad idea.
On paper, it seems to make sense. You can get a contingent tester for about $320/week on average, while a tester with three years' experience will run you $600/week, and a test lead will run you just over $900/week. (For the sake of this argument, we'll measure coverage in contingent manweeks, and count a test lead or a regular tester as two contingents for purposes of measuring productivity. These figures also assume that nobody ever works overtime, which, as we all know, is completely unrealistic; the numbers get a lot scarier once you factor in the OT.)
Some companies put a test lead on the project from the beginning, add three regular testers about six months from ship, and bring on about six contingents during the last four months. On a one-year project, that works out to about $115,000 for 324 contingent manweeks of testing. Under the "new" approach, a test lead is assigned six months prior to ship, and about 34 contingents are brought on eight weeks before ship. That amounts to about $111,000 for the same number of contingent manweeks, a savings of about $4,000 on one project.
The "savings" scale with the length of the project. On a two-year project, given the rules above, the contingent-heavy approach wins $144,000 to $163,000, assuming you bring in 47 contingents to match the manweeks.
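The comparison in the last two paragraphs is easy to sanity-check. Here's a minimal sketch in Python using the weekly rates from the post; the exact week counts (including roughly eleven weeks for the traditional system's contingents) are my assumptions, chosen so the totals land near the post's figures:

```python
# Back-of-the-envelope model of the two staffing approaches.
# Rates come from the post; week counts are assumptions.
# A lead or regular tester counts as two contingents of coverage.

LEAD, TESTER, CONTINGENT = 900, 600, 320  # $/week

def totals(staff):
    """staff: list of (headcount, weeks, weekly_rate, coverage_multiplier)."""
    cost = sum(n * wks * rate for n, wks, rate, _ in staff)
    manweeks = sum(n * wks * cov for n, wks, _, cov in staff)
    return cost, manweeks

# One-year project
old_1yr = [(1, 52, LEAD, 2), (3, 26, TESTER, 2), (6, 11, CONTINGENT, 1)]
new_1yr = [(1, 26, LEAD, 2), (34, 8, CONTINGENT, 1)]

# Two-year project: same end-game staffing, the lead is just on twice as long
old_2yr = [(1, 104, LEAD, 2), (3, 26, TESTER, 2), (6, 11, CONTINGENT, 1)]
new_2yr = [(1, 26, LEAD, 2), (47, 8, CONTINGENT, 1)]

for label, old, new in [("1yr", old_1yr, new_1yr), ("2yr", old_2yr, new_2yr)]:
    (oc, om), (nc, nm) = totals(old), totals(new)
    print(f"{label}: old ${oc:,} / {om} mw  vs  new ${nc:,} / {nm} mw")
    # 1yr: old $114,720 / 326 mw  vs  new $110,440 / 324 mw
    # 2yr: old $161,520 / 430 mw  vs  new $143,720 / 428 mw
```

Run as written, the totals land within a few thousand dollars of the figures above; any gap comes entirely from how many weeks you assume for each role.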
So on paper, these sorts of decisions make sense from an accounting standpoint, until you factor in two items: facilities costs and software development.
First off, it's significantly cheaper to come up with space and equipment for ten people than it is for 35. Space, power, computers, consoles, TVs, monitors, snacks: it all adds up.
Second, let's say that after 324 manweeks on a project, your test team has found a total of 5,000 bugs. Under the initial system, those bugs would have been found and fixed throughout the development cycle. Under the "cheaper" system, all of those bugs would hit the development team at the very end.
Massive numbers of showstopper bugs at the end of a project lead to slips, and that's where the savings erode. For every week of slip under the initial system, testing costs increase by about $4,600 in manpower alone. Under the contingent-heavy system, you're paying everyone for that extra week, so your 34 contingents and your test lead just added $11,800 to your project's cost. Larger teams, like the 47-contingent team mentioned earlier, add $16,000 a week.
Run the numbers and the margin is thin: on a one-year project, a slip of even a single week wipes out the $4,000 in savings, and on a two-year project the $19,000 cushion is gone in under two weeks. Of course, some companies decide to ship and patch rather than slip, but that carries reputation and support costs of its own.
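The slip arithmetic can be checked in a few lines. A quick sketch, using the rates from the post and assuming the traditionally staffed project would not have slipped at all:

```python
# Per-week cost of a slip for each test team (weekly rates from the post).
LEAD, TESTER, CONTINGENT = 900, 600, 320

old_weekly = LEAD + 3 * TESTER + 6 * CONTINGENT  # $4,620  -- the post's ~$4,600
new_weekly = LEAD + 34 * CONTINGENT              # $11,780 -- the post's ~$11,800
big_weekly = LEAD + 47 * CONTINGENT              # $15,940 -- the post's ~$16,000

# Weeks of slip that wipe out the paper savings, assuming only the
# contingent-heavy project slips:
print(4_000 / new_weekly)   # one-year project: well under one week
print(19_000 / big_weekly)  # two-year project: a week and change
```

Even under more forgiving assumptions, such as charging the baseline project $4,620 for each slipped week too, the break-even stays under two weeks on the two-year project.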
So by scaling back your test organizations, what are you really saving?