I had an interesting conversation today with a gentleman who actually seemed to feel that my Microsoft testing experience could be a hindrance to me in the future. It was interesting because he was the first person who had ever viewed that experience negatively, but after continuing the conversation, not only did I fully understand where he was coming from, but it also raised plenty of questions in my own mind.
First, some background. At Microsoft, testing has a level of power that is unheard of outside, in "the real world." While most test teams are kept in check by effective test leads and managers, some testers tend to abuse their power and use the bug database as a bully pulpit for design changes. On top of that, the process guides everything. There is a very rigid flow of specs, plans and test cases that has to be gone through for most items. The practical upshot of this power and process is that the test team becomes a laser-focused tool dedicated to beating the living shit out of your product and finding as many bugs as humanly possible in the short amount of time that it has.
However, there's a limitation to the Microsoft process, and it's a fairly major one...what happens if the project changes dramatically tomorrow? The Microsoft testing process is ideally suited to a waterfall development system, but it does not adapt well to more iterative development methodologies. The level of prep-work done for a Microsoft-style testing effort can turn out to be wasted after a single day of pair programming on the core of the system during a Scrum sprint.
This led to the big question. We have agile development systems. We even have agile content creation systems. Where is our agile testing system, our Extreme Testing? Most testing systems are based on the work done by Kaner. What about Whittaker? His work streamlines testing, but does streamlining really make us more agile? The difficult part of all of this is that while the definition of the program can change, our duty toward the program remains the same: ensuring that it works. That duty requires at least some level of preparation and planning.
Personally, I shoot for a looser plan based on the milestone deliverables and weekly goals, and that tends to work quite well, but it still relies heavily on the work from previous weeks building toward the end. This doesn't let me turn on a dime, but I can course-correct fairly quickly. It's not like the pure MS methodology, where redirecting test is like turning a luxury liner so that it misses an iceberg...
Since I have a fairly QA-oriented audience, I pose the question to you: how do you keep your test department agile in the face of Extreme Programming methodologies?