In my current context I’m required to use a certain tool for managing the test effort; you probably know the tool. I won’t mention it by name unless the vendor pays me to, but it’s the first on the list when non-testers choose to purchase a test tool.
My approach is to use this tool as a checklist manager, and it works really well for that purpose IF you don't over-engineer your use of it.
Forget Requirements – List Test Targets
This means avoiding the requirements-management trap, that is, using the requirements section for anything other than helping you organize the test effort. A test target is something that needs testing: a requirement, user story, feature, benefit, risk, performance criterion, security criterion, etc.
Test targets are in checklist format and organized in a way that makes it easy to do the next step: creating the playbook.
In my current project, I have device specifications (temperature operating range, voltage operating range, etc.) that are clear and unambiguous test targets; I'm lucky. We will employ the same concept, though, when we need to test business processes.
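To make the idea concrete, here is a minimal sketch of test targets as a flat checklist. The field names, IDs, and the in-memory representation are all hypothetical illustrations, not the tool's actual schema. Note what is deliberately missing: there is no release or date field, because targets outlive releases.

```python
# Hypothetical test-target checklist -- IDs and field names are made up
# for illustration. No release/date fields: targets outlive releases.
test_targets = [
    {"id": "TT-01", "target": "Temperature operating range", "kind": "device spec"},
    {"id": "TT-02", "target": "Voltage operating range", "kind": "device spec"},
    {"id": "TT-03", "target": "Order-handling business process", "kind": "business process"},
]

def targets_of_kind(targets, kind):
    """Filter the checklist so a play can be built from related targets."""
    return [t for t in targets if t["kind"] == kind]

print([t["id"] for t in targets_of_kind(test_targets, "device spec")])
# prints ['TT-01', 'TT-02']
```

The same flat structure works for the business-process targets later: add them to the list with a different `kind`, and the device-spec plays are unaffected.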
Forget Test Plans – Create a Playbook
Think of this section as designing and creating a playbook. When we get a new release, we do this. When we test this feature right here, we do that. Organize the plays in the playbook according to the conditions that exist before you run the play.
Back to my current project: when we receive a device firmware update or upgrade, the set of tests (the play) that we should run to accept that new configuration should be easy to find.
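The "organize by precondition" idea can be sketched as a simple lookup keyed by the triggering condition. The condition names and play contents below are invented for illustration; the point is only that the lookup key is a condition, not a release or a date.

```python
# Hypothetical playbook keyed by the condition that exists before you
# run the play. Condition names and steps are illustrative only.
playbook = {
    "firmware update or upgrade received": [
        "smoke-test boot sequence",
        "re-run temperature operating range checks",
        "re-run voltage operating range checks",
    ],
    "new feature under test": [
        "exploratory session on the feature",
        "regression around the areas the feature touches",
    ],
}

def play_for(condition):
    """Look up the set of tests to run when this condition occurs."""
    return playbook.get(condition, [])

print(play_for("firmware update or upgrade received"))
```

When the condition occurs again three releases from now, the same lookup finds the same (by then improved) play.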
The Test Lab is for Test Sessions
Think of this section as recording the results of test sessions. For any given release or time period, pull plays from the playbook, run the play, discover what more you can/should do, update the playbook, run the new/revised plays, rinse, repeat, wipe hands on pants.
At the end of the session, upload your session record so that you have evidence of having tested. Revisions of the playbook are evidence of having learned something about the product and continual improvement.
Current project example again: the February 12 release firmware tests were run over a period of a few days by two engineers, and their session record (a Word document) was uploaded into this section of the test management tool. We will create a new folder for the March 29 release and go back to the source, the playbook, to define the firmware test session that will be held at that time. We will not re-run the checklist in the February 12 release section.
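A sketch of how dated session records relate to the playbook: the date lives on the session and only the session, and each session references a play by name rather than copying its steps. All names, dates, and fields here are illustrative, not the tool's actual data model.

```python
# Hypothetical session records. The date is session metadata; the play
# is referenced by name, never copied. Names and dates are illustrative.
sessions = []

def start_session(date, play, testers):
    """Open a new dated session that reuses an existing play."""
    record = {"date": date, "play": play, "testers": testers, "evidence": None}
    sessions.append(record)
    return record

feb = start_session("Feb 12", "firmware acceptance", ["engineer-1", "engineer-2"])
feb["evidence"] = "feb-12-session-notes.docx"  # the uploaded session record

# The next release reuses the same play; nothing is re-run in the old folder.
mar = start_session("Mar 29", "firmware acceptance", ["engineer-1"])
```

Both sessions point at the same play, so improvements made to the play after February automatically apply in March.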
DRY – Don’t repeat yourself. This works in code, and it works in checklists. Don’t organize your test targets by release since that limits your checklist’s value when used three years from now to test a new release or upgrade. If your list of test targets and your list of set plays look the same, then you really haven’t done any design work and you’re not helping your future self as much as you could be.
Resist the temptation to put a release, time, or date anywhere but as metadata for test sessions. Nothing else needs a date, because test targets and the plays in the playbook should survive every release of the solution until it is decommissioned. They are for all time; well, at least as long as the solution itself survives. I would organize the test sessions by release.
Forget you’re on a project. You’re creating the list of test targets and a playbook for all releases, for the duration of the solution’s useful life. Without this goal – why are you using this tool again?