Rapid Test Case Prioritization

It is a common theme in software projects, and in testing in particular, that there is never enough time to do everything you need to do. Given the limited time available, how can you know that you did the best job testing? There are always defects left unfound when the application is released. In testing, the objective is to minimize risk by improving product quality, and this is done in part by constructing a specific set of test cases that put the application through its paces.


IEEE Standard 610 (1990) defines a test case as:

  1. A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
  2. (IEEE Std 829-1983) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.

Of course you will find it difficult to execute all your test cases on each build of the application during the project lifecycle. But how will you know which test cases must be executed for each build, which should be executed, and which could be executed if time allows?

Prioritize Your Test Cases

Your application doesn’t have to be perfect, but it does need to meet your intended customers’ requirements and expectations. To understand the expectations for your project, you need to determine what is important about your application, what the goals are, and what the risks are.

Sue Bartlett discusses this exercise in detail in “How to Find the Level of Quality Your Sponsor Wants”. She comments in that article that: “When we do communicate quality goals ahead of the detailed planning, design or coding, we have a better chance to avoid a quality mismatch at the end. That means meeting schedules, covering costs, and making a profit will have a better chance of success.”

For the purposes of test planning, the organization and scheduling of your test cases for test execution in the context of your project’s build schedule will help achieve these goals. As part of this organization, we are concerned with the prioritization of individual test cases. Grouping your test cases by priority will help you to decide what is to be tested for each type of build and therefore how much time is needed. If you have a limited amount of time, you can see what will fit.

Ross Collard in “Use Case Testing” states that: “the top 10% to 15% of the test cases uncover 75% to 90% of the significant defects.”

Test case prioritization will help make sure that these top 10% to 15% of test cases are identified.

How To Prioritize Test Cases

How many times have you looked at your test cases and been able to easily pick out the small subset that matters most? Probably not often. It is really difficult to stop thinking that “all of these are equally important”.

When it comes to test cases, assigning a priority is not easy and is not necessarily static for the duration of the project. However, we can get started by constructing an example prioritization process to address the first-cut of prioritizing the test cases.

Let us assume that you have just finished creating your test cases from your functional specifications, use cases, and other sources of information on the intended behaviours and capabilities of your application. Now it is time to assign each test case a priority.

Test Case Priorities

First, you must decide what your priority levels are and what they imply. For our purposes we will begin with an assumption that there is a parallel between the severity of a defect that we might find and the priority of the corresponding test case.

1 – Build Verification Tests (BVTs): Also known as “smoke tests”, these are the test cases you run first to determine whether a given build is even testable. If you cannot access each functional area or perform the essential actions that large groups of other test cases depend on, then there is no point in attempting those other tests, as they would almost certainly fail.

2 – Highs: These are the test cases that are executed most often to ensure functionality is stable, intended behaviours and capabilities are working, and important error and boundary conditions are tested.

3 – Mediums: This is where the testing of a given functional area or feature gets more detailed and the majority of its aspects are examined, including boundary, error, and configuration tests.

4 – Lows: This is where the least frequently executed tests are grouped. This doesn’t mean that these tests are unimportant, just that they are not run often in the life of the project, such as GUI, Error Message, Usability, Stress, and Performance tests.

We have chosen to group test cases into one of four categories: BVTs, Highs, Mediums, and Lows. The trick now is to figure out which test cases belong to which priority. After all, the priority will indicate which test cases are expected to be executed more often and which are not.

How To Go About Prioritizing

1) Arbitrary Assignment: These first three steps will leave you with an arbitrary grouping of the test cases, based on the idea that if you don’t have enough time to test, at least make sure all the product requirements have been confirmed to do what they are supposed to under assumed good conditions. If you stop to think about what each test case is testing, they all start to seem important, so just:

  1. Label all your Functional Verification (or Happy Path) tests as High Priority.
  2. Label all your Error and Boundary or Validation tests as Medium Priority.
  3. Label all your Non-Functional Verification tests such as Performance and Usability as Low Priority.
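The first-cut assignment above can be expressed as a simple lookup from test type to starting priority (a sketch; the type names are illustrative, not from the article):

```python
# First-cut priority by test type; anything unrecognized defaults to Medium.
FIRST_CUT = {
    "functional": "High",        # Functional Verification / Happy Path
    "error": "Medium",           # Error and Boundary / Validation
    "boundary": "Medium",
    "non-functional": "Low",     # Performance, Usability, etc.
}

def assign_first_cut(test_type):
    """Return the arbitrary starting priority for a test of the given type."""
    return FIRST_CUT.get(test_type.lower(), "Medium")
```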

2) Promotion and Demotion: Not all functional tests are as important as each other, and the same is true for the Error and Boundary and Non-Functional tests. Think about the importance of each test and how often you would want to check this functionality relative to others of the same priority, considering the quality goals and requirements of your project.

  1. Divide the Functional Verification tests into two groups of Important and Not Quite As Important.
  2. Demote the “Not Quite As Important” Functional Verification tests to Medium Priority.
  3. Divide the Error and Boundary tests into two groups of Important and Not Quite As Important.
  4. Promote the “Important” Error and Boundary tests to High Priority.
  5. Divide the Non-Functional tests into two groups of Important and Not Quite As Important.
  6. Promote the “Important” Non-Functional tests to Medium Priority.
  7. Repeat the divide and promote/demote process for each set of High, Medium, and Low Priority test cases until you reach a point where the number of test cases being moved between priorities has become minimal.
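One divide-and-move pass of the steps above could be sketched as follows (a minimal illustration; the "importance" judgment is supplied by the caller, reflecting the project's quality goals, and High-to-BVT promotion is deliberately excluded since BVT identification is a separate, later step):

```python
ORDER = ["BVT", "High", "Medium", "Low"]

def promote(priority):
    """Move a priority one level up (e.g. Medium -> High)."""
    return ORDER[max(ORDER.index(priority) - 1, 0)]

def demote(priority):
    """Move a priority one level down (e.g. High -> Medium)."""
    return ORDER[min(ORDER.index(priority) + 1, len(ORDER) - 1)]

def rebalance(cases, is_important):
    """One pass: demote the less important High tests, promote the
    important Medium and Low tests. `cases` maps test name -> priority.
    Returns the new mapping and how many test cases moved."""
    moved, result = 0, {}
    for name, priority in cases.items():
        new = priority
        if priority == "High" and not is_important(name):
            new = demote(priority)
        elif priority in ("Medium", "Low") and is_important(name):
            new = promote(priority)
        if new != priority:
            moved += 1
        result[name] = new
    return result, moved
```

Repeating the pass until `moved` approaches zero mirrors step 7 above.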

3) Identify Build Verification Tests: Now, which tests must be checked with every build to ensure that the build is testable and ready for the rest of the team to start testing?

  1. Divide the High Priority tests into two groups of Critical and Important.
  2. Promote the “Critical” High Priority tests to BVT Priority.

Note: Do not identify BVTs first! BVTs are a selection of High Priority test cases that are determined to be critical to the system and to testing.

At the end of this process, a rule of thumb is to check that the percentage distribution of the priorities is along the lines of: BVTs 10-15%, Highs 20-30%, Mediums 40-60%, and Lows 10-15%.
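This rule-of-thumb check is easy to automate. A minimal sketch, assuming the four priority labels used above:

```python
from collections import Counter

# Rule-of-thumb target ranges, as percentages of all test cases.
TARGET_RANGES = {"BVT": (10, 15), "High": (20, 30),
                 "Medium": (40, 60), "Low": (10, 15)}

def distribution(priorities):
    """Percentage of test cases at each priority level."""
    counts = Counter(priorities)
    total = len(priorities)
    return {p: 100.0 * counts.get(p, 0) / total for p in TARGET_RANGES}

def out_of_range(priorities):
    """Return the priority levels whose share falls outside the rule of thumb."""
    dist = distribution(priorities)
    return [p for p, (low, high) in TARGET_RANGES.items()
            if not low <= dist[p] <= high]
```

An empty result from `out_of_range` means the first cut is roughly in line with the guideline.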

When promoting and demoting test cases, consider how frequently the user will require this feature or functionality, and how critical the behaviour is to the user’s day-to-day or month-end activities. Robyn Brilliant provides a list in “Test Progress Reporting Using Functional Readiness” that you can apply when considering test cases for promotion or demotion:

Using a scale from one to five, with one being the most severe and five the least severe, quantify the Reliability Risk as follows:

  1. Failure of this function would impact customers.
  2. Failure of this function would have a significant impact to the company.
  3. Failure of this function would cause a potential delay to customers.
  4. Failure of this function would have a minor impact to the company.
  5. Failure of this function would have no impact.
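As a hedged sketch of how this one-to-five scale might feed promotion and demotion decisions (the thresholds below are an assumption for illustration, not part of Brilliant's article):

```python
def suggest_move(risk_score):
    """Map the reliability risk score (1 = most severe, 5 = least severe)
    to a promotion/demotion suggestion. Thresholds are illustrative."""
    if risk_score <= 2:    # customer or significant company impact
        return "promote"
    if risk_score >= 4:    # minor or no impact
        return "demote"
    return "keep"          # potential delay: leave at current priority
```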

This and similar scales can aid you in arriving at your final first cut of test case priorities.

Summary

This is a simplified example of a test case prioritization process. However, it can serve you well as a basis for rapidly organizing your test cases and for mapping your test schedule, effort, and the timing of test case execution into the project plan.

Remember, how you prioritize your testing tasks and the test cases to be executed will depend on where you are in your project cycle. It is likely that you will re-prioritize your test cases as you move towards release and as you determine by investigation and observation where the risks and defects are manifesting. Establishing your testing objectives up front for each phase and making sure they are reflected in the individual priorities of your test cases will make your life a lot easier when it comes time to explain and execute your plan.

Finally, having prioritized test cases also gives you a good starting place for a potentially pending automation project. For example: automate the BVT Priority test cases, measure the benefits, improve the automation, then automate the High Priority test cases, and so on.


About Trevor Atkins

Trevor Atkins has been involved in hundreds of software projects over the last 20+ years and has a demonstrated track record of achieving rapid ROI for his customers and their business. Experienced in all project roles, Trevor has focused primarily on the planning and execution of projects, and on the improvement of the same, so as to optimize quality against constraints for the business. LinkedIn Profile