Philosophy of Defect Resolutions

One of the foundational processes in any company that produces software is the defect lifecycle. It is primarily this process that defines how Development and Testing interact around an issue or defect report.

There is typically an emphasis on how the severity and/or priority of a defect is set and tracked. That rating ties to the likelihood of the defect being encountered in the field and to the impact or cost of that encounter, clearly reflecting a focus on reducing External Failures as they relate to the Total Cost of Quality within the organization. This is certainly an important area in which to look for and implement improvements, given the potential for a high return on investment.


However, the vision of a test manager must extend to potential improvements in all areas included in the formula for the Total Cost of Quality (Continuous Quality Improvement and Outsourcing) and in the definition of the interfaces between the different organizational groups and the supporting processes. The test manager must also look to developing the capability of the test team and to working with the individuals within that team so that they can be at their most effective.

Definition of a Defect

“A software error is present when the program does not do what its end user reasonably expects it to do.” (Myers, 1976).

“The extent to which a program has bugs is measured by the extent to which it fails to be useful. This is a fundamentally human measure.” (Beizer, 1984).

Attributes of a Defect

There are many attributes that can be ascribed to a defect and used to classify, organize, and analyze the associated issue. Aside from a unique Identifier (DefectID), a Description of the issue with Reproduction Steps, and Expected and Actual Results, a defect report might have some of the following (a sketch of such a record in code follows the list):

  • Status
  • Assigned To
  • Priority
  • Severity
  • Functional Area
  • Feature
  • How Found
  • Type
  • Environment
  • Resolution
  • Opened Version
  • Opened By
  • Opened Date
  • Related Test case(s) or Requirement(s)
  • History or Audit Trail
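
As a concrete illustration, here is a minimal sketch of how such a defect record might be modelled in code. The field names simply mirror the list above; the lifecycle states in the Status enum are assumptions for illustration, not a prescription for any particular tracker.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Status(Enum):
    """Hypothetical lifecycle states; your own tracker will define its own."""
    NEW = "New"
    ASSIGNED = "Assigned"
    RESOLVED = "Resolved"
    CLOSED = "Closed"


@dataclass
class Defect:
    """A defect record carrying the attributes listed above."""
    defect_id: str
    description: str
    reproduction_steps: str
    expected_result: str
    actual_result: str
    status: Status = Status.NEW
    assigned_to: str | None = None
    priority: int | None = None
    severity: int | None = None
    functional_area: str | None = None
    feature: str | None = None
    how_found: str | None = None
    defect_type: str | None = None
    environment: str | None = None
    resolution: str | None = None
    opened_version: str | None = None
    opened_by: str | None = None
    opened_date: date | None = None
    related_items: list[str] = field(default_factory=list)  # test cases / requirements
    history: list[str] = field(default_factory=list)        # audit-trail entries
```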

When you include these attributes in the recording of defects, and as part of your defect lifecycle, you can leverage the information to make observations and draw conclusions, typically via metrics (Establishing Effective Metrics).

Using Defect Resolution

One of the classifications that can be applied to a defect is its resolution.

Not all defects are simply fixed by development. Developers may resolve a defect as “not a bug”, “by design”, “not reproducible”, “a duplicate”, and so on, as the reason for moving the defect out of their queues.

Cem Kaner suggests in “How To Win Friends, And Stomp Bugs” the following list of choices for the resolution of a defect in your defect database:

  • Fixed: The programmer says it’s fixed. Now you should check it.
  • Cannot reproduce: The programmer can’t make the failure happen. Add details, and notify the programmer.
  • Deferred: It’s a bug, but we’ll fix it later.
  • As Designed: The program works as it’s supposed to.
  • Need Info: The programmer needs more info from you.
  • Duplicate: This is just a repeat of another bug report (cross-reference it on this report).

The resolution field can contain other options in addition to the above, such as Enhancement, Spec Issue, Not Implemented, and Third Party.
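
A resolution field like this might be captured as a small controlled vocabulary rather than free text. Here is a sketch combining Kaner’s list with the additional options above; the identifier names are assumptions, not any particular tool’s schema:

```python
from enum import Enum


class Resolution(Enum):
    """Kaner's suggested resolutions plus the additional options above."""
    FIXED = "Fixed"
    CANNOT_REPRODUCE = "Cannot Reproduce"
    DEFERRED = "Deferred"
    AS_DESIGNED = "As Designed"
    NEED_INFO = "Need Info"
    DUPLICATE = "Duplicate"
    ENHANCEMENT = "Enhancement"
    SPEC_ISSUE = "Spec Issue"
    NOT_IMPLEMENTED = "Not Implemented"
    THIRD_PARTY = "Third Party"
```

Keeping the set small, as the next paragraph suggests, is a deliberate design choice: too many choices dilute the usefulness of any metrics built on top of them.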

In this field, you can capture much of the philosophy or underlying meaning of the defect resolution process. The core of this philosophy should center on getting a more detailed picture of the defect counts and on allowing an analysis of that picture for more accurate and useful metrics (without providing too many choices).
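
As an illustration, even a simple tally by resolution (a sketch assuming defect records like the Defect class above) already gives a more detailed picture than a raw defect count:

```python
from collections import Counter


def resolution_counts(defects):
    """Tally resolved defects by their resolution value."""
    return Counter(d.resolution for d in defects if d.resolution is not None)

# e.g. Counter({'Fixed': 120, 'Duplicate': 18, 'As Designed': 9, ...})
```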

For example, if a defect is given the resolution:

  • “Fixed” implies that there really was a problem in the code and it has been addressed now.
  • “As Designed” implies that the tester may not have the latest information about the functionality OR may not have the necessary understanding of the product.
  • “Enhancement” implies that the tester has not found a defect per se, but that the issue is a new feature or feature-modification request. In other words, this is not a defect, but the request has been implemented in the current release (as opposed to those that have been Deferred). This information is valuable for the future, as these records can then be distinguished from the others for easy collection and inclusion in the requirements document and help files.
  • “Cannot Reproduce” implies that there is not enough information in the report for the developer to be able to reproduce the defect, and that the tester needs to clarify/add information or Withdraw the defect. There may be a hardware setup or set-up preconditions required for seeing this defect that need to be added to the report, or pointed out to the developer if already in the report (e.g., the build number or environment information). This resolution should appear only as a transitory value before the true resolution is set.

In each of the examples above, you could come to a similar view just by looking at the surface of the defect reports. However, your experience no doubt also includes many instances (such as under deadline pressure) where some resolutions are inappropriately used as a way for developers to clear out their queues. In these cases, it often becomes the job of the tester, through the mechanism of the defect lifecycle, to make sure the defects reach the right audience and aren’t simply let go.

“Need Info” and “Cannot Reproduce” are examples of resolutions that can create a lot of churn between the developer and the tester. Examining how many defects receive these resolutions, and the reasons why, can provide good insight into training opportunities within the team to reduce this rework. Improvements to the application, or tools to help with diagnosing problems, may also suggest themselves.
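
One way to quantify that churn, sketched here under the assumption that each entry in a defect’s history/audit trail records the resolution set at that step:

```python
# Resolutions that typically signal developer/tester rework (an assumed set).
CHURN_RESOLUTIONS = {"Need Info", "Cannot Reproduce"}


def churn_count(defect):
    """Count history entries where the defect bounced back for rework."""
    return sum(1 for entry in defect.history if entry in CHURN_RESOLUTIONS)
```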

As implied above, a defect resolved “As Designed” may be one that should not have been logged. But what if the design itself is flawed? Referring back to the definition of a defect can be helpful in deciding the next step.

A defined and visible defect lifecycle process provides support for improved defect resolution. For example, if a defect is resolved “As Designed” or “Deferred”, perhaps that defect is then assigned to the business analyst responsible for the functional area to confirm the resolution before it goes back to the tester for review, or it might even be escalated to the product manager.
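
A routing rule like that can be made explicit in the lifecycle. A minimal sketch, where the role names are assumptions for illustration:

```python
def next_reviewer(defect):
    """Route a resolved defect to the right reviewer before closure."""
    if defect.resolution in ("As Designed", "Deferred"):
        # Have the business analyst for the functional area confirm the
        # resolution first; escalate to the product manager on disagreement.
        return "business_analyst"
    return "tester"
```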

“Duplicate” defects could indicate a higher likelihood of the defect being encountered in the field, potentially poor organization in terms of overlapping resource effort, poor processes in terms of ensuring the majority of duplicates are caught before going to development, or even poor training of the resources doing the work.

This is another example of where the defect lifecycle can help ensure there is a stronger chance of reducing the effort spent on duplicate defects. Before assigning a defect to a developer, perhaps the tester reviews all currently open defects (or searches for similar defects via keywords). If the defect is determined to be a duplicate, the tester may then add any additional information to the existing defect or, if the differences are of a greater degree, log a new defect record and relate the two together. Following such a process may mean that duplicate defects still get logged, but there is now a valid process behind how they get logged, and the testers avoid creating extra work for development, and avoid looking like they aren’t taking enough care in their work.
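
A sketch of the kind of keyword pre-check a tester might run before logging. The naive word-overlap match and the threshold are assumptions for illustration; in practice the tracker’s own search would be used.

```python
def possible_duplicates(new_summary, open_defects, min_shared_words=3):
    """Flag open defects whose descriptions share keywords with a new report."""
    new_words = set(new_summary.lower().split())
    matches = []
    for defect in open_defects:
        shared = new_words & set(defect.description.lower().split())
        if len(shared) >= min_shared_words:
            matches.append((defect.defect_id, sorted(shared)))
    return matches
```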

Summary

Just as defect reports provide valuable insight into the common classes or types of defects that have occurred in the past, analysis of other attributes can provide useful information for improvement. Without specific tracking of defect resolutions, the true defect find rate and the clustering of defects in the code are obscured by duplicates, not-repros, by-designs, and enhancement requests.
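
A sketch of how those non-defect resolutions might be filtered out to recover the true find rate; the excluded set is an assumption based on the resolutions discussed above and should be aligned with your own vocabulary:

```python
# Resolutions that do not represent real, distinct defects in the code.
NON_DEFECT_RESOLUTIONS = {"Duplicate", "Cannot Reproduce",
                          "As Designed", "Enhancement"}


def true_find_count(defects):
    """Count only the reports that reflect real, distinct defects."""
    return sum(1 for d in defects
               if d.resolution not in NON_DEFECT_RESOLUTIONS)
```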

Recording and analyzing this information helps ensure you are able to investigate and address the root causes of these Quality Costs. An adaptive approach to testing processes, communication, and training goes a long way toward showing that you have a strong and capable test team. Including a resolution attribute as part of your defect database, and more specifically within the definition of your defect lifecycle, can help achieve this.

For another view on this, check out my presentation on SlideShare: “Optimizing the Defect Lifecycle – with Resolution”.

About Trevor Atkins

Trevor Atkins has been involved in hundreds of software projects over the last 20+ years and has a demonstrated track record of achieving rapid ROI for his customers and their business. Experienced in all project roles, Trevor’s primary focus has been on the planning and execution of projects, and on improvement of the same, so as to optimize quality versus constraints for the business. LinkedIn Profile