Defect Tracking – Selecting the Right Tool

Effective defect tracking is a critical activity in any software development project, and having the right tool to do the job is just as critical.

A defect tracking system is a tool for tracking project issues, defects, or bugs. It provides efficient mechanisms for:

  • Recording and communicating such issues to the various team members,
  • Tracking the state of each issue as it moves through the defect lifecycle, and
  • Generating various reports on the data associated with the collected issues for further analysis and interpretation.

In “New Project? Where are the Templates?”, it was mentioned that a guiding principle in the software industry, considering the wide range of project scope and constraints, is to “use the processes and tools appropriate to the size, complexity, and impact of your project”. This is certainly true when selecting the right defect tracking solution for your team.

There are literally dozens of publicly available defect tracking tools to choose from. A total of eighty-eight of them are listed on Danny Faught’s www.testingfaqs.org web site.

While it is still possible to track defects by email or even on paper, off-the-shelf solutions share a common intent: to accelerate defect resolution, improve project organization and resource planning, promote communication within the project team, and increase the transparency of, and track, quality levels for every functional area throughout the project. All are good reasons to invest some time in looking at the available options for capable tool-based solutions.

Steps to Making the Right Choice

A decision has been made to assess potential solutions, either because you don’t have a tool today or because the current solution is not meeting your present or anticipated needs. The following steps outline a process that you can use as a starting point for undertaking an objective tool selection.

Getting Approval for the Assessment

Selecting a defect tracking tool is not necessarily a simple matter, and it will require time and potentially other resources from the organization to complete. A well-described ROI proposal to management will go a long way toward getting their buy-in for this project.

In a document of only a few pages, describe the anticipated ROI of acquiring a new tool, the major steps in the selection process, and why it is important to undertake this process as part of choosing the defect tracking tool. You will also want to consider these questions: what quantifiable goals does the team hope to achieve with the tool, and can these benefits actually be measured?

Quantifying these benefits (and the inability of any current solution to provide them) is important to getting through the first gate: determining how much a defect tracking solution is worth to the team and getting a budget defined for the new tool before you start looking.

Cost will be a big factor, and how much you can invest will have a significant impact on which tools you will be able to consider as your solution. Note that the budget must cover not only the actual licenses but also the assessment itself and the costs of any future training and implementation for the selected tool.

As noted in “Does ROI Matter To You?” by Wolfgang Strigel, ROI is a widely used approach for measuring the value of a new or improved process or product technology. For further information on the ROI calculations that you can apply, refer to “Practical Metrics and Models for Return on Investment” by David F. Rico.
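For a rough sense of how such a calculation might look, the sketch below applies the basic ROI formula (benefit minus cost, divided by cost) to entirely hypothetical figures; the referenced papers describe more complete models.

    # Back-of-the-envelope ROI illustration; all figures are hypothetical
    # placeholders, not taken from the referenced papers.
    tool_cost = 5_000        # licenses + assessment effort + training + rollout
    annual_benefit = 12_000  # e.g. estimated hours saved on defect triage x loaded rate

    roi = (annual_benefit - tool_cost) / tool_cost
    print(f"First-year ROI: {roi:.0%}")  # -> First-year ROI: 140%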

Document the Process of Assessment

Make it clear and visible in a document what steps will be undertaken as part of the selection process. This will avoid confusion or misunderstandings as to any decision gates, short-list criteria, and possible reasons for choices made or delays in the process. You can use this article as a starting place for the outline of this document.

Determine the Method for Evaluation

Define how you are going to measure each tool against the needs of the team and against the other candidates. Each solution being assessed needs to be describable in terms of how it is better or worse than the alternatives, and the reasons why need to be recorded in an objective manner.

A sample method of evaluation may be simply to give a rating from 0 to 10 for each requirement, where:

  • 0-2 = non-existent or unusable;
  • 3-5 = present but not useful in its current form;
  • 6-8 = present but requires configuration or changes;
  • 9-10 = present with little to no configuration or changes needed.

Requirements could be enumerated and grouped into the following:

  1. Critical Requirements: List and describe the critical requirements and why they are critical to your company.
  2. Functional Requirements: List and describe what functionality or abilities the tool must have.
  3. Non-Functional Requirements: List and describe what constraints (cost, environment, quality, etc.) the tool must meet or comply with.

The following is a sample of what this section of the assessment document might look like:

ID #  | Req. Type      | Test for Requirement                            | Evaluated | Comments
CR01  | Critical       | [the specific measurable requirement to be met] | [0-10]    | [any additional comments, questions, or contextual information]
CR02  | Critical       |                                                 |           |
FR01  | Functional     |                                                 |           |
FR02  | Functional     |                                                 |           |
NR01  | Non-Functional |                                                 |           |
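One way to keep the side-by-side comparison objective is to tally the ratings in a small script or spreadsheet. The following is a minimal sketch; the requirement IDs, weights, and scores are hypothetical placeholders rather than values from any actual assessment.

    # Minimal sketch of tallying the 0-10 ratings per candidate tool.
    # Requirement IDs, weights, and scores are hypothetical placeholders.
    requirements = {
        "CR01": 3,  # critical requirements weighted highest
        "FR01": 2,  # functional requirement
        "NR01": 1,  # non-functional requirement (e.g. a cost constraint)
    }

    scores = {
        "Tool A": {"CR01": 9, "FR01": 6, "NR01": 7},
        "Tool B": {"CR01": 4, "FR01": 8, "NR01": 9},
    }

    for tool, ratings in scores.items():
        total = sum(ratings[req] * weight for req, weight in requirements.items())
        print(f"{tool}: weighted score {total}")

    # A tool that rates 0-2 on any critical requirement can simply be
    # eliminated before the totals are compared.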

At a minimum, you will need to prioritize the requirements so that you can measure each tool against what is most important to your team. In “Evaluating Tools”, Elisabeth Hendrickson recommends a face-to-face meeting as a good forum for prioritizing:

  • Invite everyone with a say in the tool selection decision.
  • Post a list of the requirements, printed large enough to be read from a distance, on the wall or whiteboard.
  • Everyone at the meeting gets three votes. (If you have a very large number of requirements, you may want to give everyone five votes instead of three.)
  • Each person may cast his or her votes in any combination: one vote each for different requirements, or multiple votes for a particularly important requirement.
  • At the end of the meeting, tally the votes; the requirements are now prioritized (a small tally sketch follows this list).
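A minimal sketch of that tally step, assuming hypothetical ballots and requirement IDs, might look like this:

    from collections import Counter

    # Hypothetical ballots from the prioritization meeting: each attendee
    # casts three votes, in any combination, against requirement IDs.
    ballots = [
        ["CR01", "CR01", "FR02"],  # two votes spent on one requirement
        ["CR01", "FR01", "NR01"],
        ["FR02", "FR02", "CR01"],
    ]

    tally = Counter(vote for ballot in ballots for vote in ballot)
    for requirement, votes in tally.most_common():
        print(requirement, votes)
    # CR01 4, FR02 3, FR01 1, NR01 1 -> the prioritized order of requirements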

Determine the Needs from Stakeholders

One of the most important steps in the tool selection process is to involve representatives of the various groups of stakeholders in enumerating the needs for the solution. It is easy to determine that testers and developers need to have their feature and workflow requirements met. But team leads, project management, technical support and others may all have inputs or need outputs from the Defect Tracking system.

Of course not all functionality is critical – there are “nice-to-have” and “maybe in the future” types of features from each group of stakeholders. Remember to look at both your process and technical needs for requirements. Also, don’t be afraid to challenge or rework the existing processes at this time – it is a great opportunity to improve and strengthen your processes (note: better doesn’t necessarily mean more). In the end the tool that you choose should support (and perhaps guide) your process, but not impose one of its own.

Re-define Needs in a Form for Evaluation

Although a need may be stated easily enough by a stakeholder, it may not be expressed in a form that the person evaluating the candidate tools can measure. Similar to business requirements in the development of a new software product, these statements express the needs for which the application must provide a solution. But in order to implement the business requirement accurately, the actual functional requirements need to be enumerated, scoped, understood, reviewed, and signed off. The same is true when assessing off-the-shelf solutions.

An example of such an expressed need might be that “the tool must be compatible with our development environment”. If this need was expanded to describe exactly how the tool is expected to be compatible by listing specific operating systems, development tools, or other third-party software with which the tool must integrate, then the evaluation of this need can be performed in a much more systematic and objective manner.

Other examples of requirements that are difficult to objectively evaluate and use to compare two solutions might be:

  • “Is completely customizable”
  • “Has an intuitive interface”
  • “Is reliable”

Select Tools for Detailed In-House Evaluation

In the case of defect tracking systems, your preliminary research will uncover a large list of possible solutions. The critical requirements should be used to limit which tools even make the first cut. This research is typically “hands-off” and will be used to finalize the first cut against the initial criteria. Remember, any tool you can eliminate at this point will allow you to spend more time on potentially better matches in the next steps of the assessment process.

In determining what your choices are, you will want to search the Web for vendor websites and relevant lists of tools (such as www.testingfaqs.org), read and participate in forums or newsgroups, possibly attend tradeshows and conferences, and ask co-workers and colleagues in other companies about tools they have experienced. If you work for a large company, ask other divisions what they use (maybe they even did a similar assessment already).

Note: During the assessment process, you may find you need to add or modify your requirements or constraints (perhaps even the budget or time for the assessment itself) in order to arrive at the best choice. Remember to keep such changes, and the reasons for them, visible to the stakeholder representatives and to those who approve the budget.

Obtain Demos of the Top Three Tools

Prior to deciding on the tools that you want to have demos for, talk with the vendors’ sales people.

Elisabeth Hendrickson warns in her paper that when you ask the sales representatives up front if their tool meets the requirements you have, it’s very likely that they will respond positively. The important part of this conversation is not when the sales person says, “We can do that.” It is when the sales person says, “…and this is how.”

It is also reasonable to ask the sales person how their tool compares to their competition:

  • How can I compare your product with other products on the market?
  • Under what situations is your tool the best choice?
  • Under what situations is your tool probably not the best choice?
  • What features differentiate your tool from the competition?
  • What don’t you like about your tool?

Again, you will want to reduce the number of tools that proceed to the actual demo step. It is more than likely that you do not have all the time you would need to look at every tool in sufficient detail, so it is better to examine a much smaller group than to sacrifice the depth of the evaluation by looking at a larger group of candidate tools.

During the demos, the objective is to get the selected tool vendors to perform a demonstration of their tool to stakeholder representatives, either in person or on-line, where they prove their earlier claims on how their tool matches all your criteria.

Perform Detailed In-House Evaluation in a Practical Environment

After the demo, obtain evaluation versions of the successful tool(s) for further evaluation on your own to identify potentially hidden problems or irritating behaviours not uncovered in the demo, or to investigate the concerns of the stakeholder representatives expressed after the demo.

Try to arrange to use the tool(s) on an actual project if possible so that the users of the system have an opportunity to experience the solution in a real-world situation and offer feedback. If this is not possible, assign resources to use the evaluation copies of the tool(s) in a simulated project environment.

During this stage of the evaluation, make sure to perform the various activities of the entire process (not just the day-to-day) that your team intends to follow.

Final Selection is Made – Roll-out the New Tool

When rolling out the new tool to your team, remember to make time for training the various stakeholders in how to apply the tool and how it addresses their needs. Also, start collecting data immediately that you can use to measure how well the tool is working in your project environments. This will allow you to track metrics over time and adjust course sooner rather than later if anything unforeseen begins to develop during implementation and ongoing usage.
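As a simple illustration of the kind of metric worth collecting from day one, the sketch below computes the average time to resolve a defect from exported records; the field names and figures are hypothetical and will differ from tool to tool.

    from datetime import date

    # Hypothetical exported defect records; field names are placeholders.
    defects = [
        {"id": "BUG-101", "opened": date(2006, 3, 1), "closed": date(2006, 3, 4)},
        {"id": "BUG-102", "opened": date(2006, 3, 2), "closed": date(2006, 3, 9)},
    ]

    days_open = [(d["closed"] - d["opened"]).days for d in defects]
    print(f"Average days to resolution: {sum(days_open) / len(days_open):.1f}")
    # Tracking this number month over month shows whether the new tool is
    # actually helping to accelerate defect resolution.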

For more information on selecting tools for your organization and defect tracking tools in particular refer to “Tracking Down a Defect Management Tool” by Hung Quoc Nguyen and “Evaluating Tools” by Elisabeth Hendrickson. The above selection process has been adapted from the tool assessment process described in “Introduction to Practical Test Automation”, a public course available through UBC Continuing Studies as part of the Software Engineering Certificate – Quality Assurance and Testing Track (http://www.tech.ubc.ca/softeng/).

About Trevor Atkins

Trevor Atkins has been involved in hundreds of software projects over the last 20+ years and has a demonstrated track record of achieving rapid ROI for his customers and their business. Experienced in all project roles, Trevor’s primary focus has been on the planning and execution of projects and improvement of the same, so as to optimize quality versus constraints for the business. LinkedIn Profile