New Project? Where are the Templates?

A guiding principle in the software industry, considering the wide range of project scope and constraints, is to “use the processes and tools appropriate to the size, complexity, and impact of your project”.

For instance, each of the following situations demands a different approach to process and tools:

  • Your company has had to reduce staff and now works with a very tight budget.
  • Your company’s recent successes have driven significant growth, but these days project closure is not as tidy as it once was.
  • There are strong demands on the level of documentation detail you are required to have in place for each project: you are targeting the European ISO-based market, your client expects you to develop to the higher CMM standards, or you are developing devices to strict government (e.g., FDA) requirements.
  • On your large or multi-phase project you find that you are reinventing the wheel each time someone leaves and key information disappears with them.

We are all familiar with templates – applications that create artifacts typically come with a number of them, with more available for download. Templates are provided with such applications to give the user a starting place for something unfamiliar or something that is repeated often. They provide formatting and style for documents and presentations, supply canned code modules for development, and capture procedures and organize the data gathered.

Templates and Their Benefits

Documented methodologies and practices typically include:

  • Process Guides and Procedures – form an overview of the “how and why” steps in the software development, quality, and project management processes, providing a reference and assistance for bringing new staff up to speed quickly.
  • Checklists – succinctly capture the necessary steps to be taken at key points throughout a process, and can be used as safeguards and memory triggers.
  • Templates – support standardized formatting and content consistency for each artifact, and can explicitly define what information should be contained in each document via inline guidance text.

Once the templates have been created and the supporting guidance documents are complete and consistent, you can benefit from rapid reusability and ease of customization. Customizing for a new project simply consists of inserting project specific needs and editing out sections that do not apply this time around – a much simpler task than adding information to an incomplete collection of previous examples or creating them from scratch each time.

Within a template you can capture aspects of industry best practices and then make the conscious choice to address them or not for your project. In fact, templates are a strong component of achieving and maintaining CMM Level-2 Repeatable, where the goal is to show the adherence of software products and activities to applicable standards, procedures, and requirements. This goal would be met in part by the adoption of a set of checklists and document templates that can be applied as project standards. Adding a simple web page to your Intranet that outlines the project lifecycle and which templates to use at each stage will make it quick and easy to communicate these artifacts throughout your organization.

If all team members use a common format, it is much easier for the entire team to create the necessary artifacts and interpret and apply the recorded information. When working with 3rd parties or outsource organizations, templates are an excellent vehicle to ensure the consistency of the work produced – to the point where internally and externally produced artifacts should not appear significantly different – and this greatly facilitates the review of these deliverables.

The following are some of the benefits that can be attributed to proper use of templates:

The Organization
  • New staff are up and running quicker since key practices and procedures are documented and accessible.
  • Kick-start to the creation of project artifacts.
Senior Management
  • Assurance of predictability and repeatability throughout development organization via clearly defined processes.
Project Managers & Team Leaders
  • A framework for defining and communicating a repeatable process.
  • Consistent format and standard for content of deliverables.
  • Access to best practices and their implementation.
  • Facilitates communication of details.
QA & Test Professionals
  • Reduced chance of error through access to documented, consistent procedures and processes.
  • Ability to ensure the project team follows the defined quality standards and guidelines.
Programmers
  • Reduced chance of error through access to documented, consistent procedures and processes.
  • Reduced rework through improved communication and consistent level of details.

The following is a brief sample from a template that contains guidance text for what information is to be captured in the given section:

1. Project Overview

Describe the background and context for the project and why it is being undertaken. Speak to the business value of the work being performed. This section can draw from the Project Vision or similar document. (Remove this comment section from final document.)

1.1 Quality Standards

List any quality standards that the company or organization has previously defined that this project will follow. (Remove this comment section from final document.)
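
The marker sentence in the sample above suggests a simple automation: stripping the guidance paragraphs once a template has been filled in. Below is a minimal sketch in Python, assuming guidance text always ends with that exact marker; the function name and sample document are illustrative:

```python
# Hypothetical helper: strips inline guidance text from a filled-in template.
# Assumes every guidance paragraph contains the marker sentence shown above.

GUIDANCE_MARKER = "(Remove this comment section from final document.)"

def strip_guidance(text: str) -> str:
    """Return the document with guidance paragraphs removed.

    A paragraph is treated as guidance if it contains the marker.
    """
    paragraphs = text.split("\n\n")
    kept = [p for p in paragraphs if GUIDANCE_MARKER not in p]
    return "\n\n".join(kept)

# Illustrative filled-in template: one heading, one guidance paragraph,
# and one paragraph of real project content.
template = (
    "1. Project Overview\n\n"
    "Describe the background and context for the project. "
    + GUIDANCE_MARKER + "\n\n"
    "The Alpha project replaces the legacy billing system."
)

print(strip_guidance(template))
```

Keeping the marker text identical in every template is what makes this kind of cleanup step trivial to automate.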

Proceed with Caution

A common mistake with templates is to focus on following the structure of the template, rather than allowing the headings and guidance text to function as a framework for quality content. By themselves, process and tools cannot lead a project to success, and a template is just another tool. Appropriate skill and experience in the project team, and the ability to collect and analyze information in the context of the project and each team member’s role, are necessary for successful projects.

Matthew Edwards points out in “Basic Test Form Templates” that “…management can direct a test engineer to prepare a test plan, and even provide training and a planning template. But, there’s no guarantee that the plan will be a good one.”

In “Are Templates Dangerous?”, David Gelperin describes the hazard as follows: “Those who understand testing as a game of checkers see test documentation as an exercise in filling in the blanks of a template. Those who understand the chess-like complexities of testing see doc templates as a guide to recording the results of a difficult decision process.”

A template can identify and organize the important elements for a given artifact; however, to capture the intent of those elements as part of an effective project solution requires experience and insight.

Summary

Although the ROI from creating and using templates is significantly enhanced when implemented as part of an on-going process improvement initiative, defining templates for critical project artifacts is a good first step on the way to allowing your organization to:

  • Define your development, quality and project management processes
  • Implement best practices and standards throughout your organization
  • Communicate these policies, procedures and standards throughout your software organization
  • Create and maintain a single reference point for your entire organization for policies, procedures and standards

Which in turn can lead to:

  • Greater control over projects
  • More effective change management
  • Standardized, streamlined and repeatable processes across your organization
  • Managed schedules and predictable project costs
  • Increased customer satisfaction

“Bad User” Testing

How can you be sure that an application will behave properly when users perform actions or combinations of actions that were not considered during the development of the functionality? During the testing phase, you have to plan for what is sometimes called “bad user” testing, or negative testing.

Boris Beizer’s definition of negative testing in “Software Testing Techniques” is: “Testing aimed at showing software does not work”.

In his paper, “A Positive View of Negative Testing”, James Lyndsay stated the objectives of bad user testing to be:

  • Discovery of faults that result in significant failures; crashes, corruption and security breaches
  • Exposure of software weakness and potential for exploitation
  • Observation and measurement of a system’s response to external problems

Why Is Bad User Testing Important?

During negative or bad user testing, the tester seeks to abuse the functionality of the product in an effort to create odd program states by exercising functionality that deals with state management, input validation, boundary conditions, fault recovery, and more.

Bad user testing is generally performed as part of integration or system testing and does not have a distinct phase of its own. The basic principle is that if the application survives the tester’s bad user tests without error, there is a significantly lower chance that users will find such defects later on.

As Lyndsay points out in his paper, negative testing can find significant failures, can produce invaluable strategic information about the risk model underlying the testing, and can build overall confidence in the quality of the system.

Where To Start?

It is possible to design a bad user test plan starting from the specification documentation. The most important consideration when designing a bad user test plan is to test not what is described, but what is not. The tester should treat the specification documentation as a guide to where the boundaries of the software lie, and then look beyond those boundaries to the extreme edges of the software’s functionality. Employ your creative destructiveness to take you beyond these boundaries and perform actions and tasks that you are sure will fail, or should be impossible. These are the areas where the most interesting defects can be found.

Of course, each product is specific and all tests cannot be applied in all circumstances. The few brief examples below can be used as a guide when starting a bad user test plan. Use your creativity to add to and expand this list.

Generic Bad User Test Scenarios

What is expected behaviour when you:

  • Manually shut down, or reboot, the computer while the application is running
  • Manually restart the computer by pushing the reset button while the application is running
  • Restart the computer while the application is running using the start menu
  • Log off while the application is running

Boundary Test Scenarios

What is expected behaviour when you:

  • Attempt to go below the minimum input limits
  • Exceed the maximum input limits
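
The boundary scenarios above can be made concrete with a small sketch. Here a hypothetical input validator (the `validate_quantity` name and the 1–100 limits are assumptions for illustration) is probed at, and just beyond, its limits – the exact places a “bad user” is likely to land:

```python
def validate_quantity(value: int, minimum: int = 1, maximum: int = 100) -> bool:
    """Accept a quantity only if it is within the inclusive limits."""
    return minimum <= value <= maximum

# Boundary and just-beyond-boundary cases the "bad user" might supply:
cases = {
    0: False,    # below minimum
    1: True,     # at minimum
    100: True,   # at maximum
    101: False,  # above maximum
}

for value, expected in cases.items():
    actual = validate_quantity(value)
    print(f"validate_quantity({value}) -> {actual} (expected {expected})")
```

A defect at one of these edges (for example, using `<` where `<=` was intended) is exactly the kind of error that only surfaces when the tester deliberately walks the boundary.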

Stress Test Scenarios

What is expected behaviour when you:

  • Run the program concurrently with many other programs
  • Set memory to a stressed state (low Virtual Memory, low RAM)
  • Run on slower or older machines
  • Load large volumes of data
  • Create large numbers of concurrent connections or open files

Performance Test Scenarios

What is expected behaviour when you:

  • Test on a system lower than the minimum configuration requirements
  • Open extremely large files
  • Attempt to import a corrupt file
  • Create a situation requiring an error message

Install/Uninstall Test Scenarios

What is expected behaviour when you:

  • Run the installation script again while the application is running
  • Cancel the installation midway through the install using the task manager
  • Install the application onto a hard drive without enough free space
  • Install the application onto a network drive and disconnect the drive partway through
  • Uninstall the application when the application is still running
  • Change or remove the registry settings before trying to uninstall the application

Summary

Bad user testing is performed by the tester to find weaknesses in the design or the code by attempting actions that are likely to occur after deployment. Although it is hard to make predictions about real-world use, real users will find all ways of using the system including those that were considered unreasonable or improbable.

Including this type of testing as part of your project will result in a more robust and user-friendly application and will of course save effort and costs after the product is released.


Error Messages and How to Improve Them

Error messages are displayed by applications in response to unusual or exceptional conditions that can’t be rectified within the application itself.

The need for “useful error messages” can be defined, in the simplistic case, to be a need for some form of error handling and reporting that enables the user to understand what has happened in the case of an error and what must be done to remedy the situation.

Most testers are no doubt familiar with the feeling of reluctance to log usability issues, fearing that they could be misunderstanding the functionality or they are “wasting valuable time reporting trivial bugs”. The project team can further drive this feeling by tending to postpone or ignore such issues under the premise that “at least there is some feedback isn’t there?”, or “there isn’t going to be time to address those kind of issues”, and besides “the user wouldn’t do that.”

Issues with Error Messages

“Error messages are often less than helpful or useful because they’re written by people who have an intimate knowledge of the program. Those people often fail to recognize that the program will be run by other people, who don’t have that knowledge.” Michael Bolton, 1999.

Furthermore, Byron Reeves and Clifford Nass suggest in ‘The Media Equation’, that even text-only interfaces are felt by users as having some “personality” and that “people respond socially and naturally to media.”

As noted by Julianne Chatelaine in ‘Polite, Personable Error Messages’, Byron Reeves and Clifford Nass determined that if the application does not have the ability to assess each user’s personality and adapt to it, the next best thing is to select one personality or tone and be consistent to avoid contributing to confusion and even dislike. The published TME findings were underscored by Nass’ remarks at UPA ’97 where he said that when an application’s textual messages were written by a variety of different people, using different styles and degrees of strength or dominance, it made the product seem “psychotic.”

Guidelines for Error Messages

“You may design the perfect system but eventually, your system will fail. How it does so, however, can make all the difference in the world in terms of usability.” Tristan Louis, ‘Usability 101: Errors’.

“The guidelines for creating effective error messages have been the same for 20 years.” Jakob Nielsen, ‘Error Message Guidelines’.

The following checklist, compiled from several of the referenced sources, will help you confirm that your application meets basic usability requirements with respect to error messages.

  • Message Exists: the problem with an error is often that no message is actually attached to it. Notify the user when the error happens, every time it happens. The error may be due to a flaw in the software or a flaw in the way the user is using the software but if the user doesn’t know of the error, they will assume that the problem is with the software.
  • Polite Phrasing: the message should not blame users or imply that they are either stupid or doing something wrong, such as “illegal command.”
  • Visible and Human-readable: the message should be highly noticeable and expressed clearly in plain language using words, phrases, and concepts familiar to the user rather than in system-related terms.
  • Precise Descriptions: the message should identify the application that is posting the error and alert the user to the specific problem, rather than a vague generality such as “syntax error”.
  • Clear Next Steps: error messages should provide clear solution steps and/or exit points. An application should never capture users in situations that have no visible or reasonable escape.
  • Consistent: users should not have to wonder whether words, icons, colours, or choices mean the same thing or not in different instances.
  • Helpful: the message should provide some specific indications as to how the problem may be resolved and if possible let users pick from a small list of possible solutions. Links can also be used to connect a concise error message to a page with additional background material or a detailed explanation of the problem and possible solutions. Finally the message should provide extra information, such as an identifying code, so that if technical support is helping the end-user they can better analyze and remedy the problem.
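
Several items on this checklist – a precise description, clear next steps, and an identifying code for technical support – can be folded into a single message-building helper. The sketch below is illustrative only; the function and its fields are not drawn from any particular UI toolkit:

```python
def format_error(app: str, problem: str, next_steps: str, code: str) -> str:
    """Compose an error message with the app name, a plain-language problem
    description, concrete recovery steps, and a code for technical support."""
    return (
        f"{app}: {problem}\n"
        f"What you can do: {next_steps}\n"
        f"Support code: {code}"
    )

# Contrast this with a bare "syntax error": the app is named, the problem
# is stated in the user's terms, and an escape route is offered.
message = format_error(
    app="ReportBuilder",
    problem="The report could not be saved because the disk is full.",
    next_steps="Free up some space or choose a different drive, then try again.",
    code="RB-1042",
)
print(message)
```

Centralizing message construction in one helper also enforces the consistency of tone the checklist calls for, since every message passes through the same format.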

Error Message Presentation

When deciding on the style your error messages will adhere to, you should consider the presentation of your error message:

  • Tone: be firm and authoritative, stating the facts of the situation in a neutral and business-like manner.
  • Colour: an error message printed in red may call attention to itself, but using colour as the sole way to present an error message is generally a poor idea. People who are colour blind, for example, will not perceive any additional meaning attached to the text.
  • Language: if your application is used by people in different countries, consider that your error messages will have to be translated and need to be presented in a format flexible enough to accommodate the translated text.
  • Icons: if you use icons to present your error messages make sure they are intuitive to the end-users and that they are appropriate to the circumstance of the error message.

To highlight how careful you should be when considering icons, Tristan Louis cites in ‘Usability 101: Errors’ the case of the Apple Macintosh, which on crashing used to show an icon of a little bomb with a burning fuse in the error dialog. He comments that users in many countries were terrified by this icon and would not touch the computer for fear that it would actually explode.

Summary

Remember that errors will happen but what will make all the difference is if they are handled properly. Unclear and unhelpful error messages tend to mean that errors will recur, or take longer to resolve. The resultant frustration can lead users to mistrust the interface or even abort the task in question.

Your error message must convey useful information — useful information saves time and for more than just the end-user. The message will also need to be understood by and useful to the technical support person who handles the call, the quality assurance analyst who helps to track down or replicate the problem, and the maintenance programmer who is charged with fixing the problem in the code. Each person in this process represents a cost to your company, cost that could be greatly mitigated by a small investment made now.


Pot-holes on the Road to Automation

Testing costs can be a significant part of a project, with software project managers spending up to half of their budget on testing. But how do you make testing more cost effective, so that you are getting more done with less?

One effective solution is automated testing or “tool assisted test activities performed with the objective of evaluating the software against pre-defined results/expectations that require no operator input, analysis, or evaluation.” [QA Labs Inc., 1999]

Many people try to add automation to their projects, only to end up frustrated and annoyed. After one or two disastrous attempts, many just give up and stop trying. Implementing automated testing is, however, a basic cost-benefit analysis.

It is well-recognized that an automation undertaking will require significant investment up-front before actual savings can be experienced. At the same time, the positive effects of having automation can be experienced by the organization in advance of the anticipated actual break-even point.

To help approach the undertaking of automation from a realistic perspective, keep in mind that automated testing is not:

  • Immediate effort reduction
  • Immediate schedule reduction
  • A silver bullet to find all the application defects
  • Automatic Test Plan Generation for 100% test coverage
  • One tool fits all and is easy to use
  • Cheap to implement

The savings and benefits will come, though, if you recognize and plan for the following.

Avoiding Pot-holes

The following are some typical issues that those implementing test automation run into:

  • Pot-hole: Test automation is not treated as a project with proper project planning and design
    Solution: Treat test automation as you would a development project and manage the scope, resources and schedule appropriately. Implement a pragmatic approach to testing such that: the project can be decomposed into modular, defined tasks with assigned resources and timelines; others can easily carry forward the process that has been defined; the effort and results are quantifiable; each test cycle becomes more efficient in uncovering defects; and the most critical test types and application functionality are targeted.
  • Pot-hole: No reusability (use of functions and utilities) in automation scripts
    Solution: Implement an effective test automation framework through abstracting navigation, data access, verifications, reporting, and other common functions into libraries to modularize scripts thereby minimizing maintenance costs as there are changes to the application functionality.
  • Pot-hole: Testers untrained in programming techniques are assigned automation tasks
    Solution: Testers performing test automation must be able to create and maintain automated test scripts. This requires strong knowledge of software development practices, experience with procedural programming languages, and experience with the test automation tool to be used.
  • Pot-hole: Automation test suite is not maintained
    Solution: Test suites need to be maintained with each new build and release of an application. Maintenance of robust scripts typically requires ~10% of the time of originally creating the automation scripts, assuming an automation framework is firmly in place and that major additions or redesigns are not being done to the application.
  • Pot-hole: Testing is typically performed at the end of the project life cycle
    Solution: The test process should begin where the development process does, at the beginning. Moving testing up the life cycle increases the ability to find defects sooner and provides more time for effective test planning, design, execution and tracking, and stability for automation.
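
The reusability pot-hole above can be illustrated with a toy framework: navigation, verification, and reporting live in shared helpers, so individual test scripts stay short and maintenance is localized in one place. The driver object here is a hypothetical stand-in for whatever automation tool is actually in use:

```python
# Sketch of a test automation framework: common functions are abstracted
# into helpers so that a UI change is fixed once, not in every script.

class TestReport:
    """Shared reporting: every script records results the same way."""
    def __init__(self):
        self.results = []

    def record(self, name: str, passed: bool):
        self.results.append((name, passed))
        print(f"{'PASS' if passed else 'FAIL'}: {name}")

def navigate_to(driver, page: str):
    """Shared navigation helper: one place to change if the UI flow changes."""
    driver.open(page)

def verify_title(driver, expected: str, report: TestReport, test_name: str):
    """Shared verification helper used by every script."""
    report.record(test_name, driver.title == expected)

class FakeDriver:
    """Stand-in for a real automation tool's driver object."""
    def open(self, page):
        self.title = "Login" if page == "/login" else "Unknown"

# An individual test script reduces to a few framework calls:
report = TestReport()
driver = FakeDriver()
navigate_to(driver, "/login")
verify_title(driver, "Login", report, "login page loads")
```

When the application's navigation changes, only `navigate_to` needs updating – which is precisely the maintenance saving the solution above describes.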

Scripting Best Practices

Test Automation Scripts are software. And even if the current intended use is for testing the current project only, you never know where those scripts could end up, or for what other purpose they could be used.

You may develop a framework and individual scripts for one version of the product, but require subtle modifications for a version of the product customized for another customer.

You may have large automation effort ongoing, including potentially multiple applications or versions of your product and have multiple people modifying the core framework scripts.

To help in succeeding with an automation undertaking, keep the following best practices at the front of your mind:

  • Document
  • Manage expectations
  • Keep in mind the overall project testing goals
  • Plan to control progress and recovery of the execution of the testing
  • Control the scope of the automation
  • Use coding standards when writing your scripts
  • Version Control is important
  • Get early feedback
  • Test your scripts!

Summary

“… when all the pieces come together – the right people, the right processes, the right time, the right techniques, the right focus – then we can achieve truly impressive returns on our testing investment.” [Investing in Software Testing, Rex Black]

If implemented thoughtfully, the automated test suite will prove to be much more efficient than manual testing in terms of hours spent and defects uncovered in previously manually tested functionality. The automation suite can be left unattended to run at night, on weekends, and over holidays. The tools never get bored or tired, never assume the application or architecture works, and can emulate as many users as needed, accessing the application and performing any mix of transactions desired.

Therefore, the return on investment (ROI) of test automation can be tremendous, as long as you can avoid the pot-holes along the way.



Counting On Requirements

How do you know what a system is supposed to do and what it is not supposed to do? Requirements are intended to create an easily validated, maintainable and verifiable document describing a system’s planned functionality.

What is lacking in today’s requirements? Requirements are typically described in natural language because both the customer and vendor must understand them. However, many words and phrases have meanings that can be interpreted based on the context in which they are used. Requirements described in this form can have several severe problems including: ambiguity, inaccuracy and inconsistency.

The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. The success of a project, both functionally and financially, is directly affected by the quality of the requirements. [Doing Requirements Right the First Time!, Theodore F. Hammer, Linda H. Rosenberg, et al., 1998]

Who Needs Requirements?

The communication of requirements is the most critical, difficult, and error-prone task in IT projects. Research has shown that projects that proceed to the coding phase with missing or incorrect requirements are almost certain to fail. [A Systematic Approach for More Effective Communication of Functional Requirements and Specifications, Bill Walton]

Consider the various stakeholders or audiences for the requirements – on which each is dependent for their own role in the project lifecycle:

  • Customer – Statement of work for vendor, acceptance criteria
  • Marketing – Documented product capabilities
  • Project Management – Estimates, project plans, project goals, risk planning
  • Development – Design and coding of the system
  • Testing – Verification of the system
  • Technical Writing – User manuals and tutorials

Unless there is effective communication of information, the same words can mean different things to each party, or different words can appear to say the same thing, becoming a source of misunderstanding and therefore of error.

Quality Requirements

Customer dissatisfaction often surfaces at the acceptance phase as discrepancies appear between what was built and what the customer thought was being built – a clear result of a lack of quality requirements.

Some of the commonly recognized crucial attributes of quality requirements are:

  • Completeness
  • Consistency
  • Correctness
  • Understandability
  • Unambiguousness

These quality attributes tend to be considered subjectively. However, some of them can be linked to indicators that provide evidence of whether the attribute is present. The NASA Goddard Space Flight Center’s (GSFC) Software Assurance Technology Center (SATC) defines categories of quality indicators related to individual specification statements as: Imperatives, Continuances, Directives, Weak Phrases, Incomplete, and Options.

The SATC’s studies give the following specific words and phrases to be indicators of a document’s quality as a specification of requirements:

  • Imperatives – Words and phrases that command that something must be done or provided. The number of imperatives is used as the base requirements count. E.g.: shall, must or must not, is required to, are applicable, responsible for, will, should
  • Continuances – Phrases that follow an imperative and introduce the specification of requirements at a lower level, for a supplemental requirement count. E.g.: as follows, below, following, in particular, listed, support
  • Directives – References provided to figures, tables, or notes.
  • Weak Phrases – Clauses that are apt to cause uncertainty and leave room for multiple interpretations; a measure of ambiguity. E.g.: adequate, as applicable, as appropriate, as a minimum, be able to, be capable of, but not limited to, easy, effective, if practical, normal, timely
  • Incomplete – Statements within the document marked TBD (To Be Determined) or TBS (To Be Supplied).
  • Options – Words that appear to give the developer latitude in satisfying the specifications but can be ambiguous. E.g.: can, may, optionally

The SATC developed the Automated Requirements Measurement (ARM) tool which project managers can use to assess the quality of their requirements documents easily and on an on-going basis during the life of the documents. The ARM tool searches the requirements document for terms the SATC has identified as quality indicators.
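
The ARM approach can be approximated in a few lines: scan the document for indicator terms and tally them by category. The word lists below are abbreviated from the categories above for illustration; a real analysis would use the SATC’s full sets:

```python
# Toy version of the ARM idea: count SATC-style quality indicators in a
# requirements document. Word lists are abbreviated samples, not the
# full SATC sets.
import re

INDICATORS = {
    "imperatives": ["shall", "must", "is required to", "will"],
    "weak_phrases": ["as appropriate", "be capable of", "adequate", "timely"],
    "options": ["can", "may", "optionally"],
    "incomplete": ["tbd", "tbs"],
}

def count_indicators(text: str) -> dict:
    """Tally whole-word occurrences of each indicator term, by category."""
    lowered = text.lower()
    return {
        category: sum(len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
                      for term in terms)
        for category, terms in INDICATORS.items()
    }

sample = ("The system shall log all transactions. Response times must be "
          "adequate. Export format: TBD.")
print(count_indicators(sample))
```

Run periodically over a living requirements document, even a crude tally like this makes trends visible – a rising weak-phrase count is an early warning of creeping ambiguity.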

Requirements Style Guide

Similar to a development coding standard, a requirements style guide – outlining when and where such terms and structures must, or alternatively may, be used – can help maintain control over ambiguity, consistency, and completeness.

However, although many organizations have published documentation standards that include standards for specifying requirements, none are universally accepted. The standards that are imposed seldom go beyond providing an outline or template of the general information to be provided. In many cases no style guidelines are established. As a consequence, requirements documents from various sources tend to bear little resemblance to one another. [Automated Quality Analysis Of Natural Language Requirement Specifications, William M. Wilson, Linda H. Rosenberg, Lawrence E. Hyatt]

The ultimate symptom of vague requirements is that developers have to ask the author, analyst or customers many questions, or they have to guess about what is really intended. The extent of this guessing game might not be recognized until the project is far along and implementation has diverged from what is really required. At this point, expensive rework may be needed to bring things back into alignment. [Karl Wiegers Describes Ten Requirements Traps to Avoid, Karl Wiegers, 2000]

Karl goes on to suggest that requirements authors avoid using subjective and ambiguous words like minimize, maximize, optimize, rapid, user-friendly, easy, simple, often, normal, usual, large, intuitive, robust, state-of-the-art, improved, efficient, flexible, “and/or” and “etc.”. Indicate current uncertainties or areas to further clarify with “TBD” markers to make sure they get resolved before design and coding proceeds.

Summary

Quality documentation is complete, clear and concise. These used to be considered intangible concepts, difficult to measure. With a well defined style guide that addresses the quality attributes like those identified by SATC, metrics can be rapidly developed and analyzed to reveal the strengths and weaknesses of the requirements documentation.

Most projects come with that sense of urgency attached – we need to start coding and we need to start now. In this kind of rushed atmosphere, it’s hard to convince yourself to take the time to even perform documentation activities at all, let alone to develop and apply a new documentation technique.

So why should you go to the trouble? Because addressing the aspects of quality (or lack thereof) on a project leads to working smarter rather than harder, better products, more satisfied customers, and higher profits. But the perfect and easy time for doing the things you know you should do will never come.

Remember, industry data suggests that approximately 50 percent of product defects originate in the requirements. Perhaps 80 percent of the rework effort on a development project can be traced to requirements defects. Anything you can do to prevent requirements errors from propagating downstream will save you time and money. [Inspecting Requirements, Karl Wiegers, 2001]

Read “A Methodology for Writing High Quality Requirement Specifications and for Evaluating Existing Ones” by Linda Rosenberg for more information on the NASA SATC quality attributes (slides 37-68) and a description of the free ARM tool (slides 125-133, no longer available for download).

Posted in All, Estimation for Testing, Planning for Quality, Requirements & Testing | Comments Off on Counting On Requirements

Estimating for Testing

Each of us has very likely had to do an estimate in the past, whether it was for a set of assigned tasks, for a project, or for an entire organization. As a tester, the question is commonly presented as, “How long will it take to test this product – and what resources will you need?” and then the person asking stands there, likely somewhat impatiently, and waits for the answer.

One common approach is to not tie test effort to any definitive estimate at all: testing simply continues from when the code is ready until some pre-decided deadline set by management is reached. Another common approach is to estimate testing effort as a percentage of development effort: development is estimated using a technique such as Lines of Code or Function Points, and the allocated test effort is derived from it using a pre-determined ratio. Both practices rely heavily on the test team’s ability to work to the strategy and uncover the significant defects up front, yet leave little room to invest in planning the test effort or creating tests. Neither is based on an assessment technique that takes into account the additional complexities of the test effort, such as deployment configurations and human-language support.

As noted by Capers Jones in Assessment and Control of Software Risks, most projects overshoot their estimated schedules by anywhere from 25% to 100%, but a few organizations have achieved consistent schedule-prediction accuracy to within 5% to 10%. Just as it is critical to offer something more than an off-the-cuff answer for the development activities, it is important to know how to perform an estimate of the testing effort for a project.

A good, simple definition of an estimate includes three things: a description of the size or scope of the undertaking, the level of confidence or uncertainty in the estimate at the time it is made, and a description of the technique used to arrive at it. “It is very difficult to make a vigorous, plausible, and job-risking defense of an estimate that is derived by no quantitative method, supported by little data, and certified chiefly by the hunches of the managers,” according to Fred Brooks in The Mythical Man-month.

Getting a Starting Point

The basic elements to consider when estimating test effort are the size of the system to be tested, the components of the system, the quality requirements for each component, the resources available, and the productivity of those resources. From these elements we can first determine the overall size of the test effort in terms of test cases or verification points (eg: within a Use Case). Then, considering resource availability and productivity, the total effort and schedule can be determined.

Regardless of the uncertainties and risks that may come into play at the beginning of or during the project, we still need a starting number around which to base our estimate. A well-defined requirement or specification, being a structured document (or documents) that likely follows certain authoring standards and describes the scope of the system to be produced, is the best source for producing an estimate. These qualities allow the system to be sized using a variety of techniques that can quantify both the system’s functionality and its complexity (eg: performance, stress, security, and other non-functional requirements).

With an algorithmic approach to generating an estimate, the first step is to enumerate the collected requirements. If a requirements style guide has been used, it should be easy to identify the number of requirements captured in the text. Also consider the number of screens and the input fields on each. You may find it useful to group the requirements by type (imperative, weak phrase, list, etc.) and weight each type by its estimated number of tests. Next, further group and weight the requirements by complexity and by how readily developers can implement them. Here you can make use of historical information from the organization’s or team’s past projects – perhaps look at bug counts for modules similar to those in your project, or the estimate overruns for the different phases of the project.
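The counting-and-weighting step can be sketched as follows. The per-type weights, requirement counts, and complexity factor here are invented for illustration; in practice they would come from your own historical project data:

```python
# Hypothetical weights: estimated test cases per requirement of each type,
# ideally calibrated from past projects rather than guessed.
TESTS_PER_REQUIREMENT = {"imperative": 3, "weak_phrase": 5, "list_item": 1}

# Requirements counted from the specification, grouped by type (sample counts)
requirement_counts = {"imperative": 40, "weak_phrase": 12, "list_item": 25}

# Complexity multiplier drawn from historical bug data for similar modules
complexity_factor = 1.2

# Weighted sum of requirements gives a raw test-case count,
# then the complexity factor scales it for this project's difficulty.
test_cases = sum(TESTS_PER_REQUIREMENT[t] * n
                 for t, n in requirement_counts.items())
estimated_tests = round(test_cases * complexity_factor)
```

With these sample numbers the model yields 205 raw test cases, scaled to 246 after the complexity adjustment; the value of the exercise is less in the final number than in making each weight explicit and open to challenge.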

That sounds straightforward and simple; just count the requirements, group them, and apply a set of formulae. But is it that easy? As Steve McConnell comments in Rapid Development about the accuracy of the first estimate of the project, “Some organizations want cost estimates to within plus-or-minus 10 percent before they’ll fund work on requirements definition. Although that degree of precision would be nice to have that early in the project, it isn’t even theoretically possible. That early, you will do well to estimate within a factor of 2.”

More Than Test Execution

Depending on information and time available, your formulae can be made increasingly complex to factor in different influences and trade-offs in scope, resources and schedule. As more understanding of what influences your estimates is gained and more iterations of the estimate are completed you may find your model increasing in sophistication; similar to the increase in understanding gained between Dalton’s and Bohr’s atomic models.

However, you can still rapidly build a list that accounts for all the activities you perform on a project beyond actual test execution itself, such as:

  • Reviews of requirements and designs
  • Test strategies and test plans (including test cases)
  • Test analysis/matrices and test data preparation
  • Test automation

These “overhead” factors to test execution depend on the quality requirements and extent of investment in upfront planning. The percentage of effort as it relates to test execution can often be directly tied back to the number of test cases or verifications calculated earlier and therefore be ‘formularized’.
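Tying the “overhead” activities back to the execution estimate might look like the sketch below. The hours-per-test figure and the overhead percentages are placeholders, not recommended values:

```python
def total_test_effort(test_cases, hours_per_test, overhead_rates):
    """Execution effort plus overhead activities expressed as fractions of it."""
    execution = test_cases * hours_per_test
    overhead = {name: execution * rate for name, rate in overhead_rates.items()}
    return execution + sum(overhead.values()), overhead

# Illustrative rates: reviews, planning, analysis/data prep, automation
rates = {"reviews": 0.10, "planning": 0.15, "analysis": 0.20, "automation": 0.25}
total, breakdown = total_test_effort(test_cases=246, hours_per_test=1.5,
                                     overhead_rates=rates)
```

Because every overhead line is a ratio against the execution figure, re-running the formula after each refinement of the test-case count updates the whole estimate consistently.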

Uncertainty Factors, Multipliers, and Other Influences

Counting the requirements and applying formulae is certainly the basis of the approach, however there are a number of uncertainty factors, multipliers and influences to be considered when examining the project for test effort.

  • Are requirements, designs, plans, and so forth available and are these documents clear, concise, and accurate?
  • Do project stakeholders have realistic expectations in terms of schedules and functionality?
  • Are there clearly defined milestones during the project for testing? (eg: Alpha, Beta, Gold)
  • How well managed are the change control processes for project and test plans, requirements, designs, and code?
  • Does the project team have the skills, experience, and tools needed for this project?
  • Is the project team established or is there expectation of ramping up or turnover during the life of the project?
  • To what extent can the project re-use test assets from previous projects?
  • What is the required investment in the test environment set-up and maintenance?
  • Have meetings, vacations, and sick times been built into the schedule?
  • How many builds are planned to be delivered to testing? What if there are additional builds required or what if one is delayed?
  • How many deployment configurations are to be supported and need to be tested? Do all of them need to be tested to the same degree?
  • How many human languages are to be supported? Are special skills required for this type of testing?
  • What amount of non-functional testing is required or planned?

All of the above uncertainty factors can be mitigated through upfront planning and investment. But if you don’t have the time to address training issues, requirement reviews for clarity and testability, or change control standards make sure to take this into account when considering the certainty of your estimate.
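One crude way to fold such factors into the numbers is a set of multipliers applied to the base estimate. Every factor name and value below is invented for illustration; your own list would come from the questions above and from past-project experience:

```python
# Illustrative risk multipliers: values above 1.0 inflate the estimate,
# values below 1.0 reflect mitigating factors, 1.0 is neutral.
risk_multipliers = {
    "unclear_requirements": 1.3,
    "team_ramp_up": 1.15,
    "many_configurations": 1.2,
    "reusable_assets": 0.9,   # mitigation: test assets carried over
}

base_estimate_hours = 627

# Apply each factor in turn to the base estimate
adjusted = base_estimate_hours
for factor, multiplier in risk_multipliers.items():
    adjusted *= multiplier

adjusted = round(adjusted)
```

Keeping the multipliers visible, rather than buried in one padded number, makes it easy to show a stakeholder exactly which uncertainty is driving the estimate up, and what mitigation would bring it down.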

Finally, don’t just have one person do the estimate. Discussing the differences in numbers can surface and clarify assumptions, or reveal the advantages of one approach over another. Don’t forget to include a level of confidence in each phase of the estimate and in the final overall number. A Guide to the Project Management Body of Knowledge (PMBOK Guide), from the Project Management Institute, defines an estimate as: “An assessment of the likely quantitative result. Usually applied to project costs and durations and should always include some indication of accuracy (+- x percent).” As the estimate is refined over the course of the project, the effort figure may change, but the confidence level should rise.

Summary

Benefits of Quality and Costs of Quality sit in a balance, and it is important to find and commit to the right equilibrium when facing the continuing challenge of never enough time for testing. There are many approaches to estimating the test effort of a project. The approach outlined above is in no way rigorous, but it lets you attack the task in a systematic manner, with a defined technique and supporting data – a significant practical advantage over ad hoc techniques, and a basis for further research and experimentation to improve the methods used to arrive at valid estimates.

Posted in All, Estimation for Testing, Planning for Quality, Test Planning & Strategy | Comments Off on Estimating for Testing

Testing Without Requirements

A typical software project lifecycle includes such phases as requirements definition, design, code and fix. But, are you shipping software applications with minimal to no requirements and little time for testing because of time-to-market pressures? Build it, ship it, then document and patch things later.

‘Time-to-market’ products can lack detailed requirements for use by testing because of their huge pressures for quick turnaround time. You don’t want to slow down development, or testing, by having to create detailed documentation. At the same time, the test effort needs to be useful and measurable. So, how can this product be tested to achieve adequate effective coverage and overall stability of its functionality?

A Starting Point

Effective ad-hoc testing relies on the combination of tester experience, intuition, and some luck to find the critical defects. Adequate test coverage involves a systematic approach that includes analyzing the available documentation for use in test planning, execution, and review. Ad-hoc testers can greatly benefit from up-front information gathering, even when they don’t have time for formal testing processes and procedures. Ad hoc testers must still understand how the software is intended to work and in which situations.

Ask developers, testers, project managers, end users, and other stakeholders these basic questions to help clarify the product’s undoubtedly complex tasks:

  • Why is the system being built?
  • What are the tasks to be performed?
  • Who are the end users of the system?
  • When must the system be delivered?
  • Where must the system be deployed?
  • How is the system being built?

Also, the risks of the system need to be identified. (See “Risk Based Testing – Targeting the Risk” for more on this) Correlate these risks against the time available to prioritize the test focuses.

With this information you are well on your way to being able to define an applicable strategy for your upcoming test effort.

User Scenarios

User Scenarios (sometimes called Use Cases) define a sequence of actions completed by a system or user that provides a recognizable result to the user. A user scenario is written in natural language, drawing its terms from a common glossary. The user scenario will have the basic or typical flow of events (the ‘must have’ functionality) and the alternate flows. Creating user scenarios/use cases can be kick-started by simply drawing a flowchart of the basic and alternate flows through the system. This exercise rapidly identifies the areas for testing and surfaces outstanding questions or design issues before you start.

Benefits of creating user scenarios:

  • Easy for the owner of the functionality to tell/draw the story about how it is supposed to work
  • System entities and user types are identified
  • Allows for easy review and ability to fill in the gaps or update as things change
  • Provides early ‘testing’ or validation of architecture, design, and working demos
  • Provides systematic step-by-step description of the system’s services
  • Easy to expand the steps into individual test cases as time permits

User scenarios quickly provide a clearer picture of what the customer is expecting the product to accomplish. Employing these use cases can reduce ambiguity and vagueness in the development process and can, in turn, be used to create very specific test cases to validate the functionality, boundaries, and error handling of a program.

Checklists

Are there common types of tasks that can be performed on the application? Checklists are useful tools to ensure test coverage of these common tasks. There may be a:

  • User Interface checklist
  • Error and Boundary checklist
  • Feature-specific checklist (eg: Searching)

Benefits of creating checklists:

  • Easy to maintain as things change
  • Easy to improve as time goes by
  • Captures the tests being performed in a central location

Checklists used in conjunction with User Scenarios make a powerful combination of light-weight test planning.

Matrices

A test matrix is used to track the execution of a series of tests over a number of configurations or versions of the application. Test matrices are ideal when there are multiple environments, configurations, or versions (builds) of the application.

Benefits of using test matrices:

  • Easy to maintain as priorities and functionality change
  • Simple to order the functional areas and the tests in each area by priority
  • Clear progress monitoring of the test effort
  • Easy to identify problem areas or environments as testing proceeds

Test matrices provide a clear picture of what you have done and how much you have left to do.
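A test matrix needs nothing more sophisticated than a nested table keyed by test and configuration. The tests, configurations, and statuses named here are examples only:

```python
# Rows are tests, columns are configurations; each cell tracks execution status
tests = ["login", "search", "checkout"]
configs = ["Win/Chrome", "Win/Firefox", "macOS/Safari"]
matrix = {t: {c: "not run" for c in configs} for t in tests}

# Record results as testing proceeds
matrix["login"]["Win/Chrome"] = "pass"
matrix["search"]["Win/Chrome"] = "fail"

def progress(matrix):
    """Fraction of cells that have been executed, for progress monitoring."""
    cells = [status for row in matrix.values() for status in row.values()]
    return sum(1 for s in cells if s != "not run") / len(cells)
```

Even in this minimal form, the matrix answers the two questions that matter mid-project: which test/configuration pairs are failing, and how much of the grid remains untouched.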

Summary

If you have minimal to no requirements there are still ways that effective testing can be achieved with a methodical approach. You can quickly outline a methodology for yourself that considers the basics of:

  • Describing the application in terms of intended purpose
  • Identifying the risks of the application
  • Identifying the functionality of the application with basic and alternate flows
  • Identifying and grouping common tests with checklists
  • Identifying how testing records will be traced
  • Revisiting and refining each of the above as the project and testing effort proceeds
Posted in All, Agile Testing, Requirements & Testing, Test Planning & Strategy | Comments Off on Testing Without Requirements

Create a Lightweight Testing Framework

Are you ready to take on the challenges of the new project? Are you knowledgeable of the latest in testing tools and techniques? What will be the ramification of not testing the product adequately? How will this impact your future business? Can you afford not to test?  You need a lightweight testing framework.

Exhaustive software testing standards, frameworks, and techniques promote the notion that a robust variety of testing techniques and structures will increase the likelihood that defects will be uncovered. But with tight project budgets and short timelines, the accompanying bureaucracy and documentation can greatly reduce the interest in formalized testing, structured or otherwise.

It is well understood that a higher-quality product demands a higher upfront price, and compromises between quality and cost are made every day. However, the best approach is not likely the most structured or complicated one. Rather, a sophisticated approach is required, one that maximizes the value of the resources available both within the organization and without. A good place to start developing this approach is to examine the fundamental skills that make great testers great, enabling them to draw upon their almost innate ability to find the crucial defects quickly.

From this examination it is very probable that you will generate ideas for:

  • Managing iterative development and test cycles
  • Creating reusable sets of tests and test data
  • Handling compressed delivery schedules
  • Testing newly refactored architectures
  • Standardizing project metrics
  • Migrating from simple to complex deployments
  • Responding to changing customer expectations

A few simple principles can provide a lightweight testing framework from which to grow and adapt the test approach. Keeping things simple will make it easy for the benefits and costs to drive further evolution. While there are continuously new development tools and programming languages, many testing requirements remain the same, and simply require additional emphasis within and by the test team.

  • Examples – having previous projects from which “best of breed” examples can be drawn for each type of process, document, or test technique is invaluable in giving the next project a giant jump start on how to approach the test effort. Asking “how did we do it last time, and what can we improve?” will drive improvements to your testing framework steadily and, with frequent project iterations, rapidly.

From these previous project examples you can begin to derive reusable tools for your framework:

  • Guidance Process – generic process frameworks and best practices that can be applied to most project types and be ingrained as habit as much as in any documentation.
  • Templates – light-weight documents focused on capturing the critical information and not on keeping resources busy with technical writing.
  • Checklists – lists of tests, lists of tasks, and matrices of test configurations that let you rapidly record and check off what has been done and see what is left to do.

Note: If you do not have your own history of previous examples there are many resources on the web where others share their experiences and advice such as at www.stickyminds.com

In one such example, James Bach of Satisfice.com provides a number of whitepapers and articles on Exploratory Testing, in which he describes a set of mnemonics and heuristics from his toolkit. One of these mnemonics is SFDPO, where the letters stand for Structure, Function, Data, Platform, and Operations.

  • Structure – what the product is
  • Function – what the product does
  • Data – what the product processes
  • Platform – what the product depends upon
  • Operations – how the product will be used

Using rules and checklists such as this allows you to quickly focus your test idea generation and ensure that you have systematically visited the major aspects of the product.
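The mnemonic can be kept at hand as a simple data structure that prompts test ideas for each aspect of a feature. The prompt questions below are paraphrases of the definitions above, not Bach's exact wording:

```python
# SFDPO aspects paired with paraphrased prompting questions
SFDPO = {
    "Structure": "What is the product made of (files, processes, modules)?",
    "Function": "What does the product do, visibly and behind the scenes?",
    "Data": "What does the product process, store, and output?",
    "Platform": "What does the product depend upon (OS, browser, services)?",
    "Operations": "How, where, and by whom will the product be used?",
}

def idea_prompts(feature):
    """Pair a feature under test with each SFDPO aspect's prompt question."""
    return [f"{feature} / {aspect}: {question}"
            for aspect, question in SFDPO.items()]

prompts = idea_prompts("report export")
```

Printed at the start of an exploratory session, the five prompts act as the “cognitive alarm clock” Bach describes: a nudge to visit each aspect, with the actual test ideas still coming from the tester.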

“SFDPO is not a template or a test plan, it’s just a way to bring important ideas into your conscious mind while you’re testing. It’s part of your intellectual toolkit. The key thing if you want to become an excellent and reliable exploratory tester is to begin collecting and creating an inventory of heuristics that work for you. Meanwhile, remember that there is no wisdom in heuristics. The wisdom is in you. Heuristics wake you up to ideas, like a sort of cognitive alarm clock, but can’t tell you for sure what the right course of action is here and now. That’s where skill and experience come in. Good testing is a subtle craft. You should have good tools for the job.” – James Bach, How Do You Spell Testing?

However, even with the best tools and techniques, a test team can’t create the kind of return on investment managers require as long as the test efforts don’t start early and don’t involve all appropriate stakeholders and participants. When developing your testing processes, look for those improvements where:

  • Errors are detected and corrected as early as possible in the software life cycle
  • Project risk, cost, and schedule effects are lessened
  • Software quality and reliability are enhanced
  • Management visibility into the software process is improved
  • Proposed changes and their consequences can be quickly assessed

“… when all the pieces come together – the right people, the right processes, the right time, the right techniques, the right focus – then we can achieve truly impressive returns on our testing investment. Significant reductions in post-release costs are ours for the taking with good testing. In cost of quality parlance, we invest in upfront costs of conformance (testing and quality assurance) to reduce the downstream costs of nonconformance (maintenance costs and other intangibles associated with field failures).” – Rex Black, Investing in Testing: Maximum ROI Through Pervasive Testing

Good luck crafting the lightweight testing framework that works best for you and your team.

Posted in All, Agile Testing, Automation & Tools, Test Planning & Strategy | Comments Off on Create a Lightweight Testing Framework

Performance Testing and the World Wide Web

Today’s client/server systems are expected to perform reliably under loads ranging from hundreds to thousands of simultaneous users. The fast-growing number of mission-critical applications (e-commerce, e-business, content management, etc.) accessible through the Internet makes web site performance an important factor for success in the market.

According to a broad statement in the white paper Web Performance Testing and Measurement: a complete approach, by G. Cassone, G. Elia, D. Gotta, F. Mola, A. Pinnola: “…a survey [in the US] has found that a user typically waits just 8 seconds for a page to download completely before leaving the site!”

Organizations need to perform repeatable load testing to determine the ultimate performance and potential limits of a system on an on-going basis. Poor performance can have direct negative consequences on the ability of a company to attract and retain its customers. Controlling performance of web site and back-end systems (where e-business transactions run) is a key factor for every on-line business.

“Performance Testing” is the name given to a number of non-functional tests carried out against an application. There are three main elements that often comprise what is called Performance Testing. These are:

  • Performance Testing – Concentrates on testing and measuring the efficiency of the system.
  • Load Testing – Simulates business use with multiple users in typical business scenarios, looking for weaknesses of design with respect to performance.
  • Stress Testing – Sets out to push the system to its limits so that potential problems can be detected before the system goes live.

A difference between performance and load testing is that performance testing generally provides benchmarking data for marketing purposes, whereas load testing provides data for the developers and system engineers to fine-tune the system and determine its scalability.

With load testing, you can simulate the load generated by hundreds or thousands of users on your application – without requiring the involvement of the end users or their equipment. You can easily repeat load tests with varied system configurations to determine the settings for optimum performance. Load testing is also particularly useful to identify areas of performance bottlenecks in high traffic web sites.
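At its core, load generation is just many concurrent clients plus timing. The sketch below shows the skeleton with Python's standard thread pool; the transaction is a stand-in local function (a sleep simulating server work), and the user count is arbitrary. A real load test would replace it with an HTTP call against the system under test, and commercial or open-source tools add ramp-up, think time, and reporting on top of this pattern:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one user request; replace with a real HTTP call."""
    start = time.perf_counter()
    time.sleep(0.01)          # simulated server work
    return time.perf_counter() - start

def run_load(simulated_users):
    """Run one transaction per simulated user concurrently, return timings."""
    with ThreadPoolExecutor(max_workers=simulated_users) as pool:
        return list(pool.map(lambda _: transaction(), range(simulated_users)))

timings = run_load(simulated_users=20)
```

Because the load is generated programmatically, the same script can be re-run against each candidate configuration, which is exactly the repeatability that ad hoc manual load exercises lack.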

Top five questions to ask yourself when considering load testing:

  • Do you experience problems with performance in production?
  • What is the cost of downtime, including monetary, person hours, opportunity cost, customer satisfaction, and reputation?
  • Does your application scale with an increase of users?
  • Do you have a method for obtaining real performance metrics?
  • How do you repeat/reproduce a performance problem?

Defining exactly what you want to get from this type of testing is fundamental. In a comprehensive approach, there are some major questions that have to be considered:

  • Who are your end users?
  • How can you monitor their experience with the system?
  • How can you translate these measurements into solutions?
  • What tools and methods can help?

With the answers to the above questions in hand, you can get started with:

  • Strategy and Planning
    • Define your specific performance objective.
    • Specify the types of users to generate the necessary load.
    • Define the scenarios that simulate the work and data flow.
    • Define how the scenarios will be measured and tested.
    • Define the repository for storing the data to be collected.
    • Plan your test environment.
    • Identify appropriate tools.
  • Development
    • Develop/customize test scripts which simulate your user’s behaviour.
    • Configure your test environment.
  • Execution
    • Execute your test scripts to simulate user load.
    • Monitor the system resources.
  • Result Analysis
    • Analyze and interpret the results.
    • Isolate and address issues.
    • Tune your implementation.
    • Plan for future marketing requests.
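The Result Analysis step typically reduces to summary statistics over the collected response times. This sketch computes the median and a simple nearest-rank 95th percentile over invented sample data; real runs would feed in the timings captured during execution:

```python
import math
import statistics

# Response times in seconds collected during a load run (sample data,
# including one outlier of the kind percentiles are meant to expose)
timings = [0.21, 0.25, 0.23, 0.30, 0.95, 0.27, 0.24, 0.26, 0.22, 0.28]

median = statistics.median(timings)

# Nearest-rank 95th percentile: the value at rank ceil(0.95 * n)
p95 = sorted(timings)[math.ceil(0.95 * len(timings)) - 1]
```

Reporting a high percentile alongside the median matters: here the median stays near a quarter of a second while the 95th percentile captures the outlier, the slow experience a real user would complain about.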
Posted in All, Automation & Tools, Test Planning & Strategy | Comments Off on Performance Testing and the World Wide Web

Continuous Quality Improvement and Outsourcing

The credit for developing the concept of Total Quality Management (TQM) is given to Dr. W. Edwards Deming, who was asked by the Japanese government in 1950 to come and assist with turning around the public perception of the poor quality of Japanese products. By following his principles of “total quality control”, continually measuring and correcting your progress toward customer service goals, in less than a generation “Made in Japan” became associated with quality products.

Sometimes Deming’s concepts are referred to as Continuous Quality Improvement (CQI) to reflect that improving quality is a continuous process following the never-ending cycle of “Plan-Do-Check-Act”.

Benefits of Quality

Having a high quality product translates into a large number of benefits to the organization such as:

  • Less time reworking the code and re-testing interim bug fixes and patches.
  • Effort for product updates can focus on new features rather than bug-fixing.
  • Low levels of technical support calls.
  • No refunds and recalls of the product.
  • Lower expense of supporting multiple versions of the product in the field.
  • Good publicity rather than bad.

With a product recognized as high quality, the number of reference accounts, total sales, and customer goodwill will all be much higher.

Costs of (Poor) Quality

In “Quality Cost Analysis: Benefits and Risks” (Software QA, Volume 3, #1, 1996), Cem Kaner defines quality costs as those costs associated with preventing, finding, and correcting defective work. He notes that these costs can be of the order of 20% – 40% of sales, and that many of these costs can be significantly reduced or completely avoided through the involvement of effective quality engineering.

In his article, Cem Kaner outlines four types of costs that when added together comprise the overall cost of the current quality of the application. The four types of costs are:

  • Prevention – Costs of activities that are specifically designed to prevent poor quality including coding errors, design errors, mistakes in the user manuals, as well as badly documented or unmaintainable code.
  • Appraisal – Costs of activities designed to find quality problems, such as code inspections and any type of testing.
  • Internal Failure – Failure costs that arise before your company supplies its product to the customer. If a bug blocks someone in your company from doing their job, the costs of the wasted time, the missed milestones, and the overtime to get back onto schedule are all internal failure costs.
  • External Failure – Failure costs that arise after your company supplies the product to the customer, such as customer service costs, or the cost of patching a released product and distributing the patch. External failure costs can be huge. It is much cheaper to fix problems before shipping the defective product to customers.

Total Cost of Quality = Prevention + Appraisal + Internal Failure + External Failure.
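Kaner's total is trivial to compute; the interesting part is the breakdown. The dollar figures below are invented purely to illustrate the typical pattern, in which failure costs (especially external ones) dominate the total:

```python
# Illustrative quarterly figures in dollars (invented sample data)
cost_of_quality = {
    "prevention": 40_000,        # training, style guides, design reviews
    "appraisal": 90_000,         # inspections and testing
    "internal_failure": 60_000,  # rework on defects caught before release
    "external_failure": 250_000, # support calls, patches, lost goodwill
}

total = sum(cost_of_quality.values())

# Share of the total spent reacting to failure rather than preventing it
failure_share = (cost_of_quality["internal_failure"]
                 + cost_of_quality["external_failure"]) / total
```

With these sample numbers, roughly 70% of the quality spend is failure cost, which is the argument for shifting investment toward Prevention and Appraisal made in the next section.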

How Testing Fits In

As discussed in “Test Throughout the Development Lifecycle” testing is much more than just finding bugs to squash. It is not an event, but a set of diverse activities capable of playing a critical role in identifying problems of varied types throughout the project lifecycle, far in advance of public access to the software.

Tracking the real costs of software failure such as patches, support, and rework can be difficult, but it is clear that effective testing can help optimize Cost of Quality. Performing reviews, thoughtful test planning, risk-based testing, and strategic automation on your software before it goes to market is a direct investment in the product.

An organization can reap significant returns through investment in Prevention and Appraisal activities – such as QA and Test.

Outsourcing as a Tool for Improving Quality

Put this question to your in-house team and contract service providers:

  • How are they helping you keep and satisfy your current customers and attract more?

Put these questions to yourself:

  • How would you know if your current Cost of Quality could be reduced by 20% through a handful of “Quick Wins”?
  • How would you know if you could get more quality out of your current project budgets?
  • How would you know if your current testing solution can do more for you?

In “Roadmap To Successful Outsourcing”, Wolfgang Strigel notes that outsourcing parts of software development and maintenance activities is becoming a competitive imperative and that according to research by the Gartner Group, 75% of all Information Technology (IT) companies will outsource parts of their IT efforts by 2003.

In his paper, Wolfgang Strigel also points out that outsourcing is a tool in an organization’s toolbox that allows one to:

  • Realize significant cost savings with respect to resources. Companies can take advantage of global resources to cut costs while retaining the permanent foundation of expertise in those capabilities that represent their competitive differentiation.
  • Use their core staff for strategic work while contracting out other activities. This improves company focus and increases margins by using specialty knowledge only when needed, and avoiding make-work projects for in-house staff who are suddenly not on the critical path.
  • Shorten delivery cycles and get quick reaction and greater flexibility to ramp resources up or down as project needs and market conditions demand.
  • Access the experts in a Just-In-Time fashion through a reliable pool of freelance or outsource contractors. Increasing complexity and specialization of new technologies make it difficult for companies to have in-house expertise in all areas.

Making Outsourcing Work

Building long-term relationships with a service provider does not mean anything as dramatic as a switch from the current solution to a new one. Serious vendors of contract services will want to participate in an evolution and growth of services that are backed-up with proof by performance while working within and improving the organization’s current project process framework.

Starting with a sample or “pilot project” of a controlled size and scope will allow you to measure the results and judge the potential value of the vendor for the future.

Posted in All, Planning for Quality | Comments Off on Continuous Quality Improvement and Outsourcing