Confidence’s Role in Software Testing

Confidence – “the feeling or belief that one can rely on someone or something; firm trust.” https://en.oxforddictionaries.com/definition/us/confidence

A few weeks ago I sat down to write about verifying bug fixes. I wanted to determine whether there was a systematic approach we could follow when performing this activity. While exploring this, I quickly realized the crucial role confidence plays in verifying or signing off on any software we test.

Confidence dictates how much testing we feel we need to execute before we can sign off on anything we test. Our current confidence in our development team directly impacts how much test time we take before we feel the software is ready for sign-off, and that level of confidence is driven by the historical quality coming out of the development team.

High Confidence – Just the right amount of testing is executed, ensuring the software can be signed off. (Note: This does not apply to mission-critical software systems.)

Low Confidence – Based on historically poor code quality, testers may over-test even when the current code quality is good.

I believe this confidence level has a significant impact on the speed at which we develop software. We might hear “QA is a bottleneck,” but this is potentially due to historically low-quality code causing testers to over-test even when good-quality code is being verified.

To illustrate this point further, see the approach below that I came up with to test and ultimately verify bug fixes.

Example: A Mobile App Which Requires Users to Login

Imagine we have a mobile app which requires users to login.

The fictitious bug we will be verifying is the following:

Title: Login Screen – App crashes after tapping login button.

Preconditions:

  • App is freshly installed.

Steps to Reproduce:

  1. Launch the app and then proceed to the login screen.
  2. Enter a valid existing email and password.
  3. Tap the “Login” button.

Result:

  • App crashes.

Before Verification Begins

Once a bug is marked fixed, it’s important that we gain more understanding about it before starting to verify the fix. To do this, we ask the following questions of the developer who implemented it:

  • What was the underlying issue?
  • What caused this issue?
  • How was the issue fixed?
  • What other areas of the software could be impacted by this change?
  • What file was changed?
  • How confident is the developer in the fix? Do they seem certain? Even this can somewhat impact how we test.

* Special Note: Remember, we need to gain context from the developer, but as a tester you’re not taking direction on exactly what to verify; that is your role as a tester. Of course, if a developer suggests testing something in a certain way you can, but it’s your responsibility as an experienced tester to use your own judgment when testing a fix.

Now that we have gained a full understanding of how the bug was fixed, let us start by verifying at the primary fault point (the exact steps listed in the original bug write-up). Below are high-level verification/test ideas, starting from very specific checks and working outwards like the layers of an onion. Notice that as we execute more tests and move away from the primary fault point, our confidence in the fix increases.

Test Pass 1

  • Exact Software State: Follow the exact “Preconditions”. In this case, “App is freshly installed”.
  • Exact Input: Follow the exact steps listed in the bug’s “Steps to Reproduce”.
  • Verify the app no longer crashes.
  • We could stop here, but we would not have full confidence that the bug is fully fixed or that the fix hasn’t introduced new knock-on bugs (a minimal automated sketch of this first check follows below).
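
To make this first pass concrete, here is a minimal sketch of how the exact-reproduction check could be automated. It assumes a hypothetical `app_test_harness` module exposing an `AppDriver` page-object wrapper; none of these names belong to a real framework, so adapt them to whatever UI driver (Appium, Espresso, XCUITest, etc.) your team actually uses.

```python
# Minimal sketch of Test Pass 1: exact preconditions, exact steps.
# AppDriver is a hypothetical page-object/driver wrapper around whatever
# UI automation framework the team actually uses.

import pytest

from app_test_harness import AppDriver  # hypothetical helper module

VALID_EMAIL = "existing.user@example.com"   # known-good test account
VALID_PASSWORD = "correct-password"


@pytest.fixture
def app():
    """Precondition: app is freshly installed before the test."""
    driver = AppDriver()
    driver.fresh_install()   # wipe and reinstall the app
    yield driver
    driver.quit()


def test_login_no_longer_crashes(app):
    # Steps to Reproduce, followed exactly as written in the bug report.
    login = app.open_login_screen()
    login.enter_email(VALID_EMAIL)
    login.enter_password(VALID_PASSWORD)
    login.tap_login()

    # Verify: the app is still alive after tapping Login.
    assert app.is_running(), "App crashed after tapping Login"
```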

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 2

  • Varied State: App is not freshly installed, but the user is logged out.
  • Exact Input: Follow the exact steps listed in the bug’s “Steps to Reproduce”.
  • Verify the app does not crash.

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 3

  • Varying State – After logging out/After restarting app and clearing app data.
  • Varying Input – Missing credentials/Invalid credentials
  • Verify no unexpected behavior

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 4

Test features/functions around login such as:

  • Forgot Password
  • Sign Up

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 5

Moving one final layer away from this fix, we enter a phase of testing which includes more outside-the-box tests such as the following. (Note: I love this type of testing as it’s very creative; a minimal sketch of one such test follows the list.)

  • Interruption testing – Placing app into the background directly after tapping the login button.
  • Network fluctuations – Altering connection while login is taking place.
  • Timing issues – Running around interacting with UI elements at an unnatural speed. Example – Rapidly tapping the login button then back button then login button.
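
As an example of the first idea in the list above, here is a minimal sketch of an interruption test. It reuses the same hypothetical `AppDriver` wrapper as the earlier sketch; `send_to_background` and `bring_to_foreground` are placeholders for whatever your real driver provides, so treat the whole thing as an illustration rather than a working implementation.

```python
# Sketch of an interruption test: background the app immediately after
# tapping Login, then confirm the session resumes without a crash.
# AppDriver and its methods are hypothetical placeholders.

from app_test_harness import AppDriver  # hypothetical helper module


def test_backgrounding_during_login_does_not_crash():
    app = AppDriver()
    app.fresh_install()
    try:
        login = app.open_login_screen()
        login.enter_email("existing.user@example.com")
        login.enter_password("correct-password")
        login.tap_login()

        app.send_to_background(seconds=5)   # the interruption under test
        app.bring_to_foreground()

        assert app.is_running(), "App crashed while resuming from background"
    finally:
        app.quit()
```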

At this point, our historical confidence plays a role in whether we continue to test or feel the bug is fixed. If QA’s confidence is low, we could end up spending too much time in this final test pass with little to show for our efforts.

How is Confidence Lowered?

  • Initial code quality signed off by development is low. As testers, when we begin testing a fix which has been signed off as ready for testing, we will often gauge its quality based on how quickly we discover a bug which needs fixing.
  • Repeated low-quality deliveries out of development can make testers test more heavily, and at the time that extra testing is necessary. But if bugs are routinely found very quickly in the software we test, we naturally become skittish about signing off future high-quality work.

This can lead to over-testing even when the code is delivered in a high-quality state. This over-testing won’t provide much of value. Don’t get me wrong, you will find bugs, but they might end up being more nice-to-know-about than must-fix issues. All software releases have bugs. It’s our job to identify high-value defects which threaten the quality of our solutions.

How Can We Boost Our Confidence?

I believe we can’t perform “just-right” testing unless our confidence in our development teams is reasonably high. We need to make sure baseline quality is established before any “just-right” manual testing can take place. How do we do this?

  1. Test automation is a perfect mechanism to establish a quality baseline. “Checking” to ensure all basic functions are working as expected.
  2. Shift left into the trench and work with developers as they are implementing a feature so you can ensure initial quality out of development is higher.
  3. Measure your testing efforts to ensure you’re not over-testing. Learn to recognize the sweet spot of just enough testing.
  4. Expose low quality areas – Retrospectives are ideal places to bring up quality issues with the larger team. Let them know you don’t have confidence and need something to change to boost it back up.
  5. Slow down – Oh no, we can’t do that, right? Yes, we can, and we should slow down if our confidence is low.

If you hear things like “QA is a bottleneck” in your organization, you might want to look at the code quality historically coming out of your development team. It’s possible your QA group is testing endlessly because it lacks confidence in the work coming from the development team. It can be difficult for QA to shift or stop testing given a negative track record and low confidence in their teams.

If your code quality is poor, QA’s confidence in Development will be low, and then QA will always be a bottleneck.

Think about it 🙂


Beyond the Agile Testing Quadrants

You can build the right product and you can build it right, and still not deliver value to the customer/user. For any number of reasons, they don’t adopt it easily, completely, or on time.

You can blame them. Luddites.

You can blame others. Microsoft.

You can blame yourself. Incompetent. Need a hairshirt.

Or you can evolve your testing to include user adoption and benefits realization. Ask questions aimed at finding user adoption issues and explore risks to benefits realization.

I remember reading a Twitter post from Joshua Kerievsky about usage data and how, once he and his team started using it, evolving a product without it seemed so inadequate. In my mind, this extends the agile testing quadrants to finding adoption issues in addition to helping discover the right product.

A/B testing is another example: using real customer behaviour to guide the evolution of the product. Imagine what a tester – with their skills tuned towards asking the important questions at the right time – could bring to this field. I think it’s filled with opportunity.

In the enterprise domain (where I spend all my time) the agile testing quadrants are not enough. You can follow them bravely and assuredly and still not deliver value because of:

  • an inadequate end-user communications plan
  • an inadequate end-user training program, or having a gap between training and when the end-users need to start using the solution
  • using the same testers in every test cycle or continuously, making them more expert and less skilled in finding adoption issues
  • not translating pre-release performance testing to post-release performance monitoring

So, because I love the agile testing quadrants for getting the right product built the right way, I thought I would extend the idea to include supporting adoption and supporting benefits realization.

Beyond Agile Testing Quadrants

Q1: Discover barriers to solution adoption by the support/sustainment/operations group(s). For example, assess the communications and training for support/sustainment personnel.

Q2: Discover barriers to solution adoption by end-users/customers. For example, assess the communications and training for end-users/customers, use A/B testing to determine/optimize features/interactions the customer cares about.

Q3: Discover risks to benefits realization that originate with the end-users/customers using the solution. For example, monitor the benefits to see if there is progress towards their realization.

Q4: Discover risks to benefits realization that originate in support/sustainment/operations. For example, monitor the solution performance on an on-going basis so that performance bottlenecks don’t become barriers to effective usage that would in turn prevent the planned benefits from being realized (already a common practice).

It’s a start. I almost put the agile testing quadrants themselves in Q2 and I might yet put them back there. For now, I’m hoping this conveys the way that testers need to start thinking:

  • write a test strategy that lasts forever, not merely to the end of the project and/or release; deployment is not done
  • create checklists that the solution delivery team can evolve forever, not just to get the solution shipped/in production but to continually increase the likelihood of user adoption and benefits realization; deployment is not done
  • explore barriers to solution adoption; expand the list of test targets to include end-user communications and training; deployment is not done
  • explore risks and barriers to benefits being realized; deployment is not done
  • do this continually; deployment still isn’t done

Maybe you detected the theme that deployment is not done in the above points.

You can’t afford to think like this if there are product defects that decrease the likelihood of user adoption. You’ll be too busy writing bug reports. So yeah, this is complementary to agile testing (and the agile testing quadrants concept). These new quadrants are the next step, at least in the enterprise.


Accessibility Testing: Four Tips for Doing It Right

If you are feeling a little overwhelmed by the extra effort involved in delivering accessible software, don’t be dismayed. Here are some helpful tips to keep in mind.

1. Embed Accessibility Testing

The purpose of the first round of guideline verification is to document defects and create a backlog of issues that need to be addressed. By embedding the accessibility testers in the project team, you will have the benefit of seeing the burndown of their work on a daily basis, and you’ll get that information to the team in the most efficient way possible. The quicker the information flow, the more time to resolve the issues.

2. A Bug Is a Bug

The defects that come as a result of the guideline verification should be triaged the same way as all other issues your team encounters. There is a tendency to treat accessibility issues differently, but resist the urge—a bug is a bug.

If there is a sound reason to separate them for reporting purposes, and if you have the ability to configure your defect management tool, create a category titled Accessibility and include an option to designate the severity, which could be correlated with the impact on Level A, AA, or AAA compliance.
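
If you do configure your tool this way and want severity to follow consistently from the conformance level a defect threatens, a small helper can keep that mapping honest. The sketch below is an illustration only: the WCAG levels are real, but the severity labels and the `AccessibilityDefect` structure are invented and should be adapted to your own tracker.

```python
# Sketch: derive a defect's severity from the WCAG conformance level it
# threatens. Level names follow WCAG 2.x; severity labels are illustrative
# and should match whatever scale your defect tracker already uses.

from dataclasses import dataclass

SEVERITY_BY_WCAG_LEVEL = {
    "A": "Critical",     # blocks baseline (Level A) conformance
    "AA": "Major",       # blocks the commonly targeted Level AA
    "AAA": "Minor",      # nice to have unless AAA is your stated target
}


@dataclass
class AccessibilityDefect:
    summary: str
    wcag_criterion: str    # e.g. "1.1.1 Non-text Content"
    wcag_level: str        # "A", "AA", or "AAA"

    @property
    def severity(self) -> str:
        return SEVERITY_BY_WCAG_LEVEL.get(self.wcag_level, "Minor")


# Example usage
bug = AccessibilityDefect(
    summary="Product images missing alt text",
    wcag_criterion="1.1.1 Non-text Content",
    wcag_level="A",
)
print(bug.severity)  # -> "Critical"
```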

3. Managing Defects

All defects should have a priority classification. If an accessibility defect is not serious enough to affect your level of conformance, fixing it can wait.

Depending on how many accessibility defects are reported during guideline verification, your product owner may want the ability to run a separate sprint to focus on accessibility. If the accessibility defects are prolific, consider handling them the same way your organization handles technical debt.

Once your teams understand that conformant code is required and how to implement coding practices that support accessibility, consider including the verification as part of your “done” definition.

4. The Accessibility Statement

The best way to tell your users you have incorporated accessibility features is an accessibility statement. The statement exists not just to tell users at which level of conformity your site has been verified, but also to let them know that you’re committed to providing a great experience for all users.

During initial verification, your product may not conform to its intended level. The accessibility statement also allows you to be transparent with what you’re doing to address known defects.

You might find the World Wide Web Consortium’s Web Content Accessibility Guidelines and this accessibility statement generator site helpful as you prepare your own accessibility statement. Keep in mind that it should include:

  • The level of conformity to which it was tested (Level A, AA, AAA, or other)
  • The level of conformity to which it complies
  • The exceptions (defects) preventing it from conforming to its intended level
  • Contact information or steps to report accessibility issues

These tips will allow for an efficient and long-term accessibility testing initiative and result in a happy experience for all users.



Better Test Reporting – Data-Driven Storytelling

Testers have a lot of project “health” data at their fingertips – data collected from others in order to perform testing and data generated by testing itself. And sometimes test reporting gets stuck on simply communicating this data, these facts. But if we simply report the facts without an accompanying story to give them context and meaning, there is no insight – the insight needed to make decisions.

Better Test Reporting - Data to Information to Insight

With all the data we have close to hand, testing is in a great position to integrate data-driven storytelling into the various mediums of our test reporting.

“Stories package information into a structure that is easily remembered which is important in many collaborative scenarios when an analyst is not the same person as the one who makes decisions, or simply needs to share information with peers.” – Jim Stikeleather, The Three Elements of Successful Data Visualizations

“No matter how impressive your analysis is, or how high-quality your data are, you’re not going to compel change unless the stakeholders for your work understand what you have done. That may require a visual story or a narrative one, but it does require a story.” – Tom Davenport, Why Data Storytelling Is So Important—And Why We’re So Bad At It

This enhanced reporting would better support the stakeholders with relevant, curated information that they need to make the decisions necessary for the success of the project, and the business as a whole.

Not Your Typical Test Report…Please!

When thinking of test reporting, perhaps we think of a weekly status report or of a real-time project dashboard?

Often, these types of reporting tend to emphasize tables of numbers and simple charts and rarely contain any contextual story. For example: time to do the test report? Let me run a few queries on the bug database and update a list/table/graph, or two.

We need to thoughtfully consider:

  • What information should our test reporting include?
  • What questions should it really be answering?
  • What message is it supposed to be delivering?

If we answered the following questions with just data, would we gain any real insights?

Question                               Data Provided
Is testing progressing as expected?    # of test cases written
Do we have good quality?               # of open bugs
Are we ready for release?              # of test cases run

Obviously, these answers are far too limited, and that is the point. Any single fact, or collection of standalone facts, will be typically insufficient to let us reasonably make a decision that has the true success of the project at heart. [Ref: Metrics – Thinking In N-Dimensions]

To find connections and enable insights, first think about what audience(s) we could support with our data in terms of these broad core questions:

  • How are we doing? (Status)
  • What has gone wrong? (Issues)
  • What could go wrong? (Risks)
  • How can we improve?

Then we tailor our data-driven storytelling with a message for each audience to facilitate insight that will be specifically of value to them.

Test Reporting: Data vs. Information

An important distinction to make when thinking about increasing the value of test reporting is the difference between data and information:

  • Data: Data can be defined as a representation of facts, concepts or instructions in a formalized manner which should be suitable for communication, interpretation, or processing by human or electronic machine.
  • Information: Information is organised or classified data which has some meaningful values for the receiver. Information is the processed data on which decisions and actions are based.

Computer – Data and Information, Tutorials Point

Data is not information – yet. Data provides the building blocks we construct information from. When we transform data, through analysis and interpretation, into information that is consumable by the target audience, we dramatically increase the usefulness of that data.

For example:

“Here is real-time satellite imagery of cloud cover for our province…”
“Look at all those clouds coming!”
versus…
“This is a prediction that our city will get heavy snowfall starting at about 8:30pm tomorrow night…”
“We better go buy groceries and a snow shovel!”

Or in the case of testing:

“Here is a listing of all the bugs found by module with the date found and a link to the associated release notes…”
“That is a lot of bugs!”
versus…
“This analysis seems to show that each time Module Y was modified as part of a release the bug count tended to spike…”
“Let’s have someone look into that!”
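
As a rough illustration of how raw bug data becomes a statement like that last one, the sketch below aggregates per-release bug counts by module and flags modules whose count spikes in releases where they were modified. The data, module names, and spike threshold are all invented for the example; in practice the records would come from your defect tracker.

```python
# Sketch: turn raw bug counts into a trend statement. Data is invented;
# in practice it would come from your defect tracker's API or an export.

from collections import defaultdict

# (release, module, bugs_found, module_was_modified_in_release)
raw_data = [
    ("R1", "Module Y", 4, True),
    ("R2", "Module Y", 1, False),
    ("R3", "Module Y", 6, True),
    ("R1", "Module Z", 2, True),
    ("R2", "Module Z", 2, True),
    ("R3", "Module Z", 3, False),
]

SPIKE_FACTOR = 1.5  # "spike" = at least 1.5x the module's average count

counts = defaultdict(list)
for release, module, bugs, modified in raw_data:
    counts[module].append((release, bugs, modified))

for module, history in counts.items():
    average = sum(bugs for _, bugs, _ in history) / len(history)
    spikes = [(release, bugs) for release, bugs, modified in history
              if modified and bugs >= SPIKE_FACTOR * average]
    if spikes:
        releases = ", ".join(release for release, _ in spikes)
        print(f"{module}: bug count spiked in {releases} "
              f"when the module was modified (avg {average:.1f}/release)")
```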

Through consumable information, we can help provide the opportunity for insights, but information is not insight itself. The audience has to “see” the insight within the information. We can only try to present the information (via whatever mediums) in a way we hope will encourage these realizations, for ourselves and others.

From Data to Decision

Once data is analyzed for trends, correlations with other data, and so on, plans, choices, and decisions can be made with this information.

The following illustrates the path data takes to informing decisions:

Better Test Reporting - Data Path to Decision-Making

Figure 1: Test Reporting Data Path to Decisions

What data we are collecting, and why, should be firmly thought out. And then, don’t just report the numbers. Look at each testing activity and see how it can generate information that is useful and practical as input to the decisions that need to be made throughout the project.

  1. Data: <we’ll come back to this>
  2. Consumable Information: Testing takes the collected data and analyzes it for trends, correlations, etc. and reports it in a consumable manner to the target audience(s).
  3. Proposed Options: The data-driven story provided is then used to produce recommendations, options, and/or next steps for consideration by stakeholders.
  4. Discuss & Challenge: The proposed options are circulated to the stakeholders and through review and discussion, plans can be challenged and negotiated.
  5. Feedback Loop: These discussions and challenges will likely lead to questions and the need for clarifications and additional context, which can then send the process back to the datastore.
  6. Decisions Made: Once agreements are reached and the plans have been finalized, decisions have been made.

Of course, testing is not the sole party involved in driving this process. Testing’s specific involvement could stop at any step. However, instead of always stopping at step one with 1-dimensional test reporting, testing could make use of the data collected to move further along the path and to tell a more meaning-filled multi-dimensional story to a more diverse audience of stakeholders, more often.

Better Data – Better Decisions

In this way, the function of test reporting can help the project much more than it does when just reporting “there are 7 severe bugs still open”.

This is because our choices typically are not binary. We do not decide:

  • Do we fix all the bugs we find?
  • Do we find bugs or prevent bugs?
  • Do we automate all the testing?
  • Do we write a unit test for everything?

We decide to what degree we will do an activity. We decide how much we should be investing in a given activity, practice, or tool.

This is where the first item in the numbered path above, data, comes in. Data lets us find out what trade-offs with other project investments we will have to make to gain new benefits. Data is the raw material that leads to insight.

So, in order to have “better test reporting” we need to make sure that we know what we need insight about, collect the supporting data accordingly, report the data-driven story, and then follow the path to better decision-making.

Better Data
Better Information
Better Decisions



Augmenting Testing in your Agile Team: A Success Story

One of the facts of life about Agile is that remote resources, when you have a mostly collocated team, generally end up feeling a little left out in the cold.  Yet, with appropriately leveraged tools, sufficient facilitation, management support and strong team buy-in, it can end up being a very successful arrangement.

Augmenting Testing in your Agile Team: A team with remote contributors

Figure 1: A team with remote contributors

There is an implementation model that lends itself more naturally to adding testing resources, or a testing team, to your delivery life cycle.  Rather than embedding your resources, you can find ways to work with the teams in parallel, augmenting their capabilities and efforts in order to achieve greater success.   In this article, we’ll look at a particular case where PQA Testing implemented an augmenting strategy to tackle regression and System Integration Testing (SIT).

Recently we were working with a company that delivers a complex product in retail management to assorted third party vendors.  Features were created, tested and marked ready for release by functionally targeted Agile teams.  Coming out of a sprint wasn’t the last step before a feature was released, however.  Due to the complexity of the product, environments, other systems controlled directly by the third party vendors and other systems controlled indirectly through their third party vendors, System Integration Test (SIT) cycles and User Acceptance Test (UAT) cycles were necessary.

The original intent, when our client went Agile, was to be able to continue to support these relationships through the Agile teams.  What soon became evident was that the amount of regression testing in the SIT environments required for the new features was overwhelming to the testing resources dedicated to a feature team.

Augmenting Testing in your Agile Team: A mixed team with internal and external resources

Figure 2: A mixed team with internal and external resources

Additionally, as multiple environments and numerous stakeholders from various vendors with their own environments were introduced, simple communication, coordination of environments and testing became much more complex and time consuming.  Defects that were found in SIT testing needed to be triaged and coordinated with the other issues created from other vendors, and then tracked as they moved their way through the different teams and vendors to their solution.

As the testing resources on each team focused more on their functional area, their knowledge became more and more specialized and they were no longer the “go-to” resource for questions that might span the entire domain. With this specialization, testers were no longer collecting as much domain knowledge. Additionally, while automation was an integrated part of the company’s solution, test automators were also embedded in the Agile teams.  This changed the focus of automation; it slowly drifted away from providing benefits at the end-to-end integration testing level.

When we began the engagement with this client, they were succeeding from release-to-release, but not at optimum levels of quality, or to vendor satisfaction.   They were borrowing resources from multiple Agile teams and sometimes breaking sprints to ensure that the release could get through the SIT cycle within the specified time frame.  As we do on every PQA Testing engagement, we began by learning the existing process, how the software worked, and about the entire domain.  Before long, we took over regression testing for the releases.  Our focus then became to make sure that the existing functionality remained stable and clean, and that the new features integrated into the system well.

The testing team is now a separate team that is semi-integrated with the existing teams.  We transition knowledge back and forth, but there is a distinction in responsibilities between new features and regression and SIT testing.   As we began taking over these testing responsibilities, we also began to take over communication and facilitation between the core vendor and our client for release and testing.  An automation resource is also able to work through the tests from the big-picture integration perspective, and is reducing the amount of manual testing that is necessary.  Increasing our documented domain knowledge is making it easier to scale the team as necessary during busy times and releases.

Augmenting Testing in your Agile Team: An internal team augmented with a remote team

Figure 3: An internal team augmented with a remote team

Taking over these responsibilities with a dedicated team has greatly improved the feedback coming from the vendors.  The Agile teams have more focus on their core deliverables.  Integrating remotely with the client’s teams has worked well because we don’t have to constantly interact face-to-face to show value in our work.  We are simply another team trying to move the ball forward for the company, just like everyone else.

Remote testing teams dedicated to ownership of specific testing functions can remove many of the obstacles of testing remotely in an Agile environment and, in this case, better ensure quality for the end user.


8 Test Automation Tips for Project Managers

Software testing has always faced large volumes of work and short timeframes. To get the most value for your testing dollars, test automation is typically a critical component. However, many teams have attempted to add test automation to their projects with mixed results.

To help increase the likelihood of success, approach automation from the practical perspective that automating testing effectively is not easy.

Here are 8 test automation tips for project managers.

1. Decide Your Test Automation Objectives Early

Automation is a method of testing, not a type. Therefore automation should be applied to those tests from the overall test plan where there is a clear benefit to do so. Before starting, ensure that the benefits of test automation match with your objectives. For example, do you want to:

  • Discover defects earlier?
  • Increase test availability (rapid and unattended)?
  • Extend test capability and coverage?
  • Free-up manual testers?

2. Carefully Select your Test Automation Tools / Languages

There are many options and possible combinations of tools and scripting languages. Take some time to review the options and find the best fit for your project: confirm the technology fits with your project, look for a skill requirement match with your team, check that you can integrate with your test management and defect tracking tools, etc. Then try before you buy, eg: perform a proof of concept, perhaps using your smoke tests.

3. Control Scope and Manage Expectations

When starting a new test automation effort, there is often the tendency to jump in and immediately start automating test cases. To avoid this pitfall, it is important to treat the automation effort as a real project in and of itself.

  • Derive requirements from the objectives
  • Ensure the scope is achievable
  • Define an implementation plan (linked to milestones of the actual project)
  • Secure resources and infrastructure
  • Track it

Not only will this help ensure the success of the effort, but it will allow you to communicate with other stakeholders what will be automated, how long it will take, and the short and long-term benefits that are expected.

4. Use an Agile Approach

Following an Agile approach, you can roll-out your test automation rapidly in useful pieces; making progress visible and benefits accessible as early as possible. This will give you the ability to validate your approaches while demonstrating the value of the test automation in a tight feedback cycle.

5. Scripts are Software

You are writing code. The same good practices that you follow on the actual project should be followed here: coding standards, version control, modular data-driven architecture, error handling and recovery, etc. And, like any other code, it needs to be reviewed and tested.
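
As one illustration of what treating scripts as software can look like, the sketch below keeps test data separate from test logic, uses a reusable action, and handles errors so one failing case doesn’t stop the run. The `submit_order` function and the data are invented stand-ins for real action modules and an external data file.

```python
# Sketch of a modular, data-driven check: test data lives apart from the
# test logic, and each case is reported even if an earlier one fails.
# submit_order() stands in for a reusable action module in a real suite.

import csv
import io
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# In a real project this would be an external CSV/spreadsheet maintained
# alongside the test cases; inlined here so the sketch is self-contained.
TEST_DATA = io.StringIO("""case_id,quantity,expected_status
TC-01,1,accepted
TC-02,0,rejected
TC-03,-5,rejected
""")


def submit_order(quantity: int) -> str:
    """Placeholder for the real system call under test."""
    return "accepted" if quantity > 0 else "rejected"


def run_suite(data_source) -> bool:
    all_passed = True
    for row in csv.DictReader(data_source):
        try:
            actual = submit_order(int(row["quantity"]))
            if actual == row["expected_status"]:
                logging.info("%s passed", row["case_id"])
            else:
                all_passed = False
                logging.error("%s failed: expected %s, got %s",
                              row["case_id"], row["expected_status"], actual)
        except Exception:
            all_passed = False
            logging.exception("%s errored; continuing with next case",
                              row["case_id"])
    return all_passed


if __name__ == "__main__":
    run_suite(TEST_DATA)
```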

6. Use Well Designed Test Cases and Test Data

Garbage in, garbage out. Make sure you have a set of test cases that have been carefully selected to best address your objectives. It is important to design these test cases using reusable modules or building-blocks that can be leveraged across the various scenarios. Additionally, these test cases should be documented in a standardized way to make them easier to add to the automated test suite. This is especially important if you envision using non-technical testers or business users to add tests to the repository, using a keyword driven or similar approach to your automation.
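
To show what a keyword-driven approach can look like at its simplest, here is a sketch where the test steps are plain data rows that a non-technical contributor could maintain, and the framework maps each keyword to a function. The keywords and steps are invented for illustration; real keyword-driven frameworks (Robot Framework, for example) provide this machinery for you.

```python
# Minimal keyword-driven sketch: test steps are plain data; the framework
# maps each keyword to a function. Keywords and steps here are invented.

def open_login_screen():
    print("opening login screen")


def enter_credentials(email, password):
    print(f"entering credentials for {email}")


def verify_home_screen():
    print("verifying home screen is displayed")


KEYWORDS = {
    "Open Login Screen": open_login_screen,
    "Enter Credentials": enter_credentials,
    "Verify Home Screen": verify_home_screen,
}

# A business user could maintain this table in a spreadsheet.
TEST_STEPS = [
    ("Open Login Screen",),
    ("Enter Credentials", "existing.user@example.com", "correct-password"),
    ("Verify Home Screen",),
]


def run(steps):
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)


run(TEST_STEPS)
```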

7. Get the Test Results

Providing test results and defect reports quickly is the most important reason for test automation. Each time you need to run the automated tests, you are reaping the benefits that automation provides. For example, running the test automation in its own environment as part of the continuous integration process will detect any issues related to the automated test cases for the application under test as soon as features and fixes are checked in.

8. Maintain and Enhance

Investing in automation requires a significant commitment in the short-term and the long-term for there to be maximum success. For as long as the product that is being automated is maintained and enhanced, the automation suite should be similarly maintained and enhanced. If the test automation solution is well-designed and kept up-to-date with a set of useful tests, it will provide value for years.


Software Testing Guiding Principles

All effective test teams typically have well-defined processes, appropriate tools, and resources with a variety of skills. However, teams cannot be successful if they place 100% dependency on the documented processes, as doing so leads to conflicts, especially when testers use these processes as ‘shields’ or ‘crutches’.

To be successful, test teams need to leverage their processes as tools towards becoming “IT” teams. And by “IT” I do not mean Information Technology.

IT (Intelligent Testing) teams apply guiding
principles to ensure that the most cost effective
test solution is provided at all times

This posting provides a look into the “guiding principles” I’ve found useful in helping the testers I’ve worked with become highly effective and valued as part of a product development organization.

Attitude is Everything

The success you experience as a tester depends 100% on your attitude.

A non-collaborative attitude will lead to
conflict, limit the success of the test team and
ultimately undermine the success of the
entire organization.

Testers must:

  • Learn to recognize challenges being faced by the team and to work collaboratively to solve problems
  • As stated by Stephen Covey – “Think Win-Win”
  • Lead by example and inspire others. A collaborative attitude will pay dividends and improve the working relationship for the entire organization, especially when the team is stressed and under pressure.

Quality is Job # 1

This one borrowed from Ford Motor Company.

Testing, also known as Quality Control, exists to implement an organization’s Quality Assurance Program. As such, testers are seen as the “last line of defense” and play a vital role in the success of the business.

Poor quality leads to unhappy customers and eventually the loss of those customers, which then adversely impacts business revenue.

Testers are ultimately focused on ensuring the
positive experience of the customer using the
product or service.

Communication is King

Testers should strive to be superior communicators, as ineffective communication leads to confusion and reflects poorly on the entire team.

The test team will be judged by the quality of their work, which comes in the form of:

  • Test Plans
  • Test Cases
  • Defect Reports
  • Status Reports
  • Emails
  • Presentations

Learn how to communicate clearly, concisely
and completely.

Know Your Customer

Like it or not, testing is ‘service-based’ and delivers services related to the organization’s Quality Assurance Program. For example: test planning, preparation, and execution services on behalf of an R&D team (i.e. an internal customer).

Understanding the needs and priorities of the
internal customer will help to ensure a positive
and successful test engagement.

Test Engineering also represents the external customer (i.e. user of the product / service being developed). Understanding the external customer will help to improve the quality of the testing and, ultimately, quality of the product.

Without understanding the external customer
it is not possible to effectively plan and implement
a cost effective testing program.

Ambiguity is Our Enemy

This basically means “Never Assume”: clarify whenever there is uncertainty.

Making assumptions about how a product’s features / functionality work, or about schedules, etc., will lead to a variety of issues:

  • Missed expectations
  • Test escapes – Customer Reported Defects
  • Poor reflection on the professionalism of the Test Engineering team

Testers must avoid ambiguity in the documentation that they create so as to not confuse others.

Data! Data! Data!

Test teams ‘live and breathe’ data. They consume data and they create data.

Data provided from other teams is used to make intelligent decisions:

  • Requirements
  • Specifications
  • Schemas
  • Schedules
  • Etc

Data generated by the test program is used to assist with making decisions on the quality of the product:

  • Requirements coverage
  • Testing progress
  • Defect status
  • Defect arrival / closure rates

The fidelity and timeliness of the data collected
is critical to the success of the entire
organization.
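
As a small example of turning the test program’s own data into decision support, the sketch below computes weekly defect arrival and closure counts, one ingredient of the arrival / closure rates listed above. The defect records and dates are invented; in practice they would be pulled from your defect tracking tool.

```python
# Sketch: weekly defect arrival vs. closure counts.
# Records are invented; real data would come from the defect tracker.

from collections import Counter
from datetime import date

# (defect_id, opened, closed) -- closed is None if the defect is still open
defects = [
    ("D-101", date(2024, 3, 4), date(2024, 3, 6)),
    ("D-102", date(2024, 3, 5), None),
    ("D-103", date(2024, 3, 12), date(2024, 3, 20)),
    ("D-104", date(2024, 3, 13), date(2024, 3, 14)),
]

# ISO week numbers: isocalendar() returns (year, week, weekday)
arrivals = Counter(opened.isocalendar()[1] for _, opened, _ in defects)
closures = Counter(closed.isocalendar()[1] for _, _, closed in defects if closed)

for week in sorted(set(arrivals) | set(closures)):
    print(f"Week {week}: opened {arrivals.get(week, 0)}, "
          f"closed {closures.get(week, 0)}")
```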

Trust Facts – Question Assumptions

Related to the principle of avoiding ambiguity, test teams must never make assumptions, as doing so can have a significant impact on the entire business.

Testers must:

  • Work with the cross-functional team to address issues with requirements, user stories, etc
  • Clarify schedules / expectations when in doubt
  • Leverage test documentation (e.g. Test Plan) to articulate and set expectations with respect to the test program
  • Track / manage outstanding issues until they are resolved

Be as ‘surgical’ as necessary to ensure quality
issues are not propagated to later phases of
the product life-cycle

Think Innovation

Regardless of the role you play, every member of the test team can make a difference.

  • Improvement ideas should be socialized, shared and investigated
  • Small changes can make a huge difference to the team and the organization

Innovations that can benefit the Test or Quality Assurance Program are always welcome.

  • Tweaks to processes, templates, workflows
  • Enhancements to tools
  • Advancements in automation techniques, tools, etc

Remember, the team is always looking for ways to increase effectiveness and make the most out of the limited Test Engineering budget.

Strive to be “Solution Oriented”

Process for Structure – Not Restrictions

Some will say, “What do you mean, processes do not restrict?” On the surface, it may appear as if process does in fact restrict the team; however, if you dig deeper you will discover that documented processes help by:

  • Improving communications through establishing consistency between deliverables and interactions between teams
  • Making it clear to all ‘stakeholders’ what to expect at any given point of time in the product life-cycle
  • Providing tools that can be used to train new members of the team

Documented processes are not intended to limit
creativity. If the process is not working –
Change the Process

  • Augment existing templates if it will enhance the value of the testing program; however, be sure to follow appropriate Change Management processes when introducing an update that may impact large numbers of people.
  • Document and obtain approvals for deviations/exceptions if the value of completing certain aspects of the process has been assessed as non-essential for a program / project.

Plan Wisely

A well thought out and documented plan is worth its weight in gold. The documented plan is the primary tool used to set expectations by all the stakeholders.

“If you fail to plan you plan to fail”

Plan as if the money you are spending is your own. There is a limited budget for testing and it is your responsibility to ensure the effectiveness of the Test Program such that it provides the highest ROI (Return on Investment).

Identify Priorities

Make “First Things First” (Stephen Covey)

Unless you are absolutely clear on the priorities, it will not be possible to effectively plan and / or execute a successful Test Program.

It is not possible for an individual, or team, to have two number one priorities.  Although it is possible to make progress on multiple initiatives, it is not possible for an individual to complete multiple initiatives at the exact same time. Schedules, milestones, capacity plans, etc. should all reflect the priorities.

Always ensure priorities are in alignment with
the expectations of all stakeholders

At the end of the day the most important Software Test Principle is “If you do not know – ASK”. Testers are expected to ask questions until they are confident that they have the information needed to effectively plan, prepare and execute an effective Test Program.

Just remember, unanswered questions contribute to ambiguity and add risk to the business.


Testing COTS Systems? Make Evaluation Count

Over the years, I have been involved in a number of projects testing COTS (Commercial-Off-The-Shelf) systems across a range of industries. Sometimes the project was with the vendor and sometimes with the customer. When it came to supporting a company undertaking a COTS system implementation, I always appreciated the benefits that came with a “quality” evaluation.

When such an evaluation is conducted in a thoughtful manner, a lot of ramp-up, preparation, AND testing can be shifted to the left (Ref: New Project? Shift-Left to Start Right!) making the overall selection process that much more likely to find the “best-fit” COTS system.

Implementing COTS Systems Is Costly; Mitigate Your Risks

COTS systems are a common consideration for most enterprise organizations when planning their IT strategy around ERP, CMS, CRM, HRIS, BI, etc. Rarely will an organization build such a substantial software system from scratch if there is a viable alternative.

However, unlike software products that we can just install and start using right out-of-the-box, these COTS systems must typically undergo configuration, customization and/or extension before they will meet the full business needs of the end-user. This can get expensive.

As such, implementation necessarily requires a strong business case to justify the level of investment involved. Anything that impairs the selection and implementation of the best-fit COTS system will put that business case at risk.

Earlier involvement of testing can be key to mitigating risk to the business case with respect to the following challenges.

A COTS System is a Very Dark “Black Box”

Having to treat an application as complex as the typical COTS system like a black box is a significant challenge.

When we conduct black box testing for a system that we have built in-house, we have requirements, insights to the architecture and design, and access to the developers’ knowledge of their code. We can get input as to what are the risky areas, and where there is tighter coupling or business logic complexity. We can even ask for testability improvements.

When we are testing COTS systems, we don’t have any of that. The only requirements are the user manuals, the insights come from tidbits gleaned from the vendor and their trainers, and we don’t have access to the developers or even experienced users. It is a much darker black box that conceals significant risk.

Fig 1: Testing COTS Systems – A Black Box in the Application Ecosystem

Additionally, not all the testing can be done by manually poking around in the GUI. Testing COTS systems involves a great amount of testing how the COTS system communicates with other systems and data sources via its interfaces.

Also, consider the data required. As Virginia Reynolds comments in Managing COTS Test Efforts, In Three Parts, when testing COTS systems “it’s all-data, all the time.” In addition to using data as part of functional and non-functional testing, specific testing of data migration, flow, integrity, and security is critical.

Leaving the majority of testing such a system until late in the implementation process and, possibly, primarily as part of user acceptance by business users, will be very risky to the organization.

Claims Should Be Verified

When we create a piece of software in-house or even if we contract another party to write it for us, we control the code. We can change it, update it, and contract a different 3rd party to extend it if and when we feel like it. With COTS systems, the vendor owns the code and they are always actively working on it. They are continually upgrading and enhancing the software.

As we know from our own testing efforts, there isn’t time to test everything, or to fix everything. That means, the vendor will have made choices and trade-offs with respect to the features and the quality of the system they are selling to us, and all their customers.

Of course, it is reasonable to expect that the vendor will test their core functionality, or the “vanilla” configuration of their system. They would not remain in business long if they did not. But, to depend on the assumption that what the vendor considers to be “quality” is the same as what we consider to be “quality”, is asking for trouble.

“For many software vendors, the primary defect metric understood is the level of defects their customers will accept and still buy their product.” Randall Rice, Testing COTS-Based Applications

Even if we trust the vendor and their claims, remember they are not testing in our specific context, eg: meeting our functional and quality requirements when the COTS system is configured to our specific business processes and integrated with our application ecosystem. (Ref: To Test or Not to Test?)

Vanilla is Not the Flavour of Your Business

The vendor of the COTS system is not making their product for us, at least not just for us. They are making their system for the market/industry that our business is a part of.

As each customer has their own specific way of doing business, it is very unlikely that we would take a COTS system and implement it straight out-of-the-box in its “vanilla” configuration. And though we may be “in the industry” that the COTS system is intended to provide a solution for, there will always need to be some tweaking and some gluing.

The COTS system will need to be configured, customized and/or extended before it is ready to be used by the business. And, because of the lack of insight and experience with the system, the impact of any such changes will not be well understood – a risk to implementation.

COTS Systems Must “Play Nice”

Testing COTS systems comes in two major pieces; testing the configured COTS system itself, and testing the COTS system together with its upstream and downstream applications.

Many of the business’ work processes will span multiple applications and we need to look for overall system level incompatibilities and competing demands on system resources. Issues related to reliability, performance, and security can often go unnoticed until the overall system is integrated together.

And when there is an issue, it can be very difficult to isolate the source of the error if the problem results from the interaction of two or more applications. The difficulty in isolating any issues is further complicated when the applications involved are COTS systems (black boxes) from different vendors.

“Finding the actual source of the failure – or even re-creating the failure – can be quite complex and time-consuming, especially when the COTS system involves products from multiple vendors.” – Richard Bechtold, Efficient and Effective Testing of Multiple COTS-Intensive Systems

We need to have familiarity with the base COTS system in order to be able to isolate these sorts of integration issues more effectively, and especially to be able to confidently identify where the responsibility lies.

Testing COTS Systems during Evaluation

If there has been an honest effort to “do it right”, then a formal selection process will take place prior to implementation, one that goes beyond reading the different vendors’ websites and sales brochures. And in this case, testing can be involved earlier in the process.

Consider the three big blocks of a COTS deployment: Selection, Implementation, and Maintenance. The implementation phase is traditionally where all the action is, especially from the testing point of view.

But, we don’t want to be struggling in implementation with issues related to the challenges described above. We need to explore the COTS system’s functionality and its limits in the aspects of quality that are important to us before that point. Why find out about usability, performance, security model, and data model issues after selection? After all, moving release dates is usually quite costly.

“The quality of the software that is delivered for a COTS product depends on the supplier’s view of quality. For many vendors, the competition for rushing a new version to market is more important than delivering a high level of software reliability, usability, and other qualities.” – Judith A. Clapp, Audrey E. Taub, A Management Guide to Software Maintenance in COTS-Based Systems

If we get testing started early, we can be ramping up on this large, complex software system, reviewing requirements, documenting our important test cases, finding bugs and other issues, determining test environment and data needs, and identifying upstream and downstream application dependencies all before the big decision is made. Thereby, informing that decision while responsibly preparing for the inevitable implementation.

To realize these and other benefits, we can leverage testing and shift efforts to the left, away from the final deadline. We make testing an integral part of decision-making during evaluation.

Fig 2: Testing COTS Systems – Major Deployment Stages

We want to choose the right solution the first time with no big surprises after making that choice. This early involvement of testing, done efficiently, can help our implementation go that much more smoothly.

Multiple Streams of Evaluation Testing

When designing a new software system, there are many considerations around what it needs to do and what are the important quality characteristics. This is no different with a COTS system, except that it is already built. That functionality and those quality characteristics are already embedded in the system.

It would be great if there was a system that perfectly fit our needs right out-of-the-box, functionally and quality-wise. But that won’t be the case. The software was not built for us. There will be things about it that fit and don’t fit, things that we like and don’t like, and things that will be missing. This applies to our fit with the vendor as well.

Our evaluation must take the list of candidates that passed the non-technical screening and rapidly get to the point where we can say: “Yes, this is the best choice for us. This is the one we want to put effort into making work.”

In order to do that, we will need to:

  • Confirm vendor claims in terms of functionality, interfaces for up/down stream applications and DW/BI systems, configurability, compatibility, reporting, etc
  • Confirm suitability of the data model, the security model, and data security
  • Confirm compatibility with the overall system environment and dependent applications
  • Investigate the limits of quality in terms of the quality characteristics that are key to our business and users (eg: reliability, usability, performance, etc.)
  • Uncover bugs, undocumented features, and other issues in areas of the system that are business critical, popular/frequently used, and/or have complex/involved business processes

The evaluation will also need to include more than just the COTS system. The vendor should be evaluated on such things as organizational maturity, financial stability, customer service/support, quality of training/documentation, etc.

To do all of this efficiently, we can organize our evaluation testing into four streams of activity that we can execute in parallel, giving us a COTS selection process that can be illustrated at the high-level as follows:

Fig 3: Testing COTS Systems – Evaluation Testing in Parallel

As adapted from Timing the Testing of COTS Software Products, the streams of evaluation testing would focus on the following:

  • Functional Testing: the COTS systems are tested in isolation to learn and confirm the functional capabilities being provided by each candidate
  • Interoperability Testing: the COTS systems are tested to determine which candidate will best be able to co-exist in the overall application ecosystem
  • Non-Functional Testing: the COTS systems are tested to provide a quantitative assessment of the degree to which each candidate meets our requirements around the aspects of quality that are important to us
  • Management Evaluation: the COTS systems are evaluated on their less tangible aspects including such things as training, costs, vendor capability, etc.

Caveat: We don’t want to test each system to the same extent. We want to eliminate candidate COTS systems as rapidly as possible.

Rapidly Narrowing the Field

In order to eliminate candidate COTS systems as rapidly and efficiently as possible, we need a progressive filtering approach to applying the selection criteria. This approach will also ensure that the effort put into evaluating the candidate COTS systems is minimized overall.

Additionally, the requirements gathering and detailing can be conducted in a just-in-time (JIT) manner over the course of the entire selection phase rather than as a big bang effort at the beginning of implementation.

As an example, we could organize this progressive filtering approach into three phases or levels:

Fig 4: Testing COTS Systems – Progressively Filtering Candidates

Testing would scale up over the course of the three phases of evaluation, increasing in coverage, complexity, and formality as the number of systems being evaluated reduces.
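
One simple device for combining the findings from the four evaluation streams into something the selection team can compare across these phases is a weighted scorecard. The criteria, weights, and scores in the sketch below are invented purely to show the arithmetic; the real inputs would come from the functional, interoperability, non-functional, and management evaluations.

```python
# Sketch: weighted scorecard for comparing candidate COTS systems.
# Criteria, weights, and scores are invented for illustration.

WEIGHTS = {
    "functional_fit": 0.35,
    "interoperability": 0.25,
    "non_functional": 0.25,
    "vendor_management": 0.15,
}

# Scores on a 1-5 scale, one per evaluation stream.
CANDIDATES = {
    "Vendor A": {"functional_fit": 4, "interoperability": 3,
                 "non_functional": 4, "vendor_management": 5},
    "Vendor B": {"functional_fit": 5, "interoperability": 2,
                 "non_functional": 3, "vendor_management": 3},
}


def weighted_score(scores: dict) -> float:
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())


for name, scores in sorted(CANDIDATES.items(),
                           key=lambda item: weighted_score(item[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f} / 5.00")
```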

The best-fit COTS system will be more confidently identified, and a number of important benefits generated, in the course of this process.

Testing with Benefits

With our efficient approach to involving testing during evaluation, we will not only be able to rapidly select the best option for the specific context of our company, but we will also be able to leverage the following additional benefits from our investment, as we move forward into implementation:

  • Requirements Captured: Requirements have been captured from the business and architecture, reviewed, and tested against
  • Stronger Fit-Gap Analysis: Missing functionality has been identified for inputting to implementation planning
  • Test Team Trained: The test team is trained up on the chosen COTS system and has practical experience testing it
  • Quality Baseline Established: Base aspects of the COTS system have already been tested, establishing a quality baseline
  • Development Prototypes Tested: Prototypes of “glue” code to interact with the interfaces and/or simulate other applications and ETL scripts for data migration have been developed, and have been tested
  • Test Artifacts Created: Reusable test artifacts, including test data, automated test drivers, and automated data loaders are retained for implementation testing
  • Test Infrastructure Identified: Needs around tools, infrastructure and data for testing have been enumerated for inputting to implementation planning
  • Bug Fixing: Bugs, undocumented features, and other issues related to the COTS system have been found and raised to the vendor prior to signing on the dotted line

Conclusion

In addition to uncovering issues early, involving testing during evaluation will establish a baseline of expected functional capability and overall quality before any customization and integration. This will be of great help when trying to isolate issues that come up in implementation.

“Vendors are much more likely to address customer concerns with missing or incomplete functionality as well as bugs in the software before they sign on the dotted line.” – Arlene Minkiewicz, 6 Steps to a Successful COTS Implementation

Most important of all, after this testing during evaluation, the implementation project can more reasonably be considered an enhancement of an existing system that we are now already familiar with. Therefore, we can more confidently focus our testing during implementation on where changes are made when configuring, customizing, extending, and integrating the COTS system, mitigating the risks associated specifically with those changes, while having confidence that the larger system has already been evaluated from a quality point of view.

With fewer surprises and problems during implementation, we should end up having to do less testing overall.

“The success of the entire development depends on an accurate understanding of the capabilities and limitations of the individual COTS. This dependency can be quantified by implementing a test suite that uncovers interoperability problems, as well as highlighting individual characteristics. These tests represent a formal evaluation of the COTS capabilities and, when considered within the overall system context can represent a major portion of subsystem testing.” – John C. Dean, Timing the Testing of COTS Software Products

With an approach such as this, we should be able to reduce candidate COTS system options faster, achieve a closer match to our needs, know earlier about fit-gaps and risks, capture our requirements more timely and completely, and spread out the demands on testing resources and environments – all of which should help us achieve a faster deployment and a more successful project.

Choose your COTS system wisely and you’ll save time and money… Make your evaluation count.


Stop Testing – Start Thinking

Throughout my career I have observed numerous organizations all looking for the ‘silver bullet’ to solve all their product quality problems.

News Flash: There is no ‘silver bullet’.  Solving product quality problems can only begin when organizations “Stop Testing and Start Thinking”.


Do not get me wrong: testing is an essential part of all product development projects; however, teams that fail to think through their testing needs are destined to fail, delivering ‘buggy’ products that do not meet the needs of the consumer and ultimately have an adverse impact on the organization’s revenue potential.

Teams must know who will do the testing, what testing is required, when to test, where to test (environment) and how to test.

So what is the answer?  Is the solution to blindly mimic what has worked for another organization?

Generally speaking, the answer is not that simple.  In reality, a solution that works for one organization should not be adopted without first understanding more about the people, process and tools ‘recipe’ that was used and how it helped address the organization’s specific product quality problems.

The following are areas where many organizations commonly make mistakes.

Process

Uncertain about which testing methodology to adopt, organizations latch onto the hottest trend without understanding what problems need to be addressed or how the choices they’ve made contribute to solving those problems.  Perhaps the only thing worse than this is when the team is not aligned on how to address the product verification & validation challenges.

Examples of some common mistakes:

  1. No understanding of how to do testing for Agile projects
  2. Believing TDD (Test Driven Development) solves all testing needs
  3. Unaware of the various types of system testing requirements

Anarchy rules in the absence of a process that is understood and in use by the entire organization.

Tools

Selecting tools before understanding the needs of the team, how these tools will improve the team’s effectiveness, or how well they map to the organization’s testing processes. Tools that do not integrate well with others will adversely impact the team’s ability to quickly assess / address quality problems.

Examples of some common mistakes:

  1. An ineffective tool selection / deployment process, contributing to increased costs, project delays, and no real return on investment
  2. Selecting the wrong technology for test automation and / or automating tests too early

The best tools are not always the most expensive tools, but those that satisfy the needs of the cross-functional team.

People

Failing to enable skilled teams by providing them with a process and the tools required for them to be effective.  In addition, failing to invest in the skills development and training of team members on an ongoing basis. Ongoing training is important to motivate / retain resources and optimize the effectiveness of the team.

Examples of some common mistakes:

  1. Expecting resources to be highly efficient despite being asked to use tools inappropriate for the job and to follow an ineffective process
  2. No time allocated for professional development, resulting in team members’ skills becoming outdated and in resource retention issues

Rust, rot, and erosion develop where care and maintenance are ignored.

The bottom line is that teams need to “Start Thinking” before attacking any product quality problem.  Time spent deploying effective solutions to enable your team will significantly improve the success of the organization and reduce the need to “Stop Testing” in the future.


Uncovering High Value Defects

Methods of uncovering defects have, for the most part, stayed the same even with great advancements in process and development tools. One thing that has not stayed the same is the amount of time we have to uncover these defects. With this time constraint, how can we uncover the high value defects which could be costly to our organizations? What shift in test technique do we need in order to tackle this time constraint without failing fast in a horrible way?

A Quality Foundation

In order to detect high value defects we cannot have software which is full of low value, trivial defects. When we do not have a quality foundation, or a reasonable level of quality before testing begins, the following occurs:

  • Testers stop testing to log or inform a developer of a trivial defect they have uncovered. (Testers need to be testing to uncover high value defects.)
  • Developers stop developing in order to learn about trivial defects.
  • If a decision to fix this trivial defect goes forward, more often than not the developer is out of the context of this work. It will take them more time to re-learn or regain context in order to apply a fix.
  • Trivial fixes can cause more defects.
  • If you have a quality process in place, there is a cost after this trivial fix is made: Continuous Integration build and test jobs, along with developer code reviews, take time.
  • Finally, and most importantly, because you are spending so much time uncovering and fixing trivial issues you can never reach the deeper, high value defects.

Building a Quality Foundation

In order to avoid the negative points outlined above we must ensure a baseline of quality is always maintained. Again, without this we will be lost uncovering, triaging, and fixing low value defects, unable to expose the defects which are the most costly. We can build a quality foundation using the following techniques:

Automation Tools (Checks)

  • Automation is a great way to maintain a consistent level of quality throughout the development cycle. Build on this foundation as your developers develop, and add more coverage with each new feature. A small sketch of such a check follows below.
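
As an illustration only, here is a minimal sketch of what one such automated check might look like, written in Python with pytest against a hypothetical login() helper. The function name, its arguments, and the example credentials are assumptions made for this sketch, not code from any real app discussed here.

    # Minimal sketch of an automated check that guards the quality foundation.
    # `login` is a hypothetical stand-in for the app's login logic; a real
    # suite might drive the UI or call an API instead.
    import pytest

    def login(email: str, password: str) -> bool:
        """Hypothetical stand-in: accepts exactly one known-good account."""
        return email == "user@example.com" and password == "correct-password"

    def test_valid_credentials_log_in():
        # Checks the expected outcome; it cannot judge whether the flow feels right.
        assert login("user@example.com", "correct-password") is True

    @pytest.mark.parametrize("email,password", [
        ("user@example.com", "wrong-password"),
        ("unknown@example.com", "correct-password"),
        ("", ""),
    ])
    def test_invalid_credentials_are_rejected(email, password):
        assert login(email, password) is False

With pytest installed, running pytest in the same directory would execute these checks quickly on every change, which is exactly the kind of fast, repeatable baseline the quality foundation needs.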

Manual Test Review

  • Code reviews are standard practice on most development teams. Taking this concept a step further, why not provide a test review? This can be a small manual test check of a feature before the code is checked in for further in-depth testing. Note: not all development changes require this manual check, but if you find you are having a lot of trivial findings you may want to try this on your team.

It’s worth highlighting that automation tools are well suited for creating a quality foundation; however, many of the high value defects we wish to flush out will not, in my experience, be uncovered by automation alone. This is because automation tools check/verify software; they do not test software. Testing software requires a human to think; it is not simply checking that the correct screen appears after tapping a button.

Use automation tools for what they do best: continuously ensuring a baseline quality foundation at high speed. Don’t expect automation tools to think, and therefore don’t expect them to find high value defects.

Gain Context

Now that our quality foundation is set, what knowledge do we need in order to maximize our ability to uncover high value defects? In order to make our testing more valuable we need to gain context about the software we are going to test. The following activities can help you gain context:

  • Understand the Feature – This seems trivial, but have an understanding of why a feature is being added to your software. Also understand what type of user will use this feature. This can help you understand how this feature should be properly exposed in your software. High value defects are not always crashes; a poorly implemented feature is also a high value defect/problem. These findings also expose opportunities to make features work in simpler/better ways. It’s worth noting that understanding a feature should start as early as possible, ideally when user stories are being created.
  • Development Tours – When a developer finishes implementing a feature/bug fix, the tester can pair up with them to get a tour of the feature or bug fix. These tours can help testers gain key insights into how a feature was implemented, what problem areas there are, and what other areas of the code needed changing to implement the feature.
  • User Feedback – No matter how well you think you have implemented and tested features, you won’t get it 100% right. If you have access to user feedback you should make it a habit to check this feedback every day. Gaining a deeper understanding of pain points in your software from a user’s perspective can help you when testing future features.
  • Production Logs – Similar to reviewing user feedback, reviewing crash logs from production can help you understand what areas of your software are buggy. When testing you might take more time in these error-prone areas. The entire development team should know about these areas as well; as a tester you should share this information.
  • Competitive Analysis – Understand your competitors’ strengths and faults. Don’t repeat mistakes they have already made when implementing features.

Pre-test Plan

OK, in no way am I suggesting you drop everything and create a large test plan. My experience tells me that practice is, in most ways, a waste of time. What I am suggesting is spending 5 minutes figuring out the following:

  • What states can the software be in when interacting with this new feature?
  • What inputs can be used to exercise this new feature?
  • How usable/accessible is this feature in our software?

Think about the testing you will perform. Diving into testing without first thinking it through can be a bit of a blind strategy. An experienced tester will still find defects without this step, but I find it helps frame my testing. A rough sketch of this kind of five-minute framing follows below.
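
As an illustration only, here is a minimal Python sketch of that five-minute framing, reusing a login feature as the example. The states and inputs listed are invented for this sketch, not a recommended checklist.

    # Minimal sketch of a five-minute pre-test plan: jot down the states the
    # software can be in and the inputs that exercise the feature, then look
    # at the combinations to pick which ones are worth actually testing.
    # All states and inputs below are illustrative, not exhaustive.
    from itertools import product

    software_states = [
        "fresh install",
        "upgraded from previous version",
        "already logged in on another device",
        "offline / flaky network",
    ]

    inputs = [
        "valid email and password",
        "valid email, wrong password",
        "empty fields",
        "very long email address",
    ]

    test_ideas = list(product(software_states, inputs))
    print(f"{len(test_ideas)} candidate test ideas, e.g.:")
    for state, user_input in test_ideas[:5]:
        print(f"- State: {state} | Input: {user_input}")

    # Plus a usability question to keep in mind while testing:
    print("How discoverable and smooth is this feature for a new user?")

Most of these combinations would never be executed; the value is in seeing them written down so you can deliberately pick the ones worth your limited time.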

Testing

Your quality foundation is set, you have gained context around what you will be testing, and you have a rough idea of how you will approach your testing. You are now ready to test and are in a position to flush out high value defects.

A lot of what is written in this article is already done by great testers in our industry. I wrote this article in an attempt to understand what I do in order to find defects. I believe the exercise of understanding what makes you a great tester is a worthwhile one. So when you have time, go through this same exercise and you may just uncover some great ideas around testing. Please share these ideas.

Now go uncover high value defects!
