Testing Matters because Quality Matters

In the course of crafting my contribution to Alexandra McPeak’s follow-up article for CrossBrowserTesting.com / SmartBear Software’s #WhyTestingMatters thread, “Expert Insight: Why Testing Matters”, I wrote the following article. Check out Alex’s first article, “Why Testing Matters”, as well for some current examples of quality challenges in the public eye.

There are so many attributes/factors that contribute to a software system or product being “of quality” that typically you have only the resources to make a few stand out.  Those that are emphasized become competitive differentiators – and part of your brand.

Think of any industry. What is the one word or phrase that describes each name brand in that market space? Even if those words/phrases are not directly related to an attribute of quality, a brand’s reputation could not long tolerate the lack of certain aspects of quality, competitively speaking.

But each of these companies must constantly make trade-offs and compromises in the fight to grow and maintain their market share. Faster and cheaper are continually at odds with quality, clamouring for sacrifices and shortcuts. Competition demands it.

Brands can take decades to build up their images and reputations, and one poor decision that leads to unsatisfied customers and bad publicity can potentially lose it all – at least for a time.

“The bitterness of poor quality remains long after
the sweetness of low price is forgotten.”

So, how can your organization walk the precarious tightrope of minimizing time-to-market and maximizing profits while delivering products that are still “good enough” for maintaining your image/reputation?


  • Testing can serve as a trusted advisor and integrated investigator of quality within the organization.
  • Testing can strengthen the focus on each prioritized facet of quality across every phase of each project.
  • Testing can evaluate whether the ‘quality bar’ required for each phase/release has been achieved.
  • Testing can transform collected data into consumable information to help stakeholders make informed business decisions around quality – like when it is reasonable to release, or not.

You wouldn’t want your brand to become infamous for an unfortunate/faulty decision that could have been prevented by leveraging smarter testing, would you?

Testing matters because it provides critical information needed by your organization and your brand to make insightful business decisions related to your software product or system on the road to quality success.

In other words: Testing matters because quality matters.


Posted in All, Business of Testing, Planning for Quality

Confidence’s Role in Software Testing


Confidence – “the feeling or belief that one can rely on someone or something; firm trust.” https://en.oxforddictionaries.com/definition/us/confidence

A few weeks ago I sat down to write about verifying bug fixes. I wanted to determine if there was a systematic approach we could follow when performing this activity. While exploring this approach, I quickly realized the crucial role confidence plays in verifying or signing off on any software we test.

Confidence dictates how much testing we feel we need to execute before we can sign off on anything we test. Our current confidence in our development team directly impacts how much test time we take before we feel the software is ready for sign-off, and the historical quality coming out of that team dictates this level of confidence.

High Confidence – Just the right amount of testing is executed, ensuring software can be signed off. (Note: This does not apply to mission critical software systems.)

Low Confidence – Based on historically bad code quality, testers may over-test even when the current code quality is good.

I believe this confidence level has a major impact on the speed at which we develop software. We might hear “QA is a bottleneck”, but this is potentially due to historically low-quality code causing testers to over-test even when good-quality code is being verified.

To illustrate this point further, see the approach below that I came up with to test and ultimately verify bug fixes.

Example: A Mobile App Which Requires Users to Login

Imagine we have a mobile app which requires users to log in.

The fictitious bug we will be verifying is the following:

Title: Login Screen – App crashes after tapping login button.


Preconditions:

  • App is freshly installed.

Steps to Reproduce:

  1. Launch the app and then proceed to the login screen.
  2. Enter a valid existing email and password.
  3. Tap the “Login” button.


Actual Result:

  • App crashes.

Before Verification Begins

Once a bug is marked fixed it’s important we gain more understanding about it before starting to verify its fix. To do this we ask the following questions of the developer who implemented the fix:

  • What was the underlying issue?
  • What caused this issue?
  • How was the issue fixed?
  • What other areas of the software could be impacted with this change?
  • What file was changed?
  • How confident is the developer in the fix? Do they seem certain? Even this can somewhat impact how we test.

* Special Note: Remember, we need to gain context from the developer, but as a tester you’re not taking direction on exactly what to verify – that is your role. Of course, if a developer suggests testing something in a certain way you can, but it’s your role as an experienced tester to use your own mind to test a fix.

Now that we have gained a full understanding of how the bug was fixed, let us start by verifying at the primary fault point (the exact steps listed in the original bug write-up). Below are the high-level verification/test ideas, starting from very specific checks and working outwards like the layers of an onion. Notice that as we execute more tests and move away from the primary fault point, our confidence level in the fix increases.

Test Pass 1

  • Exact Software State: Follow exact “Preconditions”. In this case “App is freshly installed”.
  • Exact Input: Following the exact steps listed in the bug’s “Steps to Reproduce”.
  • Verify app no longer crashes.
  • We could stop here but we would not have full confidence that the bug is fully fixed and that we haven’t introduced new knock-on bugs.

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 2

  • Varied State: App is not freshly installed but user is logged out.
  • Exact Input: Following the exact steps listed in the bug’s “Steps to Reproduce”.
  • Verify app does not crash

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 3

  • Varying State – After logging out/After restarting app and clearing app data.
  • Varying Input – Missing credentials/Invalid credentials
  • Verify no unexpected behavior

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 4

Test features/functions around login such as:

  • Forgot Password
  • Sign Up

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 5

Moving one final layer away from this fix, we enter a phase of testing which includes more outside-the-box tests such as: (Note: I love this type of testing as it’s very creative)

  • Interruption testing – Placing app into the background directly after tapping the login button.
  • Network fluctuations – Altering connection while login is taking place.
  • Timing issues – Running around interacting with UI elements at an unnatural speed. Example – Rapidly tapping the login button then back button then login button.
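The onion-layer idea above can be sketched as an ordered test matrix. This is only an illustration – the state and input values are invented, and in practice each (state, input) pair would drive the real app and check for the crash:

```python
from itertools import product

def verification_matrix(states, inputs):
    """Order (state, input) pairs by 'distance' from the original fault.

    Layer 0 is the exact reproduction (first state, first input);
    each later layer varies more of the context, mirroring the
    test passes described above.
    """
    cases = []
    for state_i, input_i in product(range(len(states)), range(len(inputs))):
        distance = state_i + input_i  # how far we've moved from the fault
        cases.append((distance, states[state_i], inputs[input_i]))
    return [c[1:] for c in sorted(cases, key=lambda c: c[0])]

# Hypothetical values taken from the bug above.
states = ["fresh install", "logged out", "app data cleared"]
inputs = ["valid credentials", "missing credentials", "invalid credentials"]

for state, user_input in verification_matrix(states, inputs):
    # In a real run, each pair would exercise the login flow.
    print(f"state={state!r}, input={user_input!r}")
```

The exact reproduction comes first, and the furthest combinations (where our confidence is already high) come last, which is where historic confidence decides how far down the list we go.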

At this point our historic confidence plays a role in whether we continue to test or feel the bug is fixed. If QA’s confidence is low, we could end up spending too much time testing in this final test pass with little to show for our efforts.

How is Confidence Lowered?

  • Initial code quality signed off by development is low. When we begin testing a fix which has been signed off as ready for testing, we will often gauge its quality by how quickly we discover a bug that needs fixing.
  • Repeated low-quality deliveries out of development can make testers over-test, and rightly so, because it’s necessary. If bugs are routinely found very quickly in the software we test, we naturally become skittish about signing off future high-quality work.

This can lead to over-testing even when the code is delivered in a high-quality state. This over-testing won’t provide anything of value. Don’t get me wrong, you will find bugs, but they might end up being more nice-to-know-about than must-fix issues. All software releases have bugs. It’s our job to identify the high-value defects which threaten the quality of our solutions.

How Can We Boost Our Confidence?

I believe we can’t perform “just-right” testing unless our confidence in our development teams is reasonably high. We need to make sure baseline quality is established before any “just-right” manual testing can take place. How do we do this?

  1. Test automation is a perfect mechanism for establishing a quality baseline – “checking” to ensure all basic functions are working as expected.
  2. Shift left into the trench and work with developers as they are implementing a feature so you can ensure initial quality out of development is higher.
  3. Measure your testing efforts to ensure you’re not over testing. Learn to know that sweet spot of just enough testing.
  4. Expose low quality areas – Retrospectives are ideal places to bring up quality issues with the larger team. Let them know you don’t have confidence and need something to change to boost it back up.
  5. Slow down – Oh no, we can’t do that, right? Yes, we can, and we should slow down if our confidence is low.

If you hear things like “QA is a bottleneck” in your organization, you might want to look at the code quality historically coming out of your development team. It’s possible your QA groups are testing endlessly because they lack confidence in the work coming from the development team and feel they have to test further. It can be difficult for QA to scale back or stop testing given a negative track record and low confidence in their teams.

If your code quality is poor, QA’s confidence in Development will be low, and then QA will always be a bottleneck.

Think about it 🙂

Posted in All, Business of Testing, Planning for Quality

Better Test Reporting – Data-Driven Storytelling

Testers have a lot of project “health” data at their fingertips – data collected from others in order to perform testing, and data generated from testing itself. And sometimes test reporting gets stuck on simply communicating this data, these facts. But if we simply report the facts without an accompanying story to give context and meaning, there is no insight – the insight needed to make decisions.

Better Test Reporting - Data to Information to Insight

With all the data we have close to hand, testing is in a great position to integrate data-driven storytelling into the various mediums of our test reporting.

“Stories package information into a structure that is easily remembered which is important in many collaborative scenarios when an analyst is not the same person as the one who makes decisions, or simply needs to share information with peers.” – Jim Stikeleather, The Three Elements of Successful Data Visualizations

“No matter how impressive your analysis is, or how high-quality your data are, you’re not going to compel change unless the stakeholders for your work understand what you have done. That may require a visual story or a narrative one, but it does require a story.” – Tom Davenport, Why Data Storytelling Is So Important—And Why We’re So Bad At It

This enhanced reporting would better support the stakeholders with relevant, curated information that they need to make the decisions necessary for the success of the project, and the business as a whole.

Not Your Typical Test Report…Please!

When thinking of test reporting, perhaps we think of a weekly status report or of a real-time project dashboard?

Often, these types of reporting tend to emphasize tables of numbers and simple charts, and rarely contain any contextual story. E.g.: time to do the test report – let me run a few queries on the bug database and update a list/table/graph or two.

We need to thoughtfully consider:

  • What information should our test reporting include?
  • What questions should it really be answering?
  • What message is it supposed to be delivering?

If we answered the following questions with just data, would we gain any real insights?

Question                              Data Provided
Is testing progressing as expected?   # of test cases written
Do we have good quality?              # of open bugs
Are we ready for release?             # of test cases run

Obviously, these answers are far too limited, and that is the point. Any single fact, or collection of standalone facts, will be typically insufficient to let us reasonably make a decision that has the true success of the project at heart. [Ref: Metrics – Thinking In N-Dimensions]

To find connections and enable insights, first think about what audience(s) we could support with our data in terms of these broad core questions:

  • How are we doing? (Status)
  • What has gone wrong? (Issues)
  • What could go wrong? (Risks)
  • How can we improve?

Then we tailor our data-driven storytelling with a message for each audience to facilitate insight that will be specifically of value to them.

Test Reporting: Data vs. Information

An important distinction to make when thinking about increasing the value of test reporting is the difference between data and information:

  • Data: Data can be defined as a representation of facts, concepts or instructions in a formalized manner which should be suitable for communication, interpretation, or processing by human or electronic machine.
  • Information: Information is organised or classified data which has some meaningful values for the receiver. Information is the processed data on which decisions and actions are based.

Computer – Data and Information, Tutorials Point

Data is not information – yet. Data comprises the building blocks from which we construct information. When we transform data, through analysis and interpretation, into information that we make consumable for the target audience, we dramatically increase the usefulness of that data.

For example:

“Here is real-time satellite imagery of cloud cover for our province…”
“Look at all those clouds coming!”
“This is a prediction that our city will get heavy snowfall starting at about 8:30pm tomorrow night…”
“We better go buy groceries and a snow shovel!”

Or in the case of testing:

“Here is a listing of all the bugs found by module with the date found and a link to the associated release notes…”
“That is a lot of bugs!”
“This analysis seems to show that each time Module Y was modified as part of a release the bug count tended to spike…”
“Let’s have someone look into that!”

Through consumable information, we can help provide the opportunity for insights, but information is not insight itself. The audience has to “see” the insight within the information. We can only try to present the information (via whatever mediums) in a way we hope will encourage these realizations, for ourselves and others.
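A minimal sketch of that data-to-information transformation – the bug records, releases, and field names below are made up for illustration:

```python
from collections import Counter

# Hypothetical raw data: one record per bug, as it might come
# from a bug-tracker export.
bugs = [
    {"module": "Y", "release": "1.1"},
    {"module": "Y", "release": "1.1"},
    {"module": "X", "release": "1.1"},
    {"module": "X", "release": "1.2"},
    {"module": "Y", "release": "1.3"},
    {"module": "Y", "release": "1.3"},
    {"module": "Y", "release": "1.3"},
]
modified_in = {"Y": {"1.1", "1.3"}}  # releases where Module Y was changed

counts = Counter((b["module"], b["release"]) for b in bugs)

# Data -> information: does the bug count for a module spike
# in exactly the releases where that module was modified?
for (module, release), n in sorted(counts.items()):
    changed = release in modified_in.get(module, set())
    note = " (module modified)" if changed else ""
    print(f"Module {module}, release {release}: {n} bug(s){note}")
```

The raw list is data; the per-module, per-release correlation with code changes is the information a stakeholder can actually act on.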

From Data to Decision

Once data is analyzed for trends, for correlations with other data, etc.; plans, choices, and decisions can be made with this information.

The following illustrates the path data takes to informing decisions:

Better Test Reporting - Data Path to Decision-Making

Figure 1: Test Reporting Data Path to Decisions

What data we are collecting, and why, should be firmly thought out. And then, don’t just report the numbers. Look at each testing activity and see how it can generate information that is useful and practical as input to the decisions that need to be made throughout the project.

  1. Data: <we’ll come back to this>
  2. Consumable Information: Testing takes the collected data and analyzes it for trends, correlations, etc. and reports it in a consumable manner to the target audience(s).
  3. Proposed Options: The data-driven story provided is then used to produce recommendations, options, and/or next steps for consideration by stakeholders.
  4. Discuss & Challenge: The proposed options are circulated to the stakeholders and through review and discussion, plans can be challenged and negotiated.
  5. Feedback Loop: These discussions and challenges will likely lead to questions and the need for clarifications and additional context, which can then send the process back to the datastore.
  6. Decisions Made: Once agreements are reached and the plans have been finalized, decisions have been made.

Of course, testing is not the sole party involved in driving this process. Testing’s specific involvement could stop at any step. However, instead of always stopping at step one with 1-dimensional test reporting, testing could make use of the data collected to move further along the path and to tell a more meaning-filled multi-dimensional story to a more diverse audience of stakeholders, more often.

Better Data – Better Decisions

In this way, the function of test reporting can help the project much more than it would by just reporting “there are 7 severe bugs still open”.

This is because our choices typically are not binary. We do not decide:

  • Do we fix all the bugs we find?
  • Do we find bugs or prevent bugs?
  • Do we automate all the testing?
  • Do we write a unit test for everything?

We decide to what degree we will do an activity. We decide how much should we be investing into a given activity or practice or tool.

This is where the first item in the list just above, data, comes in. Data lets us find out what trade-offs with other project investments we will have to make to gain new benefits. Data is the raw material that leads to insight.

So, in order to have “better test reporting” we need to make sure that we know what we need insight about, collect the supporting data accordingly, report the data-driven story, and then follow the path to better decision-making.

Better Data
Better Information
Better Decisions


Posted in All, Other, Planning for Quality, Test Planning & Strategy

Augmenting Testing in your Agile Team: A Success Story

One of the facts of life about Agile is that remote resources, when you have a mostly collocated team, generally end up feeling a little left out in the cold.  Yet, with appropriately leveraged tools, sufficient facilitation, management support and strong team buy-in, it can end up being a very successful arrangement.

Augmenting Testing in your Agile Team: A team with remote contributors

Figure 1: A team with remote contributors

There is an implementation model that lends itself more naturally to adding testing resources, or a testing team, to your delivery life cycle.  Rather than embedding your resources, you can find ways to work with the teams in parallel, augmenting their capabilities and efforts in order to achieve greater success.   In this article, we’ll look at a particular case where PQA Testing implemented an augmenting strategy to tackle regression and System Integration Testing (SIT).

Recently we were working with a company that delivers a complex product in retail management to assorted third party vendors.  Features were created, tested and marked ready for release by functionally targeted Agile teams.  Coming out of a sprint wasn’t the last step before a feature was released, however.  Due to the complexity of the product, environments, other systems controlled directly by the third party vendors and other systems controlled indirectly through their third party vendors, System Integration Test (SIT) cycles and User Acceptance Test (UAT) cycles were necessary.

The original intent, when our client went Agile, was to be able to continue to support these relationships through the Agile teams.  What soon became evident was that the amount of regression testing in the SIT environments required for the new features was overwhelming to the testing resources dedicated to a feature team.

Augmenting Testing in your Agile Team: A mixed team with internal and external resources

Figure 2: A mixed team with internal and external resources

Additionally, as multiple environments and numerous stakeholders from various vendors with their own environments were introduced, simple communication, coordination of environments and testing became much more complex and time consuming.  Defects that were found in SIT testing needed to be triaged and coordinated with the other issues created from other vendors, and then tracked as they moved their way through the different teams and vendors to their solution.

As the testing resources on each team focused more on their functional area, their knowledge became more and more specialized and they were no longer the “go-to” resource for questions that might span the entire domain. With this specialization, testers were no longer collecting as much domain knowledge. Additionally, while automation was an integrated part of the company’s solution, test automators were also embedded in the Agile teams.  This changed the focus of automation; it slowly drifted away from providing benefits at the end-to-end integration testing level.

When we began the engagement with this client, they were succeeding from release-to-release, but not at optimum levels of quality, or to vendor satisfaction.   They were borrowing resources from multiple Agile teams and sometimes breaking sprints to ensure that the release could get through the SIT cycle within the specified time frame.  As we do on every PQA Testing engagement, we began by learning the existing process, how the software worked, and about the entire domain.  Before long, we took over regression testing for the releases.  Our focus then became to make sure that the existing functionality remained stable and clean, and that the new features integrated into the system well.

The testing team is now a separate team that is semi-integrated with the existing teams.  We transition knowledge back and forth, but there is a distinction in responsibilities between new features and regression and SIT testing.   As we began taking over these testing responsibilities, we also began to take over communication and facilitation between the core vendor and our client for release and testing.  An automation resource is also able to work through the tests from the big-picture integration perspective, and is reducing the amount of manual testing that is necessary.  Increasing our documented domain knowledge is making it easier to scale the team as necessary during busy times and releases.

Augmenting Testing in your Agile Team: An internal team augmented with a remote team

Figure 3: An internal team augmented with a remote team

Taking over these requirements with a dedicated team has greatly improved the feedback coming from the vendors.  The Agile teams have more focus on their core deliverables.  Integrating remotely with the client’s teams has worked well because we don’t have to constantly interact face-to-face to show value in our work.  We are simply another team trying to move the ball forward for the company, just like everyone else.

Remote testing teams dedicated to ownership of specific testing functions can remove many of the obstacles of testing remotely in an Agile environment and, in this case, better ensure quality for the end user.

Posted in All, Agile Testing, Business of Testing

8 Test Automation Tips for Project Managers

Software testing has always faced large volumes of work and short timeframes. To get the most value for your testing dollars, test automation is typically a critical component. However, many teams have attempted to add test automation to their projects with mixed results.

To help increase the likelihood of success, the approach to automation must come from the practical perspective that automating testing effectively is not easy.

Here are 8 test automation tips for project managers.

1. Decide Your Test Automation Objectives Early

Automation is a method of testing, not a type. Therefore, automation should be applied to those tests from the overall test plan where there is a clear benefit to doing so. Before starting, ensure that the benefits of test automation match your objectives. For example, do you want to:

  • Discover defects earlier?
  • Increase test availability (rapid and unattended)?
  • Extend test capability and coverage?
  • Free-up manual testers?

2. Carefully Select your Test Automation Tools / Languages

There are many options and possible combinations of tools and scripting languages. Take some time to review the options and find the best fit for your project: confirm the technology fits with your project, look for a skill requirement match with your team, check that you can integrate with your test management and defect tracking tools, etc. Then try before you buy, e.g., perform a proof of concept, perhaps using your smoke tests.

3. Control Scope and Manage Expectations

When starting a new test automation effort, there is often the tendency to jump in and immediately start automating test cases. To avoid this pitfall, it is important to treat the automation effort as a real project in and of itself.

  • Derive requirements from the objectives
  • Ensure the scope is achievable
  • Define an implementation plan (linked to milestones of the actual project)
  • Secure resources and infrastructure
  • Track it

Not only will this help ensure the success of the effort, but it will allow you to communicate with other stakeholders what will be automated, how long it will take, and the short and long-term benefits that are expected.

4. Use an Agile Approach

Following an Agile approach, you can roll out your test automation rapidly in useful pieces, making progress visible and benefits accessible as early as possible. This will give you the ability to validate your approaches while demonstrating the value of the test automation in a tight feedback cycle.

5. Scripts are Software

You are writing code. The same good practices that you follow on the actual project should be followed here: coding standards, version control, modular data-driven architecture, error handling and recovery, etc. And, like any other code, it needs to be reviewed and tested.
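As an illustration, not a prescription, a minimal data-driven runner with error handling and recovery might look like this – `check_login` is a stand-in for a real application call, and the case data is invented:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def check_login(email, password):
    """Placeholder for a real call into the application under test."""
    return bool(email) and bool(password)

# Test data lives apart from the logic, so cases can be added without
# touching the runner (and both live in version control, like any code).
cases = [
    {"name": "valid creds", "email": "a@b.c", "password": "pw", "expect": True},
    {"name": "missing email", "email": "", "password": "pw", "expect": False},
    {"name": "missing password", "email": "a@b.c", "password": "", "expect": False},
]

def run(cases):
    results = {}
    for case in cases:
        try:
            ok = check_login(case["email"], case["password"]) == case["expect"]
        except Exception:  # recover and keep running the remaining cases
            logging.exception("case %r crashed", case["name"])
            ok = False
        results[case["name"]] = ok
        logging.info("%s: %s", case["name"], "PASS" if ok else "FAIL")
    return results

results = run(cases)
```

The point is the structure: separated data and logic, logging, and recovery so one failing case does not abort the whole suite.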

6. Use Well Designed Test Cases and Test Data

Garbage in, garbage out. Make sure you have a set of test cases that have been carefully selected to best address your objectives. It is important to design these test cases using reusable modules or building-blocks that can be leveraged across the various scenarios. Additionally, these test cases should be documented in a standardized way to make them easier to add to the automated test suite. This is especially important if you envision using non-technical testers or business users to add tests to the repository, using a keyword driven or similar approach to your automation.
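The keyword-driven approach mentioned above can be sketched as a tiny table interpreter. The keywords, table rows, and fake app state here are invented for illustration; in a real suite each keyword would drive an actual app session:

```python
# Toy stand-in for a real app driver's state.
state = {"screen": None, "fields": {}}

def open_screen(name):
    state["screen"] = name

def enter(field, value):
    state["fields"][field] = value

def tap(button):
    # Toy logic: login succeeds if both credential fields are filled in.
    if button == "Login" and all(state["fields"].get(f) for f in ("email", "password")):
        state["screen"] = "home"

def assert_screen(name):
    assert state["screen"] == name, f"expected {name}, got {state['screen']}"

# Engineers maintain the keyword implementations...
keywords = {"open_screen": open_screen, "enter": enter,
            "tap": tap, "assert_screen": assert_screen}

# ...while non-technical testers or business users write rows like these.
table = [
    ("open_screen", "login"),
    ("enter", "email", "user@example.com"),
    ("enter", "password", "secret"),
    ("tap", "Login"),
    ("assert_screen", "home"),
]

def run_table(table):
    for keyword, *args in table:
        keywords[keyword](*args)

run_table(table)
```

Because each row is built from reusable building blocks, new scenarios are just new tables, not new code.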

7. Get the Test Results

Providing test results and defect reports quickly is the most important reason for test automation. Each time you need to run the automated tests, you are reaping the benefits that automation provides. For example, running the test automation in its own environment as part of the continuous integration process will detect issues in the application under test as soon as features and fixes are checked in.

8. Maintain and Enhance

Investing in automation requires a significant commitment in the short-term and the long-term for there to be maximum success. For as long as the product that is being automated is maintained and enhanced, the automation suite should be similarly maintained and enhanced. If the test automation solution is well-designed and kept up-to-date with a set of useful tests, it will provide value for years.

Posted in All, Automation & Tools, Planning for Quality

Software Testing Guiding Principles

All effective test teams typically have well-defined processes, appropriate tools, and resources with a variety of skills. However, teams cannot be successful if they place 100% dependency on the documented processes, as doing so leads to conflicts – especially when testers use these processes as ‘shields’ or ‘crutches’.

To be successful, test teams need to leverage their processes as tools towards becoming “IT” teams. And by “IT” I do not mean Information Technology.

IT (Intelligent Testing) teams apply guiding
principles to ensure that the most cost effective
test solution is provided at all times

This posting provides a look into the “guiding principles” I’ve found useful in helping the testers I’ve worked with become highly effective and valued as part of a product development organization.

Attitude is Everything

The success you experience as a tester depends 100% on your attitude.

A non-collaborative attitude will lead to
conflict, limit the success of the test team and
ultimately undermine the success of the
entire organization.

Testers must:

  • Learn to recognize challenges being faced by the team and to work collaboratively to solve problems
  • As stated by Stephen Covey – “Think Win-Win”
  • Lead by example and inspire others. A collaborative attitude will pay dividends and improve the working relationship for the entire organization, especially when the team is stressed and under pressure.

Quality is Job # 1

This one borrowed from Ford Motor Company.

Testing, also known as Quality Control, exists to implement an organization’s Quality Assurance Program. As such, testers are seen as the “last line of defense” and play a vital role in the success of the business.

Poor quality leads to unhappy customers and eventually the loss of those customers, which then adversely impacts business revenue.

Testers are ultimately focused on ensuring the
positive experience of the customer using the
product or service.

Communication is King

Testers should strive to be superior communicators, as ineffective communication leads to confusion and reflects poorly on the entire team.

The test team will be judged by the quality of their work, which comes in the form of:

  • Test Plans
  • Test Cases
  • Defect Reports
  • Status Reports
  • Emails
  • Presentations

Learn how to communicate clearly, concisely
and completely.

Know Your Customer

Like it or not, testing is ‘service-based’ and delivers services related to the organization’s Quality Assurance Program. For example: test planning, preparation and execution services on behalf of an R&D team (i.e. an internal customer).

Understanding the needs and priorities of the
internal customer will help to ensure a positive
and successful test engagement.

Test Engineering also represents the external customer (i.e. user of the product / service being developed). Understanding the external customer will help to improve the quality of the testing and, ultimately, quality of the product.

Without understanding the external customer
it is not possible to effectively plan and implement
a cost effective testing program.

Ambiguity is Our Enemy

This basically means “Never Assume” – clarify whenever there is uncertainty.

Making assumptions about a product’s features/functionality, schedules, etc. will lead to a variety of issues:

  • Missed expectations
  • Test escapes – Customer Reported Defects
  • Poor reflection on the professionalism of the Test Engineering team

Testers must avoid ambiguity in the documentation that they create so as to not confuse others.

Data! Data! Data!

Test teams ‘live and breathe’ data. They consume data and they create data.

Data provided from other teams is used to make intelligent decisions:

  • Requirements
  • Specifications
  • Schemas
  • Schedules
  • Etc

Data generated by the test program is used to assist with making decisions on the quality of the product:

  • Requirements coverage
  • Testing progress
  • Defect status
  • Defect arrival / closure rates
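These metrics are straightforward to derive once defect records carry open and close dates. A minimal sketch in Python, with purely illustrative defect records:

```python
from collections import Counter
from datetime import date

# Hypothetical defect records for illustration: (id, opened, closed-or-None).
defects = [
    ("D-1", date(2024, 1, 3), date(2024, 1, 10)),
    ("D-2", date(2024, 1, 5), None),
    ("D-3", date(2024, 1, 9), date(2024, 1, 9)),
]

def weekly_rates(defects):
    """Count defect arrivals and closures per ISO week number."""
    arrivals, closures = Counter(), Counter()
    for _, opened, closed in defects:
        arrivals[opened.isocalendar()[1]] += 1
        if closed is not None:
            closures[closed.isocalendar()[1]] += 1
    return arrivals, closures

arrivals, closures = weekly_rates(defects)
# Currently open defects = total arrivals minus total closures.
open_count = sum(arrivals.values()) - sum(closures.values())
```

Comparing the arrival rate against the closure rate over time is a quick indicator of whether the defect backlog is growing or shrinking.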

The fidelity and timeliness of the data collected
is critical to the success of the entire
test program.

Trust Facts – Question Assumptions

Related to the principle of avoiding ambiguity, test teams must never make assumptions, as doing so can have a significant impact on the entire business.

Testers must:

  • Work with the cross-functional team to address issues with requirements, user stories, etc
  • Clarify schedules / expectations when in doubt
  • Leverage test documentation (e.g. Test Plan) to articulate and set expectations with respect to the test program
  • Track / manage outstanding issues until they are resolved

Be as ‘surgical’ as necessary to ensure quality
issues are not propagated to later phases of
the product life-cycle

Think Innovation

Regardless of the role you play, every member of the test team can make a difference.

  • Improvement ideas should be socialized, shared and investigated
  • Small changes can make a huge difference to the team and the organization

Innovations that can benefit the Test or Quality Assurance Program are always welcome:

  • Tweaks to processes, templates, workflows
  • Enhancements to tools
  • Advancements in automation techniques, tools, etc

Remember, the team is always looking for ways to increase effectiveness and make the most out of the limited Test Engineering budget.

Strive to be “Solution Oriented”

Process for Structure – Not Restrictions

Some will ask, “What do you mean, processes do not restrict?” On the surface it may appear as if process does in fact restrict the team; however, if you dig deeper you will discover that documented processes help by:

  • Improving communications through establishing consistency between deliverables and interactions between teams
  • Making it clear to all ‘stakeholders’ what to expect at any given point of time in the product life-cycle
  • Providing tools that can be used to train new members of the team

Documented processes are not intended to limit
creativity. If the process is not working –
Change the Process

  • Augment existing templates if it will enhance the value of the testing program; however, be sure to follow appropriate Change Management processes when introducing an update that may impact large numbers of people.
  • Document and obtain approvals for deviations/exceptions if the value of completing certain aspects of the process has been assessed as non-essential for a program / project.

Plan Wisely

A well thought out and documented plan is worth its weight in gold. The documented plan is the primary tool used to set expectations with all the stakeholders.

“If you fail to plan you plan to fail”

Plan as if the money you are spending is your own. There is a limited budget for testing and it is your responsibility to ensure the effectiveness of the Test Program such that it provides the highest ROI (Return on Investment).

Identify Priorities

Put “First Things First” (Stephen Covey)

Unless you are absolutely clear on the priorities, it will not be possible to effectively plan and/or execute a successful Test Program.

It is not possible for an individual, or team, to have two number one priorities. Although it is possible to make progress on multiple initiatives, it is not possible for an individual to complete multiple initiatives at the exact same time. Schedules, milestones, capacity plans, etc. should all reflect the priorities.

Always ensure priorities are in alignment with
the expectations of all stakeholders

At the end of the day the most important Software Test Principle is “If you do not know – ASK”. Testers are expected to ask questions until they are confident that they have the information needed to effectively plan, prepare and execute an effective Test Program.

Just remember, unanswered questions contribute to ambiguity and add risk to the business.

Posted in All, Business of Testing, Planning for Quality | Comments Off on Software Testing Guiding Principles

Testing COTS Systems? Make Evaluation Count

Over the years, I have been involved in a number of projects testing COTS (Commercial-Off-The-Shelf) systems across a range of industries. Sometimes the project was with the vendor and sometimes with the customer. When it came to supporting a company undertaking a COTS system implementation, I always appreciated the benefits that came with a “quality” evaluation.

When such an evaluation is conducted in a thoughtful manner, a lot of ramp-up, preparation, AND testing can be shifted to the left (Ref: New Project? Shift-Left to Start Right!) making the overall selection process that much more likely to find the “best-fit” COTS system.

Implementing COTS Systems Costly; Mitigate Your Risks

COTS systems are a common consideration for most enterprise organizations when planning their IT strategy around ERP, CMS, CRM, HRIS, BI, etc. Rarely will an organization build such a substantial software system from scratch if there is a viable alternative.

However, unlike software products that we can just install and start using right out-of-the-box, these COTS systems must typically undergo configuration, customization and/or extension before they will meet the full business needs of the end-user. This can get expensive.

As such, implementation necessarily requires a strong business case to justify the level of investment involved. Anything that impairs the selection and implementation of the best-fit COTS system will put that business case at risk.

Earlier involvement of testing can be key to mitigating risk to the business case with respect to the following challenges.

A COTS System is a Very Dark “Black Box”

Having to treat an application as complex as the typical COTS system like a black box is a significant challenge.

When we conduct black box testing for a system that we have built in-house, we have requirements, insights to the architecture and design, and access to the developers’ knowledge of their code. We can get input as to what are the risky areas, and where there is tighter coupling or business logic complexity. We can even ask for testability improvements.

When we are testing COTS systems, we don’t have any of that. The only requirements are the user manuals, the insights come from tidbits gleaned from the vendor and their trainers, and we don’t have access to the developers or even experienced users. It is a much darker black box that conceals significant risk.

Fig 1: Testing COTS Systems – A Black Box in the Application Ecosystem

Additionally, not all the testing can be done by manually poking around in the GUI. Testing COTS systems involves a great amount of testing how the COTS system communicates with other systems and data sources via its interfaces.

Also, consider the data required. As Virginia Reynolds comments in Managing COTS Test Efforts, In Three Parts, when testing COTS systems “it’s all-data, all the time.” In addition to using data as part of functional and non-functional testing, specific testing of data migration, flow, integrity, and security is critical.
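To illustrate the data-integrity side, here is a minimal, hedged sketch in Python of a source-versus-target comparison for a data migration; the row layout and key column are assumptions for illustration only:

```python
import hashlib

def row_fingerprint(row):
    """Stable checksum of a row's values for source/target comparison."""
    joined = "|".join(str(v) for v in row)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def compare_tables(source_rows, target_rows, key_index=0):
    """Return keys missing from the target and keys whose data changed."""
    src = {row[key_index]: row_fingerprint(row) for row in source_rows}
    tgt = {row[key_index]: row_fingerprint(row) for row in target_rows}
    missing = sorted(k for k in src if k not in tgt)
    changed = sorted(k for k in src if k in tgt and src[k] != tgt[k])
    return missing, changed

# Illustrative rows: (key, name, status).
source = [(1, "Ada", "active"), (2, "Grace", "active"), (3, "Edsger", "inactive")]
target = [(1, "Ada", "active"), (2, "Grace", "ACTIVE")]

missing, changed = compare_tables(source, target)
```

A real migration check would pull rows from both databases, but the pattern of fingerprinting rows and diffing by key is the same.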

Leaving the majority of testing such a system until late in the implementation process and, possibly, primarily as part of user acceptance by business users, will be very risky to the organization.

Claims Should Be Verified

When we create a piece of software in-house or even if we contract another party to write it for us, we control the code. We can change it, update it, and contract a different 3rd party to extend it if and when we feel like it. With COTS systems, the vendor owns the code and they are always actively working on it. They are continually upgrading and enhancing the software.

As we know from our own testing efforts, there isn’t time to test everything, or to fix everything. That means, the vendor will have made choices and trade-offs with respect to the features and the quality of the system they are selling to us, and all their customers.

Of course, it is reasonable to expect that the vendor will test their core functionality, or the “vanilla” configuration of their system. They would not remain in business long if they did not. But, to depend on the assumption that what the vendor considers to be “quality” is the same as what we consider to be “quality”, is asking for trouble.

“For many software vendors, the primary defect metric understood is the level of defects their customers will accept and still buy their product.” Randall Rice, Testing COTS-Based Applications

Even if we trust the vendor and their claims, remember they are not testing in our specific context, eg: meeting our functional and quality requirements when the COTS system is configured to our specific business processes and integrated with our application ecosystem. (Ref: To Test or Not to Test?)

Vanilla is Not the Flavour of Your Business

The vendor of the COTS system is not making their product for us, at least not just for us. They are making their system for the market/industry that our business is a part of.

As each customer has their own specific way of doing business, it is very unlikely that we would take a COTS system and implement it straight out-of-the-box in its “vanilla” configuration. And though we may be “in the industry” that the COTS system is intended to provide a solution for, there will always need to be some tweaking and some gluing.

The COTS system will need to be configured, customized and/or extended before it is ready to be used by the business. And, because of the lack of insight and experience with the system, the impact of any such changes will not be well understood – a risk to implementation.

COTS Systems Must “Play Nice”

Testing COTS systems comes in two major pieces: testing the configured COTS system itself, and testing the COTS system together with its upstream and downstream applications.

Many of the business’ work processes will span multiple applications and we need to look for overall system level incompatibilities and competing demands on system resources. Issues related to reliability, performance, and security can often go unnoticed until the overall system is integrated together.

And when there is an issue, it can be very difficult to isolate the source of the error if the problem results from the interaction of two or more applications. The difficulty in isolating any issues is further complicated when the applications involved are COTS systems (black boxes) from different vendors.

“Finding the actual source of the failure – or even re-creating the failure – can be quite complex and time-consuming, especially when the COTS system involves products from multiple vendors.” – Richard Bechtold, Efficient and Effective Testing of Multiple COTS-Intensive Systems

We need to have familiarity with the base COTS system in order to be able to isolate these sorts of integration issues more effectively, and especially to be able to confidently identify where the responsibility lies.

Testing COTS Systems during Evaluation

If there has been an honest effort to “do it right”, then a formal selection process will take place prior to implementation, one that goes beyond reading the different vendors’ websites and sales brochures. And in this case, testing can be involved earlier in the process.

Consider the three big blocks of a COTS deployment: Selection, Implementation, and Maintenance. The implementation phase is traditionally where all the action is, especially from the testing point of view.

But, we don’t want to be struggling in implementation with issues related to the challenges described above. We need to explore the COTS system’s functionality and its limits in the aspects of quality that are important to us before that point. Why find out about usability, performance, security model, and data model issues after selection? After all, moving release dates is usually quite costly.

“The quality of the software that is delivered for a COTS product depends on the supplier’s view of quality. For many vendors, the competition for rushing a new version to market is more important than delivering a high level of software reliability, usability, and other qualities.” – Judith A. Clapp, Audrey E. Taub, A Management Guide to Software Maintenance in COTS-Based Systems

If we get testing started early, we can be ramping up on this large, complex software system, reviewing requirements, documenting our important test cases, finding bugs and other issues, determining test environment and data needs, and identifying upstream and downstream application dependencies all before the big decision is made, thereby informing that decision while responsibly preparing for the inevitable implementation.

To realize these and other benefits, we can leverage testing and shift efforts to the left, away from the final deadline. We make testing an integral part of decision-making during evaluation.

Fig 2: Testing COTS Systems – Major Deployment Stages

We want to choose the right solution the first time with no big surprises after making that choice. This early involvement of testing, done efficiently, can help our implementation go that much more smoothly.

Multiple Streams of Evaluation Testing

When designing a new software system, there are many considerations around what it needs to do and what are the important quality characteristics. This is no different with a COTS system, except that it is already built. That functionality and those quality characteristics are already embedded in the system.

It would be great if there was a system that perfectly fit our needs right out-of-the-box, functionally and quality-wise. But that won’t be the case. The software was not built for us. There will be things about it that fit and don’t fit, things that we like and don’t like, and things that will be missing. This applies to our fit with the vendor as well.

Our evaluation must take the list of candidates that passed the non-technical screening and rapidly get to the point where we can say: “Yes, this is the best choice for us. This is the one we want to put effort into making work.”

In order to do that, we will need to:

  • Confirm vendor claims in terms of functionality, interfaces for up/down stream applications and DW/BI systems, configurability, compatibility, reporting, etc
  • Confirm suitability of the data model, the security model, and data security
  • Confirm compatibility with the overall system environment and dependent applications
  • Investigate the limits of quality in terms of the quality characteristics that are key to our business and users (eg: reliability, usability, performance, etc.)
  • Uncover bugs, undocumented features, and other issues in areas of the system that are business critical, popular/frequently used, and/or have complex/involved business processes

The evaluation will also need to include more than just the COTS system. The vendor should be evaluated on such things as organizational maturity, financial stability, customer service/support, quality of training/documentation, etc.

To do all of this efficiently, we can organize our evaluation testing into four streams of activity that we can execute in parallel, giving us a COTS selection process that can be illustrated at the high-level as follows:

Fig 3: Testing COTS Systems – Evaluation Testing in Parallel

As adapted from Timing the Testing of COTS Software Products, the streams of evaluation testing would focus on the following:

  • Functional Testing: the COTS systems are tested in isolation to learn and confirm the functional capabilities being provided by each candidate
  • Interoperability Testing: the COTS systems are tested to determine which candidate will best be able to co-exist in the overall application ecosystem
  • Non-Functional Testing: the COTS systems are tested to provide a quantitative assessment of the degree to which each candidate meets our requirements around the aspects of quality that are important to us
  • Management Evaluation: the COTS systems are evaluated on their less tangible aspects including such things as training, costs, vendor capability, etc.

Caveat: We don’t want to test each system to the same extent. We want to eliminate candidate COTS systems as rapidly as possible.

Rapidly Narrowing the Field

In order to eliminate candidate COTS systems as rapidly and efficiently as possible, we need a progressive filtering approach to applying the selection criteria. This approach will also ensure that the effort put into evaluating the candidate COTS systems is minimized overall.
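One way to sketch such progressive filtering is as a weighted scoring pass over the remaining candidates. The criteria, weights, and scores below are purely illustrative, not a prescribed set:

```python
# Hypothetical weighted criteria mirroring the four evaluation streams.
weights = {"functional": 0.4, "interoperability": 0.3,
           "non_functional": 0.2, "management": 0.1}

# Illustrative scores (0-10) for three candidate COTS systems.
candidates = {
    "System A": {"functional": 8, "interoperability": 6,
                 "non_functional": 7, "management": 9},
    "System B": {"functional": 9, "interoperability": 8,
                 "non_functional": 8, "management": 7},
    "System C": {"functional": 5, "interoperability": 9,
                 "non_functional": 6, "management": 8},
}

def weighted_score(scores):
    """Combine per-criterion scores into one weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

def filter_candidates(candidates, keep):
    """Rank by weighted score and keep only the top `keep` for deeper testing."""
    ranked = sorted(candidates,
                    key=lambda name: weighted_score(candidates[name]),
                    reverse=True)
    return ranked[:keep]

shortlist = filter_candidates(candidates, keep=2)
```

Each evaluation phase would re-run a pass like this with more detailed (and more expensive) test results feeding the scores, shrinking the field before the next round of effort.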

Additionally, the requirements gathering and detailing can be conducted in a just-in-time (JIT) manner over the course of the entire selection phase rather than as a big bang effort at the beginning of implementation.

As an example, we could organize this progressive filtering approach into three phases or levels:

Fig 4: Testing COTS Systems – Progressively Filtering Candidates

Testing would scale up over the course of the three phases of evaluation, increasing in coverage, complexity, and formality as the number of systems being evaluated reduces.

The best-fit COTS system will be more confidently identified, and a number of important benefits generated, in the course of this process.

Testing with Benefits

With our efficient approach to involving testing during evaluation, we will not only be able to rapidly select the best option for the specific context of our company, but we will also be able to leverage the following additional benefits from our investment, as we move forward into implementation:

  • Requirements Captured: Requirements have been captured from the business and architecture, reviewed, and tested against
  • Stronger Fit-Gap Analysis: Missing functionality has been identified for inputting to implementation planning
  • Test Team Trained: The test team is trained up on the chosen COTS system and has practical experience testing it
  • Quality Baseline Established: Base aspects of the COTS system have already been tested, establishing a quality baseline
  • Development Prototypes Tested: Prototypes of “glue” code to interact with the interfaces and/or simulate other applications and ETL scripts for data migration have been developed, and have been tested
  • Test Artifacts Created: Reusable test artifacts, including test data, automated test drivers, and automated data loaders are retained for implementation testing
  • Test Infrastructure Identified: Needs around tools, infrastructure and data for testing have been enumerated for inputting to implementation planning
  • Bug Fixing: Bugs, undocumented features, and other issues related to the COTS system have been found and raised to the vendor prior to signing on the dotted line


In addition to uncovering issues early, involving testing during evaluation will establish a baseline of expected functional capability and overall quality before any customization and integration. This will be of great help when trying to isolate issues that come up in implementation.

“Vendors are much more likely to address customer concerns with missing or incomplete functionality as well as bugs in the software before they sign on the dotted line.” – Arlene Minkiewicz, 6 Steps to a Successful COTS Implementation

Most important of all, after this testing during evaluation, the implementation project can more reasonably be considered an enhancement of an existing system that we are now already familiar with. Therefore, we can more confidently focus our testing during implementation on where changes are made when configuring, customizing, extending, and integrating the COTS system, mitigating the risks associated specifically with those changes, while having confidence that the larger system has already been evaluated from a quality point of view.

With fewer surprises and problems during implementation, we should end up having to do less testing overall.

“The success of the entire development depends on an accurate understanding of the capabilities and limitations of the individual COTS. This dependency can be quantified by implementing a test suite that uncovers interoperability problems, as well as highlighting individual characteristics. These tests represent a formal evaluation of the COTS capabilities and, when considered within the overall system context can represent a major portion of subsystem testing.” – John C. Dean, Timing the Testing of COTS Software Products

With an approach such as this, we should be able to reduce candidate COTS system options faster, achieve a closer match to our needs, know earlier about fit-gaps and risks, capture our requirements more timely and completely, and spread out the demands on testing resources and environments – all of which should help us achieve a faster deployment and a more successful project.

Choose your COTS system wisely and you’ll save time and money… Make your evaluation count.

Posted in All, Planning for Quality, Risk & Testing, Test Planning & Strategy | Comments Off on Testing COTS Systems? Make Evaluation Count

Stop Testing – Start Thinking

Throughout my career I have observed numerous organizations all looking for the ‘silver bullet’ to solve all their product quality problems.

News Flash: There is no ‘silver bullet’.  Solving product quality problems can only begin when organizations “Stop Testing and Start Thinking”.

Stop Testing - Start Thinking

Do not get me wrong, testing is an essential part of all product development projects; however, teams that fail to think through their testing needs are destined to fail by delivering ‘buggy’ products that do not meet the needs of the consumer and ultimately have an adverse impact on the organization’s revenue potential.

Teams must know who will do the testing, what testing is required, when to test, where to test (environment) and how to test.

So what is the answer?  Is the solution to blindly mimic what has worked for another organization?

Generally speaking, the answer is not that simple.  In reality, a solution that works for one organization should not be adopted without first understanding more about the people, process and tools ‘recipe’ that was used and how it helped address the organization’s specific product quality problems.

The following areas are where common mistakes are made by many organizations.


Process

Uncertain about the testing methodology to adopt, organizations latch onto the hottest trend without understanding what problems need to be addressed and how the choices they’ve made contribute to solving those problems.  Perhaps the only thing worse than this is when the team is not aligned on how to address the product verification & validation challenges.

Examples of some common mistakes:

  1. No understanding of how to do testing for Agile projects
  2. Believing TDD (Test Driven Development) solves all testing needs
  3. Unaware of the various types of system testing requirements

Anarchy rules in the absence of a process that is understood and in use by the entire organization.


Tools

Selecting tools before understanding the needs of the team, how the tools will improve the team’s effectiveness, or how well they map to the organization’s testing processes.  Tools that do not integrate well with others will adversely impact the team’s ability to quickly assess and address quality problems.

Examples of some common mistakes:

  1. Ineffective tools selection / deployment process contributing to increased costs, project delays and no real return on investment
  2.  Selecting the wrong technology for test automation and / or automating tests too early

The best tools are not always the most expensive tools, but those that satisfy the needs of the cross-functional team.


People

Failing to enable skilled teams by providing them with the process and tools required for them to be effective, and failing to invest in the ongoing skills development and training of team members. Ongoing training is important to motivate and retain resources and to optimize the effectiveness of the team.

Examples of some common mistakes:

  1. Expecting resources to be highly efficient despite being asked to use tools inappropriate for the job and to follow an ineffective process
  2. No time allocated for professional development, resulting in team members’ skills becoming outdated and resource retention issues

Rust, rot and erosion will develop where care and maintenance are ignored.

Bottom line is that teams need to “Start Thinking” before attacking any product quality problem.  Time deploying effective solutions to enable your team will significantly improve the success of the organization and reduce the need to “Stop Testing” in the future.

Posted in All, Planning for Quality | Comments Off on Stop Testing – Start Thinking

Uncovering High Value Defects

Methods of uncovering defects have for the most part stayed the same even with great advancements in process and development tools. One thing that has not stayed the same is the amount of time we have to uncover these defects. With this time constraint how can we uncover the high value defects which could be costly to our organizations? What shift in test technique do we need in order to tackle this time constraint and not fail fast in a horrible way?

A Quality Foundation

In order to detect high value defects, we cannot have software that is full of low value, trivial defects. When we do not have a quality foundation, or a reasonable level of quality, before testing begins, the following occurs:

  • Testers stop testing to log or inform a developer of a trivial defect they have uncovered. (Testers need to be testing to uncover high value defects.)
  • Developers stop developing in order to learn about trivial defects.
  • If a decision to fix a trivial defect goes forward, the developer is often out of the context of that work by then. It will take them more time to re-learn or regain context in order to apply a fix.
  • Trivial fixes can cause more defects.
  • If you have a quality process in place, there is a cost after each trivial fix is made: Continuous Integration build and test jobs, along with developer code reviews, take time.
  • Finally, and most importantly, because you are spending so much time uncovering and fixing trivial issues, you can never reach the deeper high value defects.

Building a Quality Foundation

In order to avoid the negative points outlined above, we must ensure a baseline of quality is always maintained. Again, without this we will be lost uncovering, triaging, and fixing low value defects, unable to expose the defects that are the most costly. We can build a quality foundation using the following techniques:

Automation Tools (Checks)

  • Automation is a great way to maintain a consistent level of quality throughout the development cycle. Build on this foundation as your developers develop; with new features, add more coverage.
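As a minimal sketch of such an automated check suite, assuming a hypothetical slugify() utility under test (a real suite would run in CI on every commit):

```python
import re

def slugify(title):
    """The function under test: lower-case, hyphen-separated slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def run_checks():
    """Each tuple is (input, expected); new features add new rows.
    Returns the cases that failed, so an empty list means all checks pass."""
    cases = [
        ("Hello World", "hello-world"),
        ("  Spaces  ", "spaces"),
        ("Already-slugged", "already-slugged"),
    ]
    return [(t, slugify(t), want) for t, want in cases if slugify(t) != want]

failures = run_checks()
```

The point is not the specific function but the shape: a fast, repeatable table of checks that grows alongside the feature set and keeps the baseline from eroding.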

Manual Test Review

  • Code reviews are standard practice on most development teams. Taking this concept a step further, why not provide a test review? This can be a small manual test check of a feature before the code is checked in for further in-depth testing. Note: not all development changes require this manual check, but if you find you are having a lot of trivial findings, you may want to try this on your team.

It’s worth highlighting that automation tools are well suited for creating a quality foundation; however, many of the high value defects we wish to flush out will not, in my experience, be uncovered by automation alone. This is because automation tools check/verify software; they do not test software. Testing software requires a human to think; it is not simply checking that the correct screen appears after tapping a button.

Use automation tools for what they do best: ensuring a baseline quality foundation, continuously and at high speed. Don’t expect automation tools to think and therefore to find high value defects.

Gain Context

Now that our quality foundation is set, what knowledge do we need in order to maximize our ability to uncover high value defects? In order to make our testing more valuable we need to gain context about the software we are going to test. The following activities can help you gain context:

  • Understand the Feature – This seems trivial, but have an understanding of why a feature is being added to your software. Also understand what type of user will use this feature; this can help you understand how the feature should be properly exposed in your software. High value defects are not always crashes: a poorly implemented feature is also a high value defect/problem. These findings also expose opportunities to make features work in simpler/better ways. It’s worth noting that understanding a feature should start as early as possible, ideally when user stories are being created.
  • Development Tours – When a developer finishes implementing a feature or bug fix, the tester can pair up with them to get a tour of it. These tours can help testers gain key insights into how the feature was implemented: what problem areas there are, and what other areas of the code needed changing to implement the feature.
  • User Feedback – No matter how well you think you have implemented and tested features, you won’t get it 100% right. If you have access to user feedback, you should make it a habit to check this feedback every day. Gaining a deeper understanding of pain points in your software from a user’s perspective can help you when testing future features.
  • Production Logs – Similar to user feedback, reviewing crash logs from production can help you understand what areas of your software are buggy. When testing, you might take more time in these error-prone areas. The entire development team should know about these areas as well; as a tester, you should share this information.
  • Competitive Analysis – Understand your competitors’ strengths and faults. Don’t repeat mistakes they have already made when implementing features.
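As a small illustration of the Production Logs idea, a few lines of Python can surface crash hotspots; the log format and screen names below are invented for the example:

```python
from collections import Counter
import re

# Hypothetical production crash-log lines; real logs would come from a file.
log_lines = [
    "2024-05-01 CRASH NullPointerException in SearchResultsActivity",
    "2024-05-01 CRASH OutOfMemoryError in PhotoGalleryActivity",
    "2024-05-02 CRASH NullPointerException in SearchResultsActivity",
    "2024-05-03 CRASH NullPointerException in SavedListingsActivity",
]

def hotspots(lines):
    """Count crashes per screen/component so testers know where to dig."""
    counts = Counter()
    for line in lines:
        match = re.search(r"in (\w+)$", line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common()

top = hotspots(log_lines)
```

Even a rough tally like this tells the team which screens deserve extra testing time, and it is cheap enough to re-run daily.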

Pre-test Plan

OK, in no way am I suggesting you drop everything and create a large test plan; my experience tells me that practice is mostly a waste of time. What I am suggesting is spending 5 minutes figuring out the following:

  • What states can the software be in when interacting with this new feature?
  • What inputs can be used to exercise this new feature?
  • How usable/accessible is this feature in our software?
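A pre-test plan this small can even be sketched as data, for example a state-by-input matrix; the state and input names below are illustrative only:

```python
from itertools import product

# A five-minute pre-test plan as data: states the app can be in,
# and inputs that exercise the new feature (names are made up).
states = ["fresh_install", "logged_in", "offline"]
inputs = ["empty_query", "typical_query", "very_long_query"]

# Each (state, input) pair is one test idea to try against the feature.
test_ideas = list(product(states, inputs))
```

Nine test ideas from six bullet points; the matrix isn't a script to follow, just a frame so your exploratory testing starts with coverage in mind.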

Think about the testing you will perform. Diving into testing without first thinking it through can be a bit of a blind strategy. An experienced tester will still find defects without this approach, but I find it helps frame my testing.


Your quality foundation is set, you have gained context around what you will be testing, and you have a rough idea of how you will approach your testing. You are now ready to test and are in a position to flush out high value defects.

A lot of what is written in this article is already done by great testers in our industry. I wrote this article in an attempt to understand what I do in order to find defects. I believe the exercise of understanding what makes you a great tester is a worthwhile one. So when you have time, go through this same exercise yourself and you may just uncover some great ideas around testing. Please share those ideas.

Now go uncover high value defects!

Posted in  All, Planning for Quality, Test Planning & Strategy | Comments Off on Uncovering High Value Defects

Maximizing the Value of Test Automation

High quality software delivered to market quickly has always been the goal of Agile teams, and test automation is a common means to that end. However, simply implementing test automation doesn’t always get you there. Over the past year, the Android development team at Move Inc. has refined its test automation to deliver a high quality realtor.com app at high speed. Through this process we identified four key areas we needed to address: reliability, ownership (who owns the test automation?), priority (how is test automation work prioritized?), and execution point (at what point in your process are tests run?). By addressing these four areas we were able to unlock the full value of our test automation.


Reliability

Reliability in test automation is important to accurately and consistently measure the quality of software. If a test passes the first time it’s run but fails the second time when the app being tested has not changed, how should we interpret these results? Many factors can get in the way of reliability, including synchronization issues, unreliable test fixtures (data), and even some overlap with ownership.

Synchronization issues occur when the speed at which your software runs is not always consistent. As a result, when a test attempts to perform a UI action, such as a button tap, the app may not have finished rendering yet. If your tests rely on live data sets, this can also create reliability problems, as the data might not always be easy to retrieve from a large backend system. Finally, while not directly tied to reliability, ownership does factor into the maintenance and upkeep of tests.

Our team has worked to address various reliability issues. First, we switched our test framework from Calabash to Espresso because Espresso has built-in handling for synchronization issues: tests only continue when the app is in a state in which they can successfully proceed. We found handling synchronization issues with Calabash possible, but it ultimately increased test time by forcing long fixed waits into the tests. Without these long waits, we could not guarantee the tests would not fail unexpectedly; with them, running through approximately 110 tests took in excess of two hours.
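The cost difference between the two waiting styles can be sketched in plain Java (this is an illustration of the idea, not actual Calabash or Espresso code): a fixed sleep always pays its full duration, while condition polling proceeds the moment the app reports it is ready.

```java
import java.util.function.BooleanSupplier;

public class WaitStrategies {

    // Fixed wait: always pays the full cost, whether the app is ready or not.
    static void fixedWait(long millis) throws InterruptedException {
        Thread.sleep(millis);
    }

    // Condition polling: proceeds as soon as the readiness check succeeds,
    // exhausting the timeout only when the app genuinely never becomes ready.
    static boolean waitUntil(BooleanSupplier ready, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (ready.getAsBoolean()) return true;
            Thread.sleep(25);  // small poll interval
        }
        return ready.getAsBoolean();  // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        boolean ok = waitUntil(() -> true, 5_000);  // the "app" is already idle
        long elapsed = System.currentTimeMillis() - start;
        // Despite the generous 5s budget, almost no time was spent waiting.
        System.out.println(ok && elapsed < 1_000);
    }
}
```

Multiply the difference between a worst-case fixed wait and an immediate return across 110 tests and the two-hour suite becomes easy to explain.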

Espresso out of the box will pause your test execution if the UI thread is busy and then proceed immediately when the app is ready.  Espresso also allows you to directly launch into a specific screen (Activity) under test. This results in significantly reduced test time as not all tests need to traverse multiple screens before performing a test. The same tests which took two or more hours to run now execute in around 20 minutes. Espresso has enabled us to spend more time implementing new tests and less time dealing with synchronization issues. We also moved our test automation project directly into the app project. This allows our tests to directly reference resources in the app. Tests no longer break when a developer refactors a UI resource because both the app and the test are updated. It should be noted that the Espresso framework can only be used for testing native Android apps.

Finally, we combat intermittent failures by measuring when tests are reliable. We no longer add tests directly into our primary test suite before they prove themselves; our team created a Test Warden service that is responsible for tracking the health of all our tests. We got this idea from Roy Williams’ 2014 Google Test Automation Conference presentation, “Never Send a Human to do a Machine’s Job – How Facebook uses bots to manage tests”. Each time a test is executed, it reports whether it passed or failed. Only after a test passes 50 consecutive times do we trust it enough to accurately measure the quality of the software under test and move it into the primary test suite. Consider it a probationary period for new tests.
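The probation bookkeeping at the heart of such a service is simple. Here is a minimal sketch of the idea (class and method names are my own, not the real Test Warden API): every execution reports a result, any failure resets the streak, and a test graduates only after the required number of consecutive passes.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a "Test Warden" style probation tracker for new tests.
public class TestWarden {
    private final int requiredPasses;
    private final Map<String, Integer> consecutivePasses = new HashMap<>();
    private final Set<String> primarySuite = new HashSet<>();

    public TestWarden(int requiredPasses) {
        this.requiredPasses = requiredPasses;
    }

    // Called after every execution of a probationary test.
    public void report(String testName, boolean passed) {
        if (!passed) {
            consecutivePasses.put(testName, 0);  // any failure resets the streak
            return;
        }
        int streak = consecutivePasses.merge(testName, 1, Integer::sum);
        if (streak >= requiredPasses) {
            primarySuite.add(testName);          // promoted: trusted to gate quality
        }
    }

    public boolean isInPrimarySuite(String testName) {
        return primarySuite.contains(testName);
    }

    public static void main(String[] args) {
        TestWarden warden = new TestWarden(50);
        for (int i = 0; i < 49; i++) warden.report("searchTest", true);
        System.out.println(warden.isInPrimarySuite("searchTest")); // still on probation
        warden.report("searchTest", true);  // 50th consecutive pass
        System.out.println(warden.isInPrimarySuite("searchTest")); // promoted
    }
}
```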

The second area our team needed to address was accessing test data quickly. At Move Inc. we have access to tons of test data in the form of homes (listings). We prefer to use real data because it flushes out potential issues in our app and in the underlying API layers; the problem with real test data is how to access it. Initially we used SQL queries, but they took a very long time to retrieve the data and sometimes found no test data at all. To fix this, the team created a dedicated test data service called Graffiti. Passing it tags (keywords), such as “for_sale + has_photos”, returns a test listing which is both for sale and has photos. This service is lightning fast at retrieving test data and helped immensely with increasing test speed.
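The tag-matching behaviour described above can be sketched as follows. Everything here is hypothetical and illustrative (the real Graffiti is a backend service, and its API and data model are not public): a query is split on "+", and the first listing carrying every requested tag is returned.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Illustrative sketch of tag-based test data lookup, Graffiti style.
public class GraffitiSketch {

    // A test listing and the searchable tags attached to it.
    record Listing(String id, Set<String> tags) {}

    // "for_sale + has_photos" -> first listing carrying every requested tag.
    static Optional<Listing> find(List<Listing> listings, String tagQuery) {
        List<String> wanted = Arrays.stream(tagQuery.split("\\+"))
                                    .map(String::trim)
                                    .toList();
        return listings.stream()
                       .filter(l -> l.tags().containsAll(wanted))
                       .findFirst();
    }

    public static void main(String[] args) {
        List<Listing> listings = List.of(
            new Listing("L1", Set.of("for_sale")),
            new Listing("L2", Set.of("for_sale", "has_photos")));
        // Only L2 satisfies both tags.
        System.out.println(find(listings, "for_sale + has_photos").get().id());
    }
}
```

An indexed lookup keyed on tags like this is why such a service can answer in milliseconds where an ad-hoc SQL query over a large backend crawled.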


Ownership

Ownership of test automation is also very important. Who will be responsible for implementing and maintaining the tests, and for reporting the issues automation finds? Initially the QA team, myself included, owned the entire process: creating, running, and maintaining the tests and reporting the results. A number of issues arose from this arrangement. The first was knowledge sharing: the developers were not involved at all with test automation and thus had no idea what was covered or how the tests worked, which made it extremely challenging for them to fix broken tests or interpret results. Another problem was reporting and visibility. QA would be the only ones to bring failed tests to the developers’ attention, creating an unnecessary bottleneck in the flow of information.

Ownership is now shared between QA and the developers on our team, and both groups benefit from the partnership. QA gets access to developers to improve the way our automation framework is coded; after all, test automation is essentially a development effort. Developers gain insight into how QA tests specific features in our app, and both groups now have a better view of what the tests cover and how they work. Any new feature, by definition, needs automation around it to be considered done, and developers are now jointly responsible for that effort. Automating already-existing features remains QA’s responsibility, as are reporting and the general health of the suite. Maintenance of existing known-good tests is now the responsibility of the developer who broke the test or encountered a test problem. This makes sense because our tests are now reliable, so any failure directly identifies a problem a developer has introduced.


Priority

Next up, we have the priority of the automation effort. Is your team’s automation a top priority, or does it run on the side? We can’t expect automation to bring full value without prioritizing the effort. Automation on our team originally ran on the side: new automation work would often be de-prioritized in favour of new features in our app, and the QA group would try their best to maintain existing tests as well as create new ones, keeping in mind there was still manual test work to be done. It seems strange in hindsight: we had automation, but because it wasn’t properly prioritized it didn’t bring the full value it could. Our test automation is now high priority, and any new feature must have automation around it to be considered done. We leverage our top developers when we need to tackle difficult test framework issues, and our test code now lives inside our app project. Finally, we learned to run this effort like a full-blown product, with a separate backlog that prioritizes automation work. If your team is not prioritizing its automation effort, I wonder how much value you are getting out of it.

Execution Point


Finally, we come to what I believe was the most impactful change we made to our automated process: the execution point. Originally, we triggered our tests after a merge occurred. If you think about it, that really is not the most valuable point to run automation: if code is merged into a branch before its quality is verified, you are not allowing your existing test automation to bring full value. We now test throughout the sprint, and automation is leveraged as a gating factor in our development process. The developers on our team create a GitHub pull request containing a small feature in isolation. As soon as a pull request enters our system, our automated build and test jobs are executed, and if the smoke tests fail, the developer cannot merge their work into the base branch. While this is logical, at first we found ourselves not following the process; it did not stick until it was enforced.

It is important to highlight all the great things that occurred after we changed our execution point:

  • Developers had to fix broken tests to get their code merged.
  • Potentially bad code was not allowed to enter the base branch.
  • Increased communication between team members.
  • Found bugs early. Tests can’t improve quality when they are executed downstream! It’s too late!
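The enforced gate itself reduces to a simple predicate, sketched below (all names are hypothetical, not our actual CI code): a pull request may merge only when every smoke test triggered for it has run and passed.

```java
import java.util.Map;

// Sketch of a pull-request merge gate driven by smoke test results.
public class MergeGate {

    // testName -> passed. An empty result set means the tests never ran,
    // which must also block the merge.
    static boolean canMerge(Map<String, Boolean> smokeResults) {
        return !smokeResults.isEmpty()
            && smokeResults.values().stream().allMatch(Boolean::booleanValue);
    }

    public static void main(String[] args) {
        System.out.println(canMerge(Map.of("launch", true, "search", true)));
        System.out.println(canMerge(Map.of("launch", true, "search", false)));
        System.out.println(canMerge(Map.of()));  // no results: gate stays closed
    }
}
```

In practice a hosted platform's required status checks on the base branch play this role; the point is that the rule is mechanical and not subject to "just this once" exceptions.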

If you have test automation in place and want to unleash its full potential, try revisiting the key areas we did. With these changes in place, our team can now focus on adding more tests, ultimately increasing quality and speed to market.

Posted in  All, Agile Testing, Automation & Tools, Planning for Quality | Comments Off on Maximizing the Value of Test Automation