Testing COTS Systems? Make Evaluation Count

Over the years, I have been involved in a number of projects testing COTS (Commercial-Off-The-Shelf) systems across a range of industries. Sometimes the project was with the vendor and sometimes with the customer. When it came to supporting a company undertaking a COTS system implementation, I always appreciated the benefits that came with a “quality” evaluation.

When such an evaluation is conducted in a thoughtful manner, a lot of ramp-up, preparation, AND testing can be shifted to the left (Ref: New Project? Shift-Left to Start Right!), making the overall selection process that much more likely to find the “best-fit” COTS system.

Implementing COTS Systems Is Costly; Mitigate Your Risks

COTS systems are a common consideration for most enterprise organizations when planning their IT strategy around ERP, CMS, CRM, HRIS, BI, etc. Rarely will an organization build such a substantial software system from scratch if there is a viable alternative.

However, unlike software products that we can just install and start using right out-of-the-box, these COTS systems must typically undergo configuration, customization and/or extension before they will meet the full business needs of the end-user. This can get expensive.

As such, implementation necessarily requires a strong business case to justify the level of investment involved. Anything that impairs the selection and implementation of the best-fit COTS system will put that business case at risk.

Earlier involvement of testing can be key to mitigating risk to the business case with respect to the following challenges.

A COTS System is a Very Dark “Black Box”

Having to treat an application as complex as the typical COTS system like a black box is a significant challenge.

When we conduct black box testing for a system that we have built in-house, we have requirements, insights into the architecture and design, and access to the developers’ knowledge of their code. We can get input on which areas are risky, and where there is tighter coupling or greater business logic complexity. We can even ask for testability improvements.

When we are testing COTS systems, we don’t have any of that. The only requirements are the user manuals, the insights come from tidbits gleaned from the vendor and their trainers, and we don’t have access to the developers or even experienced users. It is a much darker black box that conceals significant risk.

Fig 1: Testing COTS Systems – A Black Box in the Application Ecosystem

Additionally, not all the testing can be done by manually poking around in the GUI. Testing COTS systems involves a great deal of testing of how the COTS system communicates with other systems and data sources via its interfaces.

Also, consider the data required. As Virginia Reynolds comments in Managing COTS Test Efforts, In Three Parts, when testing COTS systems “it’s all-data, all the time.” In addition to using data as part of functional and non-functional testing, specific testing of data migration, flow, integrity, and security is critical.

Leaving the majority of testing such a system until late in the implementation process and, possibly, primarily as part of user acceptance by business users, will be very risky to the organization.

Claims Should Be Verified

When we create a piece of software in-house or even if we contract another party to write it for us, we control the code. We can change it, update it, and contract a different 3rd party to extend it if and when we feel like it. With COTS systems, the vendor owns the code and they are always actively working on it. They are continually upgrading and enhancing the software.

As we know from our own testing efforts, there isn’t time to test everything, or to fix everything. That means, the vendor will have made choices and trade-offs with respect to the features and the quality of the system they are selling to us, and all their customers.

Of course, it is reasonable to expect that the vendor will test their core functionality, or the “vanilla” configuration of their system. They would not remain in business long if they did not. But to depend on the assumption that what the vendor considers to be “quality” is the same as what we consider to be “quality” is asking for trouble.

“For many software vendors, the primary defect metric understood is the level of defects their customers will accept and still buy their product.” – Randall Rice, Testing COTS-Based Applications

Even if we trust the vendor and their claims, remember they are not testing in our specific context, eg: meeting our functional and quality requirements when the COTS system is configured to our specific business processes and integrated with our application ecosystem. (Ref: To Test or Not to Test?)

Vanilla is Not the Flavour of Your Business

The vendor of the COTS system is not making their product for us, at least not just for us. They are making their system for the market/industry that our business is a part of.

As each customer has their own specific way of doing business, it is very unlikely that we would take a COTS system and implement it straight out-of-the-box in its “vanilla” configuration. And though we may be “in the industry” that the COTS system is intended to provide a solution for, there will always need to be some tweaking and some gluing.

The COTS system will need to be configured, customized and/or extended before it is ready to be used by the business. And, because of the lack of insight and experience with the system, the impact of any such changes will not be well understood – a risk to implementation.

COTS Systems Must “Play Nice”

Testing COTS systems comes in two major pieces: testing the configured COTS system itself, and testing the COTS system together with its upstream and downstream applications.

Many of the business’ work processes will span multiple applications and we need to look for overall system level incompatibilities and competing demands on system resources. Issues related to reliability, performance, and security can often go unnoticed until the overall system is integrated together.

And when there is an issue, it can be very difficult to isolate the source of the error if the problem results from the interaction of two or more applications. The difficulty in isolating any issues is further complicated when the applications involved are COTS systems (black boxes) from different vendors.

“Finding the actual source of the failure – or even re-creating the failure – can be quite complex and time-consuming, especially when the COTS system involves products from multiple vendors.” – Richard Bechtold, Efficient and Effective Testing of Multiple COTS-Intensive Systems

We need to have familiarity with the base COTS system in order to be able to isolate these sorts of integration issues more effectively, and especially to be able to confidently identify where the responsibility lies.

Testing COTS Systems during Evaluation

If there has been an honest effort to “do it right”, then a formal selection process will take place prior to implementation, one that goes beyond reading the different vendors’ websites and sales brochures. And in this case, testing can be involved earlier in the process.

Consider the three big blocks of a COTS deployment: Selection, Implementation, and Maintenance. The implementation phase is traditionally where all the action is, especially from the testing point of view.

But, we don’t want to be struggling in implementation with issues related to the challenges described above. We need to explore the COTS system’s functionality and its limits in the aspects of quality that are important to us before that point. Why find out about usability, performance, security model, and data model issues after selection? After all, moving release dates is usually quite costly.

“The quality of the software that is delivered for a COTS product depends on the supplier’s view of quality. For many vendors, the competition for rushing a new version to market is more important than delivering a high level of software reliability, usability, and other qualities.” – Judith A. Clapp, Audrey E. Taub, A Management Guide to Software Maintenance in COTS-Based Systems

If we get testing started early, we can be ramping up on this large, complex software system, reviewing requirements, documenting our important test cases, finding bugs and other issues, determining test environment and data needs, and identifying upstream and downstream application dependencies all before the big decision is made – thereby informing that decision while responsibly preparing for the inevitable implementation.

To realize these and other benefits, we can leverage testing and shift efforts to the left, away from the final deadline. We make testing an integral part of decision-making during evaluation.

Fig 2: Testing COTS Systems – Major Deployment Stages

We want to choose the right solution the first time with no big surprises after making that choice. This early involvement of testing, done efficiently, can help our implementation go that much more smoothly.

Multiple Streams of Evaluation Testing

When designing a new software system, there are many considerations around what it needs to do and what are the important quality characteristics. This is no different with a COTS system, except that it is already built. That functionality and those quality characteristics are already embedded in the system.

It would be great if there was a system that perfectly fit our needs right out-of-the-box, functionally and quality-wise. But that won’t be the case. The software was not built for us. There will be things about it that fit and don’t fit, things that we like and don’t like, and things that will be missing. This applies to our fit with the vendor as well.

Our evaluation must take the list of candidates that passed the non-technical screening and rapidly get to the point where we can say: “Yes, this is the best choice for us. This is the one we want to put effort into making work.”

In order to do that, we will need to:

  • Confirm vendor claims in terms of functionality, interfaces for up/down stream applications and DW/BI systems, configurability, compatibility, reporting, etc
  • Confirm suitability of the data model, the security model, and data security
  • Confirm compatibility with the overall system environment and dependent applications
  • Investigate the limits of quality in terms of the quality characteristics that are key to our business and users (eg: reliability, usability, performance, etc.)
  • Uncover bugs, undocumented features, and other issues in areas of the system that are business critical, popular/frequently used, and/or have complex/involved business processes

The evaluation will also need to include more than just the COTS system. The vendor should be evaluated on such things as organizational maturity, financial stability, customer service/support, quality of training/documentation, etc.

To do all of this efficiently, we can organize our evaluation testing into four streams of activity that we can execute in parallel, giving us a COTS selection process that can be illustrated at the high-level as follows:

Fig 3: Testing COTS Systems – Evaluation Testing in Parallel

As adapted from Timing the Testing of COTS Software Products, the streams of evaluation testing would focus on the following:

  • Functional Testing: the COTS systems are tested in isolation to learn and confirm the functional capabilities being provided by each candidate
  • Interoperability Testing: the COTS systems are tested to determine which candidate will best be able to co-exist in the overall application ecosystem
  • Non-Functional Testing: the COTS systems are tested to provide a quantitative assessment of the degree to which each candidate meets our requirements around the aspects of quality that are important to us
  • Management Evaluation: the COTS systems are evaluated on their less tangible aspects including such things as training, costs, vendor capability, etc.

Caveat: We don’t want to test each system to the same extent. We want to eliminate candidate COTS systems as rapidly as possible.

Rapidly Narrowing the Field

In order to eliminate candidate COTS systems as rapidly and efficiently as possible, we need a progressive filtering approach to applying the selection criteria. This approach will also ensure that the effort put into evaluating the candidate COTS systems is minimized overall.

Additionally, the requirements gathering and detailing can be conducted in a just-in-time (JIT) manner over the course of the entire selection phase rather than as a big bang effort at the beginning of implementation.

As an example, we could organize this progressive filtering approach into three phases or levels:

Fig 4: Testing COTS Systems – Progressively Filtering Candidates

Testing would scale up over the course of the three phases of evaluation, increasing in coverage, complexity, and formality as the number of systems being evaluated reduces.
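
To make the mechanics of progressive filtering concrete, here is a minimal sketch of how the idea might be expressed in code. The criteria, scores, and thresholds below are invented for illustration; in practice they would come from the organization’s own selection criteria and the evaluation streams described above:

```kotlin
// Illustrative sketch only: criteria, scores, and thresholds are invented to show the
// shape of a progressive filter, not an actual selection model.
data class Candidate(val name: String, val scores: Map<String, Int>) // criterion -> score 0..5

// Each phase looks at more criteria and demands a higher minimum, so weak candidates
// drop out early with minimal testing effort spent on them.
data class Phase(val criteria: List<String>, val minimumAverage: Double)

fun progressiveFilter(candidates: List<Candidate>, phases: List<Phase>): List<Candidate> {
    var remaining = candidates
    for (phase in phases) {
        remaining = remaining.filter { candidate ->
            phase.criteria.map { candidate.scores[it] ?: 0 }.average() >= phase.minimumAverage
        }
    }
    return remaining
}

fun main() {
    val phases = listOf(
        Phase(listOf("core functionality"), minimumAverage = 3.0),
        Phase(listOf("core functionality", "interoperability", "data model"), minimumAverage = 3.5),
        Phase(listOf("core functionality", "interoperability", "data model",
                     "performance", "security", "vendor capability"), minimumAverage = 4.0)
    )
    // Scores would come from the functional, interoperability, non-functional,
    // and management evaluation streams described earlier.
    val candidates = listOf(
        Candidate("System A", mapOf("core functionality" to 4, "interoperability" to 4,
            "data model" to 5, "performance" to 4, "security" to 4, "vendor capability" to 5)),
        Candidate("System B", mapOf("core functionality" to 2, "interoperability" to 3,
            "data model" to 3, "performance" to 3, "security" to 2, "vendor capability" to 3))
    )
    println(progressiveFilter(candidates, phases).map { it.name }) // [System A]
}
```

The point of the sketch is simply that each phase applies more criteria and a higher bar, so the effort of deeper testing is only spent on the candidates that survive the earlier, cheaper checks.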

The best-fit COTS system will be more confidently identified, and a number of important benefits generated, in the course of this process.

Testing with Benefits

With our efficient approach to involving testing during evaluation, we will not only be able to rapidly select the best option for the specific context of our company, but we will also be able to leverage the following additional benefits from our investment, as we move forward into implementation:

  • Requirements Captured: Requirements have been captured from the business and architecture, reviewed, and tested against
  • Stronger Fit-Gap Analysis: Missing functionality has been identified for inputting to implementation planning
  • Test Team Trained: The test team is trained up on the chosen COTS system and has practical experience testing it
  • Quality Baseline Established: Base aspects of the COTS system have already been tested, establishing a quality baseline
  • Development Prototypes Tested: Prototypes of “glue” code to interact with the interfaces and/or simulate other applications, and ETL scripts for data migration, have been developed and tested
  • Test Artifacts Created: Reusable test artifacts, including test data, automated test drivers, and automated data loaders are retained for implementation testing
  • Test Infrastructure Identified: Needs around tools, infrastructure and data for testing have been enumerated for inputting to implementation planning
  • Bugs Raised Early: Bugs, undocumented features, and other issues related to the COTS system have been found and raised with the vendor prior to signing on the dotted line

Conclusion

In addition to uncovering issues early, involving testing during evaluation will establish a baseline of expected functional capability and overall quality before any customization and integration. This will be of great help when trying to isolate issues that come up in implementation.

“Vendors are much more likely to address customer concerns with missing or incomplete functionality as well as bugs in the software before they sign on the dotted line.” – Arlene Minkiewicz, 6 Steps to a Successful COTS Implementation

Most important of all, after this testing during evaluation, the implementation project can more reasonably be considered an enhancement of an existing system that we are now already familiar with. Therefore, we can more confidently focus our testing during implementation on where changes are made when configuring, customizing, extending, and integrating the COTS system, mitigating the risks associated specifically with those changes, while having confidence that the larger system has already been evaluated from a quality point of view.

With fewer surprises and problems during implementation, we should end up having to do less testing overall.

“The success of the entire development depends on an accurate understanding of the capabilities and limitations of the individual COTS. This dependency can be quantified by implementing a test suite that uncovers interoperability problems, as well as highlighting individual characteristics. These tests represent a formal evaluation of the COTS capabilities and, when considered within the overall system context can represent a major portion of subsystem testing.” – John C. Dean, Timing the Testing of COTS Software Products

With an approach such as this, we should be able to narrow down candidate COTS system options faster, achieve a closer match to our needs, know earlier about fit-gaps and risks, capture our requirements in a more timely and complete manner, and spread out the demands on testing resources and environments – all of which should help us achieve a faster deployment and a more successful project.

Choose your COTS system wisely and you’ll save time and money… Make your evaluation count.


Stop Testing – Start Thinking

Throughout my career I have observed numerous organizations all looking for the ‘silver bullet’ to solve all their product quality problems.

News Flash: There is no ‘silver bullet’.  Solving product quality problems can only begin when organizations “Stop Testing and Start Thinking”.


Do not get me wrong: testing is an essential part of all product development projects. However, teams that fail to think through their testing needs are destined to fail, delivering ‘buggy’ products that do not meet the needs of the consumer and ultimately have an adverse impact on the organization’s revenue potential.

Teams must know who will do the testing, what testing is required, when to test, where to test (environment) and how to test.

So what is the answer?  Is the solution to blindly mimic what has worked for another organization?

Generally speaking, the answer is not that simple.  In reality, a solution that works for one organization should not be adopted without first understanding more about the people, process and tools ‘recipe’ that was used and how it helped address the organization’s specific product quality problems.

The following areas are where common mistakes are made by many organizations.

Process

Uncertain about which testing methodology to adopt, organizations latch onto the hottest trend without understanding what problems need to be addressed or how the choices they’ve made contribute to solving those problems. Perhaps the only thing worse than this is when the team is not aligned on how to address the product verification & validation challenges.

Examples of some common mistakes:

  1. No understanding of how to do testing for Agile projects
  2. Believing TDD (Test Driven Development) solves all testing needs
  3. Unaware of the various types of system testing requirements

Anarchy rules in the absence of a process that is understood and in use by the entire organization.

Tools

Selecting tools before understanding the needs of the team, how those tools will improve the team’s effectiveness, or how well they map to the organization’s testing processes. Tools that do not integrate well with others will adversely impact the team’s ability to quickly assess and address quality problems.

Examples of some common mistakes:

  1. Ineffective tools selection / deployment process contributing to increased costs, project delays and no real return on investment
  2.  Selecting the wrong technology for test automation and / or automating tests too early

The best tools are not always the most expensive tools, but those that satisfy the needs of the cross-functional team.

People

Failing to enable skilled teams by providing them with the process and tools required for them to be effective. In addition, failing to invest in the skills development and training of team members on an ongoing basis. Ongoing training is important to motivate and retain resources and to optimize the effectiveness of the team.

Examples of some common mistakes:

  1. Expecting resources to be highly efficient despite being asked to use tools inappropriate for the job and to follow an ineffective process
  2. No time allocated for professional development, resulting in team members’ skills becoming outdated and resource retention issues

Rust, rot and erosion will develop where care and maintenance are ignored.

The bottom line is that teams need to “Start Thinking” before attacking any product quality problem. Time spent deploying effective solutions to enable your team will significantly improve the success of the organization and reduce the need to “Stop Testing” in the future.


Uncovering High Value Defects

Methods of uncovering defects have, for the most part, stayed the same even with great advancements in process and development tools. One thing that has not stayed the same is the amount of time we have to uncover these defects. With this time constraint, how can we uncover the high value defects that could be costly to our organizations? What shift in test technique do we need in order to tackle this time constraint without failing fast in a horrible way?

A Quality Foundation

In order to detect high value defects, we cannot have software that is full of low value, trivial defects. When we do not have a quality foundation, or a reasonable level of quality before testing begins, the following occurs:

  • Testers stop testing to log or inform a developer of a trivial defect they have uncovered. (Testers need to be testing to uncover high value defects.)
  • Developers stop developing in order to learn about trivial defects.
  • If a decision is made to fix a trivial defect, the developer is most often out of the context of that work. It will take them more time to re-learn or regain context in order to apply a fix.
  • Trivial fixes can cause more defects.
  • If you have a quality process in place, there is a cost associated with every trivial fix: Continuous Integration builds and test jobs, along with developer code reviews, all take time.
  • Finally, and most importantly, because you are spending so much time uncovering and fixing trivial issues, you can never reach the deeper, high value defects.

Building a Quality Foundation

In order to avoid the negative points outlined above, we must ensure a baseline of quality is always maintained. Without this, we will be lost uncovering, triaging, and fixing low value defects, unable to expose the defects that are the most costly. We can build a quality foundation using the following techniques:

Automation Tools (Checks)

  • Automation is a great way to maintain a consistent level of quality throughout the development cycle. Build on this foundation as your developers develop, adding more coverage with each new feature.

Manual Test Review

  • Code reviews are standard practice on most development teams. Taking this concept a step further, why not provide a test review? This can be a small manual check of a feature before the code is checked in for further in-depth testing. Note: not all development changes require this manual check, but if you find you are having a lot of trivial findings, you may want to try this on your team.

It’s worth highlighting that automation tools are well suited for creating a quality foundation; however, many of the high value defects we wish to flush out will not, in my experience, be uncovered by automation alone. This is because automation tools check/verify software; they do not test software. Testing software requires a human to think; it is not simply checking that the correct screen appears after tapping a button.

Use automation tools for what they do best: continuously ensuring a baseline quality foundation at high speed. Don’t expect automation tools to think, and therefore don’t expect them to find high value defects.

Gain Context

Now that our quality foundation is set, what knowledge do we need in order to maximize our ability to uncover high value defects? In order to make our testing more valuable we need to gain context about the software we are going to test. The following activities can help you gain context:

  • Understand the Feature – This seems trivial, but have an understanding of why a feature is being added to your software. Also understand what type of user will use this feature. This can help you understand how the feature should be properly exposed in your software. High value defects are not always crashes; a poorly implemented feature is also a high value defect/problem. These findings also expose opportunities to make features work in simpler/better ways. It’s worth noting that understanding a feature should start as early as possible, ideally when user stories are being created.
  • Development Tours – When a developer finishes implementing a feature or bug fix, the tester can pair up with them to get a tour of the change. These tours can help testers gain key insights into how a feature was implemented, what problem areas exist, and what other areas of the code needed changing to implement the feature.
  • User Feedback – No matter how well you think you have implemented and tested features, you won’t get it 100% right. If you have access to user feedback, you should make it a habit to check this feedback every day. Gaining a deeper understanding of pain points in your software from a user’s perspective can help you when testing future features.
  • Production Logs – Similar to user feedback, reviewing crash logs from production can help you understand what areas of your software are buggy. When testing, you might take more time in these error prone areas. The entire development team should know about these areas as well; as a tester, you should share this information.
  • Competitive Analysis – Understand your competitors’ strengths and faults. Don’t repeat mistakes they have already made when implementing features.

Pre-test Plan

OK, in no way am I suggesting you drop everything and create a large test plan. My experience tells me that practice is, in most ways, a waste of time. What I am suggesting is spending 5 minutes figuring out the following:

  • What states can the software be in when interacting with this new feature
  • What inputs can be used to exercise this new feature
  • How usable/accessible is this feature in our software

Think about the testing you will perform. Diving into testing without first thinking it through can be a bit of a blind strategy. An experienced tester will still find defects without this preparation, but I find it helps frame my testing.

Testing

Your quality foundation is set, you have gained context around what you will be testing and you have a rough idea how you will approach your testing. You are now ready to test and are in a position to flush out high value defects.

A lot of what is written in this article is already done by great testers in our industry. I wrote it in an attempt to understand what I do in order to find defects. I believe the exercise of understanding what makes you a great tester is a worthwhile one. So, when you have time, go through this same exercise and you may just uncover some great ideas around testing. Please share those ideas.

Now go uncover high value defects!


Maximizing the Value of Test Automation

High quality software delivered to market quickly has always been the goal of Agile teams. A common practice teams use to achieve this goal is test automation. However, simply implementing test automation doesn’t always get you there. Over the past year, the Android development team at Move Inc. has refined its test automation to deliver a high quality realtor.com app at high speed. Through this process we identified four key areas we needed to address: reliability, ownership (Who owns the test automation?), priority (How is test automation work prioritized?) and execution point (At what point in your process are tests being run?). By addressing these four areas we were able to unlock and maximize the full value of our test automation.

Reliability


Reliability in test automation is important in order to accurately and consistently measure the quality of software. If a test passes the first time it’s run but fails the second time, even though the app being tested has not changed, how do we interpret those results? Many factors can get in the way of reliability, including synchronization issues, the availability of reliable test fixtures (data), and even some overlap into the ownership arena.

Synchronization issues occur when the speed at which your software runs is not always consistent. As a result, when a test attempts to perform a UI action, such as a button tap, the app may not have finished rendering yet. If your tests rely on live data sets, this can also create problems with reliability, as the data might not always be easy to retrieve from a large backend system. Finally, while not directly tied to reliability, ownership does factor into the maintenance and upkeep of tests.

Our team has worked to address various reliability issues. First, we switched our test framework from Calabash to Espresso because Espresso has built-in handling for synchronization issues: tests only continue when the app is in a state in which they can successfully proceed. We found handling synchronization issues using Calabash possible, but it ultimately increased test time because we had to force longer waits into the tests. Without these long waits, we could not guarantee the tests would not fail unexpectedly. The result was a run time in excess of two hours for approximately 110 tests.

Espresso, out of the box, will pause your test execution if the UI thread is busy and then proceed immediately when the app is ready. Espresso also allows you to launch directly into a specific screen (Activity) under test. This results in significantly reduced test time, as not all tests need to traverse multiple screens before performing a test. The same tests which took two or more hours to run now execute in around 20 minutes. Espresso has enabled us to spend more time implementing new tests and less time dealing with synchronization issues. We also moved our test automation project directly into the app project. This allows our tests to directly reference resources in the app; tests no longer break when a developer refactors a UI resource, because both the app and the test are updated together. It should be noted that the Espresso framework can only be used for testing native Android apps.
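
As a rough illustration of what such a test looks like, here is a minimal Espresso-style sketch in Kotlin. The activity name and view IDs are placeholders invented for this example, not identifiers from the realtor.com app:

```kotlin
// A minimal, illustrative Espresso test. ListingDetailActivity and the R.id values
// are placeholders for this sketch only.
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class ListingDetailTest {

    // Launch directly into the screen under test rather than traversing from the home screen.
    @get:Rule
    val activityRule = ActivityScenarioRule(ListingDetailActivity::class.java)

    @Test
    fun tappingSaveShowsConfirmation() {
        // Espresso waits for the UI thread to be idle before each action and assertion,
        // so no explicit sleeps are needed to handle synchronization.
        onView(withId(R.id.save_button)).perform(click())
        onView(withId(R.id.saved_confirmation)).check(matches(isDisplayed()))
    }
}
```

The key point is that there are no sleeps or polling loops: Espresso synchronizes actions and assertions with the UI thread, and the rule drops the test straight onto the screen being exercised.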

Finally, the last way we combat intermittent failures is by measuring when tests are reliable. We no longer add tests directly into our primary test suite before they prove themselves; our team created a Test Warden service that is responsible for tracking the health of all our tests. We got this idea after seeing the 2014 Google test conference presentation by Roy Williams, “Never Send a Human to do a Machine’s Job – How Facebook uses bots to manage tests”. Each time a test is executed, it reports whether it passed or failed. Only after passing 50 consecutive times do we trust it enough to accurately measure the quality of the software under test, at which point it is moved into the primary test suite. Consider it a probationary period for new tests.
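
The probation rule itself is simple enough to sketch. The following Kotlin snippet is an illustration only; the real Test Warden is an internal service, so the class name and API here are assumptions:

```kotlin
// Illustrative sketch of the "probation" rule described above; names and API are assumptions.
class TestWarden(private val requiredConsecutivePasses: Int = 50) {

    // Tracks the current streak of consecutive passes per test.
    private val consecutivePasses = mutableMapOf<String, Int>()

    /** Records a result and returns true once the test has earned promotion to the primary suite. */
    fun report(testName: String, passed: Boolean): Boolean {
        val streak = if (passed) (consecutivePasses[testName] ?: 0) + 1 else 0
        consecutivePasses[testName] = streak
        return streak >= requiredConsecutivePasses
    }
}

fun main() {
    val warden = TestWarden()
    repeat(49) { warden.report("searchFiltersListings", passed = true) }
    // The 50th consecutive pass ends the probationary period.
    println(warden.report("searchFiltersListings", passed = true)) // prints: true
    // A single failure resets the streak, so a flaky test never graduates.
    println(warden.report("searchFiltersListings", passed = false)) // prints: false
}
```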

The second area our team needed to address was accessing test data quickly. At Move Inc. we have access to tons of test data in the form of homes (listings). We prefer to use real data because it flushes out potential issues in our app and in the underlying API layers; the problem with using real test data is how it is accessed. Initially, we used SQL queries, but these queries took a very long time to retrieve the data and sometimes no test data was found. In order to fix this issue, the team created a dedicated test data service called Graffiti. Passing it tags (keywords), such as “for_sale + has_photos”, returns a test listing which is both for sale and has photos. This service is lightning fast at retrieving test data and helped immensely with increasing test speed.
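
To give a feel for how such a service might be consumed, here is a hypothetical sketch of a tag-based lookup. The endpoint, tag syntax, and response handling are assumptions made for illustration; the real Graffiti service is internal to Move Inc.:

```kotlin
// Hypothetical client for a tag-based test data service; URL and parameters are invented.
import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.charset.StandardCharsets

// Look up a test listing by tags. A matching record comes back in milliseconds,
// replacing slow ad hoc SQL queries against the listings database.
fun fetchTestListing(tags: String, baseUrl: String = "http://graffiti.example.internal/listings"): String {
    val query = URLEncoder.encode(tags, StandardCharsets.UTF_8)
    val request = HttpRequest.newBuilder(URI.create("$baseUrl?tags=$query")).GET().build()
    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    return response.body() // JSON for one listing matching all of the requested tags
}

fun main() {
    // Ask for a listing that is both for sale and has photos, as in the example above.
    println(fetchTestListing("for_sale + has_photos"))
}
```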

Ownership


Ownership of test automation is also very important. Who will be responsible for implementing and maintaining the tests, and for reporting the issues automation finds? Initially the QA team, including myself, owned this process end to end: creating, running, and maintaining tests, and reporting test results. A number of issues arose when QA was the primary owner of automation. The first problem with this approach was knowledge sharing. The developers were not involved at all with test automation and thus had no idea what had coverage or how the tests worked. This made it extremely challenging for them to fix broken tests or interpret results. Another problem was reporting and visibility. QA would be the only ones to bring failed tests to the developers’ attention, creating an unnecessary bottleneck in the flow of information.

Ownership is now shared between the QA and developers on our team, and both groups benefit from this new partnership. QA gets access to developers to improve the way our automation framework is coded; after all, test automation is essentially a development effort. Developers now get insight into how QA tests specific features in our app, and both groups have a better overall view of what the tests cover and how they work. Any new feature in our app, by definition, needs automation around it to be considered done, and developers are now jointly responsible for this effort. Automation for already existing features is QA’s responsibility to implement. QA is also responsible for reporting and the general health of the suite. Maintenance of existing, known good tests is now the responsibility of the developer who broke the test or experienced a test problem. This makes sense, as our tests are now reliable and any failures directly identify problems a developer has introduced.

Priority


Next up, we have the priority of the automation effort. Is your team’s automation a top priority, or does it run on the side? We can’t expect automation to bring full value without prioritizing this effort. Automation on our team originally ran on the side. New automation work would often be de-prioritized in favour of new features in our app. The QA group would try their best to maintain existing tests as well as create new ones, keeping in mind there was still manual test work to be done. It seems strange that we had automation, but because it wasn’t properly prioritized it didn’t bring the full value it could. Our test automation is now high priority, and any new feature must have automation around it to be considered done. We leverage our top developers when we need to tackle difficult test framework issues, and our test code now lives inside our app project. Finally, we learned to run this effort like a full blown product: we have a separate, dedicated backlog which prioritizes automation work. If your team is not prioritizing your automation effort, then I wonder how much value you are getting out of this process.

Execution Point


Finally, we have come to what I believe was the most impactful change we made to our automated process: the execution point. Originally, we triggered our tests after a merge occurred. If you think about this execution point, it really is not the most valuable point at which to run automation. If code is merged into a branch before the quality of that code is verified, then you are not allowing your existing test automation to bring full value. We test throughout the sprint, and automation is leveraged as a gating factor in our development process. The developers on our team create a GitHub pull request which contains a small feature in isolation. As soon as a pull request enters our system, our automated build and test jobs are executed. If the smoke tests fail, the developer is not able to merge their work into the base branch. While this is logical, at first we found ourselves not following this process; it was not picked up until it was enforced.

It is important to highlight all the great things that occurred after we changed our execution point:

  • Developers had to fix broken tests to get their code merged.
  • Potentially bad code was not allowed to enter the base branch.
  • Increased communication between team members.
  • Found bugs early. Tests can’t improve quality when they are executed downstream! It’s too late!

If you have test automation in place and want to unleash its full potential, try revisiting some of the key areas we did. With these changes in place, our team is now able to focus on adding more tests and ultimately increasing quality and speed to market.


Testing as a Service? Testing is a Service

We, as testers, do not “build in quality” and cannot guarantee that all defects will be found.  We cannot, and should not, define “what is quality?”  And we should not be making the choice to release or not.

Not on our own.

Testing is always part of a larger team, striving for the collective goal of success.

Fig 1: Testing is a Service

Within that team, testing provides a service as a trusted advisor and integrated investigator regarding the desired and current state of the project deliverables.  Testing checks implemented vs. intended behaviours, conducts experiments to explore quality limits vs. agreed quality criteria, collects the resulting data, and provides valuable insights for the benefit of the project’s stakeholders – insights that will guide the critical decisions that determine project success.

Consider the following as testing’s mission statement:

“The mission of the project’s test team is to undertake such test-related activities within the constraints provided to them by the organization to facilitate maximum mitigation of the likelihood of a failure of the software system from occurring after deployment that would negatively impact the business.”

Or more simply:

“Through our thoughtful testing, we collaborate with the project team to reduce the likelihood of a serious failure in the field.”

In any case, we can expect that testing will be limited in terms of budget, schedule, people, tools, infrastructure, etc.  It is therefore vital that testing is able to prioritize what is critical, and what is not, so that these scarce resources can be applied in the most valuable, risk-driven, manner possible.

Within these constraints and following those priorities, the testing group will undertake testing activities with the objective of facilitating the removal of as many issues as possible that may compromise the organization’s agreed quality criteria and prevent acceptance of the system for release.

Additionally, the testing effort must be structured such that relevant information and insights pertaining to the degree to which the current state of the system is converging to the desired state can be communicated to project management and other stakeholders throughout each phase of the project, to enable informed decision-making.

Testing is a service provided within the project for the benefit of its stakeholders.  Testing produces data, data is interpreted into information, information gives insight, and insight guides decisions.  How thoughtfully the testing service is performed and the quality of its insights will be key in determining the success of the project.

 



New Project? Shift-Left to Start Right!

Project management is continually challenged with delivering the “right” solution, on-time and on-budget, with high quality.

In the face of typical project constraints and changing priorities, quality is sometimes sacrificed towards the end of the project for the sake of releasing on-time or to give the impression of conserving schedule and/or costs.

But more frequently, and often unintentionally, quality is sacrificed at the beginning of the project by not planning how to integrate testing throughout. This leads to the project realizing downstream costs that are far greater than necessary.

How can we avoid this?

“Shift-Left Testing”

The best time to catch bugs in the product is before the code is even written. This leads to a staple recommendation when consulting with companies on how to improve their quality practices: involve testing earlier in your development processes.

“By combining development and quality assurance earlier and more deeply in your project plan, you can expand your testing program and reduce manpower and equipment needs.” – Larry Smith, Shift-Left Testing, 2001.

Successful project managers know you cannot leave testing as an afterthought when planning, or simply allocate a couple weeks for testing at the end of the project. That is a sure way to get an unhappy customer.

“Shift-Left” is an action-oriented watchword used in the software industry to remind us to involve testing in our upstream activities.  Some typical examples include:

  • Testing the requirements for “what-if’s”
  • Testing the design for usability
  • Designing and automating test cases in advance of the code
  • Embedding automated testing into the continuous integration process
  • Scaling up security and performance testing as integration progresses

As each project will likely pass through a set of common phases multiple times, we have the repeated opportunity to remove bugs early and more cheaply through these sorts of Verification and Validation (V&V) activities.

“Development, test and operations work together to plan, manage and execute automated and continuous testing to accelerate feedback to developers” – Paul Bahrs, Shift Left – Approach and Practices with IBM, 2014.

By requiring V&V participation from testing and downstream stakeholders during the current focus of activity, questions and issues can be raised and resolved quickly and relatively cheaply compared to if they were caught later in the project, or after release.

Shift-Left Again!

Requirements testing and other formal/informal static testing activities are great for eliminating errors and ambiguities, thereby reducing related problems later in the project.  And, starting dynamic testing earlier reduces the traditional pressures at the end of the project. But, testing can go even further to aid project managers in running a successful project by getting actively involved in the actual project planning activities.

Fig 1: Common Test Activities for Each Release

Testing can aid project management in addressing complexity and dependency risks, scheduling resources so as to minimize the critical path, identifying and capitalizing on “dead time” in the project schedule, and defining testability requirements for under-the-hood functionality.

Furthermore, testing can present options for the test strategy based on quality criteria, risks, and project constraints such that stakeholders can make quality choices while being conscious of the trade-offs or opportunity costs.

3rd Time’s the Charm: If we “Shift-Left” one more time, we will find ourselves at the end of the previous release cycle. By considering what information would be useful or needed for the next release, testing can alter how it undertakes its activities – making its contribution to the next release cycle that much more impactful.

Fig 2: Inputs for Planning the Next Release

And now, we are involving testing throughout each release and the overall software development lifecycle.

Manage Projects –with– Testing

It is rarely easy to complete your project on time, on budget, and with high quality.  You can increase your likelihood of success by managing your project with testing.

“Testing helps managers manage their products…how much is actually there, how well is it working, what course corrections are needed and how much it will cost if they are not made.” – Cem Kaner, Testing is Critical in Development Process, 2013.

For each project, you must find the optimal mix of investment in upfront activities versus potential costs of inefficiency or failures later on. And, starting testing activities early is an effective way to catch small quality problems before they become big, more expensive, quality problems.

Aspire to a partnership relationship with testing as your trusted advisor.

The resulting collaboration will allow you to proactively manage the project, balance business needs and project constraints with quality goals, and leverage a powerful monitoring/reporting mechanism for informed decision making.

Delivering more truly successful projects to your customers.


By Executive Order – Get The Best Test Automation Solution

Test automation is an investment.

Like any investment, you expect a future return.

As stated in “Test Automation – Building Your Business Case” the following are some of the benefits typically expected from investing in software test automation:

  • Discover defects earlier
  • Increase test speed, accuracy, repeatability
  • Increase test availability (rapid and unattended)
  • Extend test capability and coverage
  • Increase test accumulation
  • Increase tester effectiveness
  • Formalize testing and enable measurement

In turn, these benefits lead to returns around improved productivity, reduced costs, improved product quality and, ultimately, increased customer satisfaction.

As a decision-maker, you “get” this Return on Investment (ROI) for test automation.

But there are many competing choices for where to invest your money.  You have many demands on your time, people and budget.  And every department or group has their own ideas for how to help make the company more successful.

You know, responsibly, you can’t simply say “Let’s automate!”

Top Priority!

Although it might seem to be a “no-brainer”, or even a requirement, that you should have some level of test automation as part of your development effort, you know investing in automation requires a significant commitment in the short term and the long term for there to be success.  There is no “silver bullet”.

“Automated testing requires a higher initial investment but can yield a higher ROI” – Joe Fernandes, When to Automate Your Testing (and When Not To), Oracle

Test automation, therefore, should be treated just as seriously as any comparable investment in the company.

How do you make sure your next investment in test automation is making your solution the best it can be?  How do you know it is the best place to invest your scarce resources?

Help your team help you make the decision.

What are Your Options?

Sure, it would be nice if all the tests could run auto-magically, on-demand, by waving a wand.  But every real solution requires effort and infrastructure, typically at the expense of some other possibility.  The development of a sound, thorough business case covering each option is essential for objective comparison.

Requiring a business case helps set expectations about what a proposal for test automation next steps should cover.  And, by making visible what the evaluation criteria are, it can communicate what values the company is prioritizing at this time.

Completing a business case also forces thought to go beyond the technical part of the solution to also encompass the business aspects and cost justification.

“Executives want their direct reports to do a better job at presenting proposals…They simply send in a list of what they want without presenting any real justification or argument for it.” – W. Palmer, Selling Your Proposal to Senior Executives

A key part of the business case is defining the opportunity. This is where the team must communicate that they understand the scope and implications of the opportunity and that they are qualified and capable to propose the solution.

Another important component of the business case is the call to action.  As the decision-maker, you need to know what decision you are being asked to make.

As adapted from “How to Structure Your Business Proposal Presentations”, look for an overview that gets right to the point by:

  • Defining the vision, goals and objectives
  • Describing how the current state is not aligned with the vision, goals and objectives
  • Listing possible solutions with pros and cons for each (including any major / significant risks, challenges and obstacles)
  • Identifying the best solution, highlighting its key benefits over the alternatives
  • Calling for action and / or agreement to the recommended next steps

The overview should be followed by appendices so as to keep the core message clear and concise.  When you are at the point of detailed review, the appendices will be ready to provide:

  • A detailed implementation plan of the recommended option along with resourcing requirements, milestones, approach to minimizing / removing the highlighted risks, challenges and obstacles and any impact on organizational structure
  • Detailed information / background / data on all the options and their calculations
  • Results from any R&D or pilots conducted
  • Relevant case studies and industry benchmarks
  • Applicable studies and references

Caveat – Case Studies and Benchmarks: The more similar the organization studied is to your organization, the more likely you are to experience similar results.

Make the Call

When will your investment start paying dividends?  When will the first benefits be felt?  When will the initiative start paying for itself?

Core to the business case is the ability to compare the relative ROI of the proposed options and identify the best solution.
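
As a simple illustration of that comparison, the following sketch works through the arithmetic for two hypothetical options. The names and dollar figures are invented; a real business case would substitute its own estimates of costs and benefits:

```kotlin
// Illustrative only: option names and figures are invented to show the shape of the comparison.
data class AutomationOption(
    val name: String,
    val initialCost: Double,       // tooling, framework build-out, training
    val annualRunningCost: Double, // maintenance, licences, infrastructure
    val annualBenefit: Double      // effort saved, earlier defect detection, etc.
)

fun summarize(option: AutomationOption, years: Int) {
    val totalCost = option.initialCost + option.annualRunningCost * years
    val totalBenefit = option.annualBenefit * years
    val roi = (totalBenefit - totalCost) / totalCost
    val paybackYears = option.initialCost / (option.annualBenefit - option.annualRunningCost)
    println("${option.name}: ROI over $years years = ${"%.0f".format(roi * 100)}%, " +
            "payback ~ ${"%.1f".format(paybackYears)} years")
}

fun main() {
    val options = listOf(
        AutomationOption("Option A: extend current scripts", 40_000.0, 20_000.0, 60_000.0),
        AutomationOption("Option B: new framework + CI integration", 120_000.0, 30_000.0, 110_000.0)
    )
    options.forEach { summarize(it, years = 3) }
}
```

The same simple arithmetic, laid out per option, is what the summary table in the business case should make easy to compare at a glance.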

For ease of comparison, have the options summarized in one or more tables that compare their key aspects.  For example, the following table provides one such comparison on the money side of things:

Fig: Test Automation Cost-Benefits Summary Table

In addition to meeting your base criteria, expect that the best solution will be a substantial step forward that provides tangible / measurable benefits in a relatively short period of time, while also:

  • Providing the capability to easily grow the scope of automated tests
  • Providing a well-architected framework / foundation for future enhancements and adaptation to changes to the system under test
  • Being able to be self-funding (eg: benefits funding maintenance and incremental continuous improvement)

Note: As part of the evaluation process, consider giving the team a second chance to make their pitch.  Following the initial presentation, the agreed next steps might be to incorporate feedback, create a new blended option, do a pilot or conduct further research into how other teams have approached this sort of project.  After which, the proposal for the larger undertaking can be updated and be all the stronger upon revisiting.

Conclusion

Investing in test automation is an attractive idea and, if implemented successfully, it can deliver on many of its promises.  A well-described business case increases your ability, as the decision-maker, to confidently assess the viability of a given solution against competing alternatives.

Requiring a thorough business case can also help assess the understanding and capability of the organization / team to deliver the expected benefits given the full accounting of the costs involved.

Will the next test automation proposal to come to your desk be the best it can be?

Demand to be convinced that it is.


Distributed Teams – An Interview with Janet Gregory

I recently interviewed Janet Gregory on the topic of distributed teams. Janet is the founder of DragonFire Inc., an agile quality process consultancy and training firm. Janet co-wrote the original book on agile testing, “Agile Testing: A Practical Guide for Testers and Agile Teams”, and the follow-up, “More Agile Testing: Learning Journeys for the Whole Team”, with co-author Lisa Crispin. Janet is a frequent speaker at agile and software testing conferences, and she is a major contributor to the North American agile testing community. We wanted to share that conversation with you.

Time Zones, Frustration and Trust

Christin: When we talk about outsourcing testing, a lot of the time that means we are talking about distributed teams. There are many challenges to address when dealing with a distributed team. A big one, for me personally, is working with the time differences.

I think it causes a lot of strain, and not only related to scheduling meetings. What is your experience dealing with parts of the team being in different time zones?

Janet: One of the main things that testers do is give feedback. The shorter the feedback cycle, the more you can act on that feedback. As soon as you have time zone differences, it can lengthen that feedback cycle. Depending on the time zones, you can ask a question and the answer might not come back for 12 hours. Then, if you didn’t understand, if you didn’t agree or if you have a follow-up question, you can send your response and wait for 12 more hours to see what comes back. It can actually take days to come to a shared understanding about something.

If you have to wait even two days to get that common understanding, what does that actually cost you in wasted time, sending it back and forth, and waiting, and then task switching? That is very hard to quantify and a lot of organizations don’t take it into account.

Christin: It probably causes a fair bit of frustration too. People don’t particularly thrive in frustrating environments.

Janet: Right. They don’t. I’ve worked on both sides with a team onshore and the same team offshore. When you listen to the different perspectives, you see that frustration leads to lack of trust. The people onshore, for example, will say, “Why don’t they understand what I’m trying to say the first time?” The people who are offshore, they go, “They don’t understand what we have to deal with, why can’t they…” And every time they say that, the trust degrades a little bit more – the trust that the team needs to work together as a whole.

Christin: You can end up in a circle where it just keeps getting worse. So what can you do to make it better?

Janet: I just did a presentation on distributed teams. It’s not only testers that have these issues, it’s programmers and everybody. We all need to work to really understand the other team. What is their culture? What are their issues? So many words or expressions we use here in North America don’t translate. They are not comparable or there’s a different understanding of what they mean. We have to watch what we say.

To get a bit more detailed, specific tasks can help clarify things. For example, instead of saying, “This is the requirement, I want you to do this”, we say, “This is the requirement, and here are tests to show what I want.” If you give real examples with an expected output – “this is what I put in, this is what I expect” – it really removes some of the misunderstandings. It becomes much more transparent, much more precise.

Christin: What you’re saying is to use testing and tests to facilitate communication and information sharing to some extent?

Janet: When you think about acceptance testing and development, that’s what they’re doing. We don’t use it enough across organizations or distributed teams.

Cost Savings

Christin: The next thing I am curious about, from your experience, is companies that have tried offshoring. For some, it hasn’t worked as well as hoped, and they now look to take the work back. What is the main reason you think some stick with it and others don’t?

Janet: It’s kind of funny because you would think people would learn, but there are still people who do offshoring because they are just trying to save money. Within 5 years, they bring the work back onshore because they find that it isn’t working.

I have seen teams make offshoring work, but that’s in cases where they cannot find people in their area. They’re doing it for that reason, not for cost savings. When a company offshores because it can’t find people, it will spend the money to make it work. They will fly people in to meet the people at the offshore location. They will fly people from there to meet the people here. There is cross-pollination and they get to know each other better. That seems to help immensely.

They will also spend extra money on things like screen-sharing and collaboration tools. Whereas, when a company offshores just to save money, they don’t seem to do that. And then it fails. It fails because of the sorts of issues we have been talking about.

Tools or Tips

Christin: You mentioned collaboration tools in relation to making it work when you have a distributed team. Do you have any favourite or special tools that help build those remote and distributed relationships?

Janet: I don’t have any specific tools, but a couple of things I’ve seen be very effective. One is using avatars in whatever tool the team uses for stories or requirements. For instance, Janet is working on a task and it shows a picture of me. It helps people realize that there is a real person on the other side and helps develop a closer relationship. When you do meet them face-to-face, you have that image and feel like you already know them. Sometimes those tiny little things make all the difference in the world.

Another thing that I’ve seen – though it really doesn’t work when you’re 12 hours apart – is joint mind mapping. Let’s say that you’re only 4 hours apart and have a few hours of overlap. Then you can do things like a joint mind map, where you can brainstorm with other people all on the same map. There are a number of those tools now. You can see the updates. You can have a conversation going on at the same time. Lisa Crispin and I do this when we’re working on a new presentation or working on a book together. We have mind maps that we can work on together with a chat or voice tool. It feels like real time. It feels like you’re together.

Christin: That can make a huge difference. Sometimes we focus on the big things that are hard to tackle.

Janet: Yes, sometimes it’s just the little things.

Conflict

Christin: What every workplace has, and most teams have experienced at one point, is conflict. I have actually never seen an example of an outspoken or acknowledged conflict for distributed teams. I’m wondering if we’re more reluctant to acknowledge that there is a conflict if we’re not co-located or if maybe we never reach that phase of team functionality where we go through the storming before we all settle down.

Janet: That’s an interesting thought. Let’s think about PQA. Your people in Halifax and your people in Vancouver have a 4-hour time difference. I’m going to guess what happens is that the folks in Halifax would talk about an issue internally and then, if you’re really lucky, one person will be brave enough to phone one of you Vancouver guys and say, “Hey Christin, we have an issue.” But chances are, they just talk about it on their end, decide what to do or what not to do, and never tell anyone else.

What happens if you don’t agree with somebody in Halifax about what they’re doing? You have an issue with him or with what he’s doing. If you’re his boss, if he reports to you, that’s a different relationship because you have to deal with it. But if you’re just peers, chances are you can avoid any conflict, because it’s easy to keep doing what you’re doing and just ignore him. It’s really easy to ignore a peer who is not in your location, so the conflict might never come up. You can’t ignore it so easily when it’s in the same office. So that is something I would probably see happen. It’s a truly interesting question.

Christin: It’s something that bothers me because, rather than being talked about while it is still a small thing, it grows into something huge. I’ve seen examples of that in organizations that have offshored. It’s never said out loud, but it all goes back to a conflict that could have been resolved by talking about it earlier.

Janet: When I first talked about frustrations leading to distrust – I think that, in order to have healthy conflict and healthy conversation, you have to have trust. So they might go hand in hand. I was just at my sister’s, and we didn’t get along for many years. For the past few years, we have been working hard to rebuild the relationship. When I was last there for a visit, we had a little conflict. It wasn’t the first one and it won’t be the last. It would have been easy for me to just come home. Instead, we actually talked about it and came to a resolution. It took away a little bit of trust, but we will work on that because we both recognize it. If we had not had that conversation, if I had come home and let it eat at me, by the next time I visited her it would have had time to really affect me, and I would have treated her differently because of it. It’s the same with teams.

Management Responsibility

Christin: We have that added complexity of cultural differences too. How do we resolve conflicts across cultures? As a manager, or whoever owns this relationship, don’t you have the responsibility to be a little bit concerned if it seems that things are running too smoothly?

Janet: As a manager, that’s where you need to go and visit the different centers to better understand what the nuances are, because you won’t pick that up on the phone. You literally have to go and say, “What are you guys doing?” Just sit there for a few days to observe and watch. Talk to people. It comes back to managers watching, being aware of what’s happening, and then being able to say, “Hey, we have this issue. Let’s talk about it.” Being able to facilitate that and help make it happen. Set up a safe environment – and safe means no blame – where people are able to talk, so that it can actually happen. Unfortunately, there are many managers who don’t have that ability because they’re managers and not leaders.

Christin: In a lot of organizations, the only way to advance your career is to become a manager whether it is a fit for you or not. Not everyone even likes being a manager.

Janet: Me! That’s one of the reasons I became a contractor way back when: I just got really tired of managing; I didn’t like it. Now I can influence teams in a totally different way, but I don’t have to manage them.

Distance Learning

Christin: If you were working with a client that came to you and said, “We really need help to test this project; we’re probably going to look at working with a vendor,” what kind of advice would you give them, and what should they think about when working with a vendor?

Janet: If a company is looking for a vendor, I would want somebody who could work with a little bit of flexibility – not someone who just said, “This is what we have to offer, take it or leave it.” I would also really like to know their hiring practices: how do they make sure that their people are up to speed on it all? What are they doing to make sure their people are trained or kept up to date on the newest methodologies, or whatever it is? If somebody has been on contract for the last 2 years in one place, have they lost their ability to change or to do new things? That’s something I would definitely care about.

Christin: We encourage people to take training, both our clients and ourselves. Whether it’s the latest trends in testing or a hands-on how-to workshop, we know we need interaction with each other. Do you have tips on how to get people interacting with each other as peers, especially when they aren’t all in the same room?

Janet: When you’re distributed, it’s a little bit harder. For example, I’ve done distance coaching, and I was in a situation with several people in one room with me and 3 remote people as well. Interestingly enough, the people in the room, whom I could see, were occasionally having side conversations with each other. The folks who were remote were not on webcam, but every time one of them spoke their picture would come up, so I knew who was talking. There was some discussion going on among them too, but I think the facilitator, rather than the presenter, could get both conversations happening. They could say, “Robert, that was a really interesting comment. What do the rest of you think?” Get them to talk to each other and then step back. It would be a very forced facilitation, but that’s how it has to happen at the start.

Another thing to consider is not framing it as “I’m presenting this so you can learn.” Some organizations I know have book clubs. They can say, “Our goal is to learn more about exploratory testing. Let’s take a look at Elisabeth Hendrickson’s book, and every week we’re going to have a discussion about a different chapter.” Then you can make it more about the conversation between the people. What did they learn? What do they think about this? Instead of “I’m teaching you something.”

Christin: I like that idea! This chat has been very good for me. A very interesting conversation. I’ve got a lot of new ideas.

Janet: Excellent! I have written my own notes down too. That whole thing about conflict has got me thinking.

See Janet’s presentation, “Agile Testing for Large Organizations and Distributed Teams”.


Is Functional Assurance the new Quality Assurance?

I have noticed a shift in how companies are attempting to ensure quality, and that is what led me to write this post. Companies desperately want to achieve quality by simply creating automated tests. Why do they want this? Simple: the speed at which we need to deliver software demands it. A human tester simply cannot test everything that is needed within a short Agile sprint. Test automation can seem like the ultimate tool for solving this problem, but I’m afraid we are only kidding ourselves if we believe truly high-quality software can be achieved with test automation alone.

Imagine your company only employs test Automation Engineers or Developers In Test. Think about how these workers approach quality; their goal is to construct a test in the form of a simple script.

For example, an automated test script could (see the sketch after this list):

  1. Select a setting.
  2. Tap a button.
  3. Assert that the expected screen appears.
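
To make this concrete, here is a minimal sketch of that kind of check in Python. The driver class is a hypothetical stand-in for whatever UI automation framework (Appium, Espresso, XCUITest, etc.) a team actually uses, and the screen and element names are invented for illustration.

    # A fake driver that simulates only the single path this sketch exercises;
    # a real script would talk to Appium, Espresso, XCUITest, or similar.
    class FakeAppDriver:
        def __init__(self):
            self.screen = "home"

        def open(self, screen):
            self.screen = screen

        def tap(self, element_id):
            if self.screen == "settings" and element_id == "enable_dark_mode_toggle":
                self.screen = "dark_mode_confirmation"

        def current_screen(self):
            return self.screen

    def test_enabling_dark_mode_shows_confirmation_screen():
        app = FakeAppDriver()
        app.open("settings")                                     # 1. Select a setting
        app.tap("enable_dark_mode_toggle")                       # 2. Tap a button
        assert app.current_screen() == "dark_mode_confirmation"  # 3. Assert the expected screen appears

That is the entire “intelligence” of the check: one path, one fixed assertion.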

Testing a feature in this way is extremely basic. Oftentimes, this is what is done when scripting automated tests. Why is this the case? Because this is how we must instruct our test software to assess quality; an automation engineer’s primary output is a test script. The problem with this approach is that the minute they start implementing an automated test, they have stopped thinking about quality. When a manual tester examines the same feature noted above, the verifications and logic checks executing in their head are orders of magnitude more complex than this. The manual tester gets a sense of rough performance and usability, as well as performing the simple, dumbed-down check that this automation provides. Why would we want to remove this type of intelligence when trying to build quality software?

Stop trying to oversimplify software quality. Use automation, but understand its strengths and weaknesses. What we want from test automation is to ensure base functionality. Base functionality means our app runs and the primary paths through the software are working. That’s it. This baseline is certainly a very important aspect of software quality. However, it’s not going to get you 5-star reviews in the app store. Ask yourself when you last saw a review like this:

Fig: Brad Thompson’s 5-star app review

At this point, you might be thinking, “Whoa, whoa, whoa – wait a minute, are you saying we don’t care about crashes?” No. What I’m saying is that our users expect this level of baseline quality. Guess what: your competitors most likely have basic functionality nailed already. We must use other strategies if we are to bring truly high quality to our software and beat the competition.

Now, if we combine test automation with skilled QA engineers, we can create brilliant software. If you are a manual tester and feel the pressure of test automation… don’t. Embrace test automation; it will help you bring real value to your team beyond verifying simple button-tap functionality. Also realize that one day test automation will be fully automatic and won’t necessarily need to be instructed to ensure your app simply functions. If you are a manual tester, find more important ways to bring value to your team:

  • Get closer to your users and understand what the pain points are in your app.
  • Work closely with development to ensure the usability of a new feature is flawless.
  • Use your test experience to flush out problem areas.

For example, I start every day reviewing the Google Play store reviews. This practice enables me to make insightful quality decisions on a day-to-day basis. I know that in doing this I’m bringing far more value than a simple test script. Companies should keep implementing test automation, but also realize that a highly skilled quality expert cannot be scripted.

Users expect your app to have baseline quality and not crash when they tap a button to navigate to another screen. They want brilliant features presented at the right time, containing actions that make sense in the current context. They also want the transitions between screens to make sense and let them know where they came from and where they are now. Test automation cannot think and therefore cannot ensure this level of quality in your app. I challenge all manual QA workers out there to drive quality beyond simple validation checks. If you don’t, Quality Assurance will simply become Functional Assurance for your company.


Test Envisioning

Starting a project seems to be one of the most under-rated steps when it comes to describing critical success factors for projects. A project well-launched is like a well-designed set play that comes together on the field during a game. It’s not launch effort but launch effectiveness that is important. In Agile Testing and Quality Strategies: Discipline Over Rhetoric, Scott Ambler specifically describes requirements envisioning and architecture envisioning as key elements of initiating a project (he specifies this for launching an agile project, and I’ve come to believe it’s true for any project with stakeholders).

In a classic standing-on-the-shoulders-of-giants manner – “yes, and” – we could also spend some of that project initiation time envisioning what the testing is going to look like. It is Iteration 0, after all, so there is a large landscape of possible conversations that we could have. And, increasingly, I’ve seen misconceptions about how the project’s testing will be conducted become one of those “wait a second” moments. People just weren’t on the same page, and they needed to be.

It’s a fundamental part of the working agreement that Iteration 0 is intended to create – who will test what, and when? Will the developers use test-first programming? Will they also use higher-level test automation, such as one of the BDD tools? Will they do build verification testing beyond the automated tests? If the solution provider is a remote, outsourced team, what testing will the solution-acquirer organization do? When will they do it? How will they do it? Will they automate some tests too? Is there formal ITIL release management at the acquiring organization that will demand a test-last acceptance test phase? Will the contract require such testing?

You see my point. There are a lot of alternate pathways, a lot of questions, and it’s an important conversation. Even if some of the answers are unknown and require some experimentation, at least everyone should agree on the questions. Context matters.

I come back to the point about test envisioning. The result of such envisioning is a working agreement on how testing might be conducted. That working agreement might well be called a test strategy, albeit a lean one. That’s why I promote it as a poster first, document second (and only if it helps to resolve some communication need). What you put on that poster is driven by what conversations need to take place and what uncertainties exist.

To build on Scott’s description of Iteration 0 then, the work breakdown structure for Envisioning may include the following:

Iteration 0: Envisioning

  • Requirements Envisioning
  • Architecture Envisioning
  • Test Envisioning

and the result might very well be three big visible charts – one for each. Talking about testing upfront lets everyone listen, understand and contribute to the mental model melding that must take place for the team to be effective.
