Maximizing the Value of Test Automation

High-quality software delivered to market quickly has always been the goal of Agile teams. A common way teams pursue this goal is test automation. However, simply implementing test automation doesn’t guarantee reaching it. Over the past year, the Android development team at Move Inc. has refined its test automation to deliver a high-quality realtor.com app at high speed. Through this process we identified four key areas we needed to address: reliability, ownership (who owns the test automation?), priority (how is test automation work prioritized?), and execution point (at what point in your process are tests run?). By addressing these four areas we were able to unlock the full value of our test automation.

Reliability

Reliability in test automation is important for accurately and consistently measuring the quality of software. If a test passes the first time it’s run but fails the second time when the app under test has not changed, how do we interpret these results? Many factors can get in the way of reliability, including synchronization issues, unreliable test fixtures (data), and even some overlap into the ownership arena.

Synchronization issues occur when the speed at which your software runs is not always consistent. As a result, when a test attempts to perform a UI action, such as a button tap, the app may not have finished rendering yet. If your tests rely on live data sets this can also create problems with reliability as this data might not always be easy to retrieve from a large backend system. Finally, while not directly tied to reliability, ownership does factor into maintenance and upkeep of tests.

Our team has worked to address various reliability issues. First, we switched our test framework from Calabash to Espresso because Espresso has built-in handling for synchronization issues: tests only continue when the app is in a state in which they can successfully proceed. We found that handling synchronization issues with Calabash was possible, but it ultimately increased test time by forcing longer waits into tests. Without these long waits, we could not guarantee the tests would not fail unexpectedly. As a result, running approximately 110 tests took in excess of two hours.
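To make the cost of that workaround concrete, here is a minimal Java sketch (illustrative only, not our actual code) of the kind of explicit polling wait a suite needs when the framework does not synchronize with the app on its own; every such call adds fixed delay to the run:

```java
import java.util.function.BooleanSupplier;

// Minimal sketch of an explicit-wait helper: poll a condition until it
// becomes true or a timeout elapses. Frameworks without built-in
// synchronization force tests to sprinkle calls like this everywhere.
public class ExplicitWait {

    // Returns true once the condition holds, false if the timeout
    // elapses first (or the waiting thread is interrupted).
    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMillis,
                                  long pollIntervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out waiting for the app
            }
            try {
                Thread.sleep(pollIntervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

Multiply a timeout like this across every screen transition in 110 tests and the two-hour run time follows quickly; Espresso removes the need for it entirely.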

Out of the box, Espresso will pause your test execution while the UI thread is busy and then proceed immediately when the app is ready. Espresso also allows you to launch directly into the specific screen (Activity) under test. This significantly reduces test time because tests no longer need to traverse multiple screens before making their assertions. The same tests which took two or more hours to run now execute in around 20 minutes. Espresso has enabled us to spend more time implementing new tests and less time dealing with synchronization issues. We also moved our test automation project directly into the app project, which allows our tests to reference resources in the app directly. Tests no longer break when a developer refactors a UI resource because both the app and the test are updated together. It should be noted that the Espresso framework can only be used for testing native Android apps.

Finally, the last way we combat intermittent failures is by measuring when tests are reliable. We no longer add tests directly into our primary test suite before they prove themselves; our team created a Test Warden service that is responsible for tracking the health of all our tests. We got this idea after seeing the 2014 Google Test Automation Conference presentation by Roy Williams, “Never Send a Human to Do a Machine’s Job: How Facebook Uses Bots to Manage Tests”. Each time a test is executed, it reports whether it passed or failed. Only after a test passes 50 consecutive times do we trust it enough to accurately measure the quality of the software under test and move it into the primary test suite. Consider it a probationary period for new tests.
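The promotion rule is simple enough to sketch in plain Java. The class and method names below are illustrative assumptions, not the actual Test Warden code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the rule behind a "Test Warden" style service: a new test
// must pass a number of consecutive runs (50 in our case) before it is
// trusted in the primary suite. A single failure resets the streak.
public class TestWarden {
    private final int requiredConsecutivePasses;
    private final Map<String, Integer> streaks = new HashMap<>();

    public TestWarden(int requiredConsecutivePasses) {
        this.requiredConsecutivePasses = requiredConsecutivePasses;
    }

    // Each test run reports its result to the warden.
    public void report(String testName, boolean passed) {
        streaks.merge(testName, passed ? 1 : 0,
                (oldStreak, inc) -> passed ? oldStreak + 1 : 0);
    }

    // A test graduates from probation once its streak reaches the bar.
    public boolean isPromoted(String testName) {
        return streaks.getOrDefault(testName, 0) >= requiredConsecutivePasses;
    }
}
```

The key design choice is that a failure resets the counter to zero rather than merely decrementing it, so a flaky test can never accumulate its way into the primary suite.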

The second area our team needed to address was accessing test data quickly. At Move Inc. we have access to an enormous amount of test data in the form of homes (listings). We prefer to use real data because it flushes out potential issues in our app and in the underlying API layers; the problem with real test data is how to access it. Initially we used SQL queries, but these queries took a very long time to retrieve the data, and sometimes no test data was found. To fix this, the team created a dedicated test service called Graffiti. Passing it tags (keywords) such as “for_sale + has_photos” returns a test listing which is both for sale and has photos. This service is lightning fast at retrieving test data and helped immensely with increasing test speed.
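The tag syntax above comes straight from our queries; everything else in this sketch (the in-memory index, the class and method names) is an assumption for illustration, since the real Graffiti service is a separate backend:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Sketch of a Graffiti-style lookup: listings are pre-tagged with
// keywords, and a query like "for_sale + has_photos" returns a listing
// carrying every requested tag.
public class GraffitiLookup {
    private final Map<String, Set<String>> tagsByListingId;

    public GraffitiLookup(Map<String, Set<String>> tagsByListingId) {
        this.tagsByListingId = tagsByListingId;
    }

    // Parses "tag1 + tag2" and returns the id of any listing that
    // carries all of the requested tags, if one exists.
    public Optional<String> find(String query) {
        Set<String> wanted = new HashSet<>(
                Arrays.asList(query.trim().split("\\s*\\+\\s*")));
        return tagsByListingId.entrySet().stream()
                .filter(e -> e.getValue().containsAll(wanted))
                .map(Map.Entry::getKey)
                .findFirst();
    }
}
```

Because the tags are indexed ahead of time, a lookup is a set-containment check rather than an ad hoc SQL query over a large backend, which is where the speedup comes from.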

Ownership

Ownership of test automation is also very important. Who is responsible for implementing tests, maintaining them, and reporting the issues automation finds? Initially the QA team, including myself, owned the entire process: creating, running, and maintaining tests and reporting results. A number of issues arose when QA was the primary owner of automation. The first problem was knowledge sharing. The developers were not involved with test automation at all, and thus had no idea what was covered or how the tests worked. This made it extremely challenging for them to fix broken tests or interpret results. Another problem was reporting and visibility. QA were the only ones to bring failed tests to the developers’ attention, which created an unnecessary bottleneck in the flow of information.

Ownership is now shared between QA and the developers on our team, and both groups benefit from this partnership. QA gets access to developers to improve the way our automation framework is coded; after all, test automation is essentially a development effort. Developers gain insight into how QA tests specific features in our app, and both groups now have a better overall view of what the tests cover and how they work. Any new feature in our app by definition needs automation around it to be considered done, and developers are now jointly responsible for this effort. Automation for existing features is QA’s responsibility to implement, and QA also owns reporting and the general health of the suite. Maintenance of existing known-good tests is now the responsibility of the developer who broke the test or experienced a test problem. This makes sense because our tests are now reliable, so any failure directly identifies a problem a developer has introduced.

Priority

Next up, we have the priority of the automation effort. Is your team’s automation a top priority, or does it run on the side? We can’t expect automation to bring full value without prioritizing the effort. Automation on our team originally ran on the side: new automation work would often be de-prioritized in favour of new features in our app, and the QA group would try their best to maintain existing tests as well as create new ones, keeping in mind there was still manual test work to be done. It seems strange in hindsight: we had automation, but because it wasn’t properly prioritized it didn’t bring the full value it could. Our test automation is now high priority, and any new feature must have automation around it to be considered done. We leverage our top developers when we need to tackle difficult test framework issues, and our test code now lives inside our app project. Finally, we learned to run this effort like a full-blown product, with a separate backlog that prioritizes automation work. If your team is not prioritizing your automation effort, then I wonder how much value you are getting out of this process.

Execution Point

Finally, we have come to what I believe was the most impactful change we made to our automated process: the execution point. Originally, we triggered our tests after a merge occurred. If you think about this execution point, it really is not the most valuable point at which to run automation: if code is merged into a branch before its quality is verified, you are not allowing your existing test automation to bring full value. We now test throughout the sprint, and automation is leveraged as a gating factor in our development process. The developers on our team create a GitHub pull request which contains a small feature in isolation. As soon as a pull request enters our system, our automated build and test jobs are executed. If the smoke tests fail, the developer cannot merge their work into the base branch. While this is logical, at first we found ourselves not following this process; it was not picked up until it was enforced.

It is important to highlight all the great things that occurred after we changed our execution point:

  • Developers had to fix broken tests to get their code merged.
  • Potentially bad code was not allowed to enter the base branch.
  • Communication between team members increased.
  • Bugs were found early. Tests can’t improve quality when they are executed downstream; it’s too late!

If you have test automation in place and want to unleash its full potential, try revisiting some of the key areas we did. With these changes in place, our team is now able to focus on adding more tests and ultimately increasing quality and speed to market.

Brad Thompson

About Brad Thompson

Brad Thompson is a Quality Engineering Manager with Move, Inc., having over 16 years of experience working in the software industry across a wide variety of sectors. Technical, with a drive for achieving quality, Brad has specialized in software test automation and is always working to improve the benefits automation can bring to his Agile projects. LinkedIn Profile