How many testers does it take to install a doorbell? Isolating a defect in real-life

As testers, we tend to be analytical thinkers by nature.  This is a valuable quality in our work, but how often do we take note of how we exercise those skills in other areas of our life?

The following is a true story of three software testers attempting to install a wireless doorbell system and the parallels between this task and a typical software development process.

Requirements

The requirements for this project were pretty simple.  In an office with an exterior door to the street, without a dedicated receptionist, a system was needed to notify staff whenever somebody entered the office.

Implementation

To satisfy these requirements, an off-the-shelf product was selected.  The chosen product was advertised as a “portable wireless chime & push button” which included the following features:

• 6 chime tunes with CD quality sound and fully adjustable volume control
• Individually coded bell push button to avoid interference with other users
• 450 ft. range (137 meters)
• Low battery indicator
• Easy installation – no wires and no need to program the included bell push button

Easy installation was the biggest selling point for this particular unit. (It was also the source of what would prove to be a false sense of optimism.)

With the product chosen and purchased, the integration work for this system (installation) would be done by internal staff.  Because installation was a multi-step process, we made sure to test it at each step.

Unit Test

The first step was to test each of the individual components while they were laid out on a table.  These components included:

• A set of door magnets connected to a wireless transmitter
• A separate wireless push button for the exterior
• A base unit with a wireless receiver and chime

Initial unit tests all passed.  Pressing the exterior push button unit activated the chime as intended and so did separating the door magnets.  It seemed we were ready to proceed with the integration.

Integration Test

The next step involved mounting one of the two door magnets, the wireless transmitter and the exterior push button, then testing that they were still able to activate the chime on the base unit.  At this point, everything continued to work as expected, so it was time for the full deployment.

Smoke Test

After mounting the second door magnet and the base unit, deployment was complete and the system was ready for an initial smoke test.  Since the door magnets were the highest priority items, they were the first component to be tested.  To perform the test, we would simply open the door and confirm that the chime on the base unit activated.

Unfortunately, the actual behaviour that was observed differed from the expected behaviour.  In this case, nothing happened at all.  It would seem that we had a defect.

Isolating the Defect

In order to correct the issue, we first needed to investigate and determine the root cause.  Our first hunch was that, when attached to the door and frame, the magnets were no longer making contact.  Adjusting their positions and forcing contact proved that this was not the cause after all.

Next, we considered that the metal door may be interfering with the magnets, but removing them from the door disproved that theory as well.

We thought that perhaps there was some other interference between the wireless transmitter and the base unit, which was now installed on the other side of the room.  We removed the base from the wall and placed it near the transmitter once again, eliminating that as a possibility.

Did a wire come loose when we were installing the components?  No, that wasn’t it.  Maybe the transmitter and receiver had somehow become un-paired?  No, that wasn’t it either.

The list of possibilities was shrinking quickly.  What else could possibly be causing the problem?  Then, we had a thought.  What if the battery in the transmitter was dead?  That seemed improbable though, because we had already tested each of the components previously.  Still, it was worth testing again just to be sure.

Using the battery from the exterior push-button unit, we tested the theory and Eureka!  That was it!  After all the setup, testing and troubleshooting we had done, a single dead battery was the cause of all our problems.

Summary

Looking back, what did we learn through all this?  We learned that, similar to the process of isolating a defect in the code we are testing, an analytical approach to troubleshooting helps us quickly eliminate possible causes of a problem and home in on the true root cause.

On the other hand, our trial-and-error approach to troubleshooting meant that we investigated many other possible causes before isolating the real issue.  By following a different approach, might we have found the issue even more quickly?

In this case, the same team was involved in all phases of the project, from selecting the product, to performing the integration and, ultimately, performing the testing.  Is it possible that our detailed knowledge of the system affected our perspective on the situation?  As testers, we sometimes assume complicated root causes and the risk of this can increase as we become more knowledgeable about the inner-workings of the system we are testing.  This highlights the value of the external perspective offered by a team of independent testers.

We also learned the importance of challenging assumptions.  In our case, we initially assumed that the batteries that were included with the unit were fully (or mostly) charged.  Had we continued to accept this assumption as fact, we would never have found and corrected the issue at all.

And, finally, we learned that while testers may be good at testing software, doorbell installation may be best left to professionals from other disciplines.

Posted in  All, Other | Tagged | Comments Off on How many testers does it take to install a doorbell? Isolating a defect in real-life

When To Get Serious About Testing

Some years back, I taught a course on software configuration management. In that course, we discussed the benefits and drawbacks to integrating change control within the software development process considering that the software is being actively developed or matured during that process. A concept that I presented in the course was to apply only the level of control that was appropriate for the phase or maturity the project/software had reached.

This same concept is applicable when deciding how to establish the most useful balance between providing early feedback and performing “serious” testing. Continue reading

Posted in  All, Agile Testing, Planning for Quality, Test Planning & Strategy | Tagged , , | Comments Off on When To Get Serious About Testing

Exploratory Testing on Agile Projects

Exploratory testing provides both flexibility and speed, characteristics that have become increasingly important with the quick pace of short agile iterations. But, how do you retain traceability without losing your creativity? The answer is xBTM – a combination of session-based test management (SBTM) and thread-based test management (TBTM).

On January 15th, 2013, Christin Wiedemann presented “Exploratory Testing on Agile Projects – Effective, Efficient and Engaging” to the Calgary SQDG User Group.

In SBTM, exploratory testing is structured and documented in sessions. However, at times the work environment is too hectic or chaotic and requires the flexibility and freedom that is provided by TBTM. xBTM unites the two exploratory techniques to get the full advantage of both, focusing on increased test efficiency and creation only of artifacts that actually add value. In this talk, Christin discussed the difference between SBTM and TBTM and demonstrated how the two methods can be combined for best efficiency. Using a mock example, participants were walked through an xBTM workflow on an agile project, covering all steps from test planning to test reporting. The focus was on practical examples and providing a range of flexible tools that can be immediately applied on almost any project.

View the full presentation …

Posted in  All, Agile Testing, Test Planning & Strategy | Tagged , , , | Comments Off on Exploratory Testing on Agile Projects

Meaning of Quality: Different Things to Different People

In my first job following university, I sold financial products on commission.  In this role, I had to memorize a script which I can still remember more than 20 years later.

“Financial planning means different things to different people.  To some, it means three square meals a day and a six-pack at the end of the week.  To others, it refers to complicated investment portfolios.  What does financial planning mean to you?”  

This part of the sales pitch was intended to engage the prospective customer and pinpoint their level of financial need and sophistication.  Based on that need, the rest of the sales approach could be adjusted to respond to the preferences and priorities of the customer, leading to the sale of an appropriate financial package.

When I taught Software Quality, I would begin my classes with a similar approach, asking about the types of product and company represented in the classroom.  This was done for two reasons:

  • Determine the relative level of knowledge and sophistication of my students
  • Identify where to emphasize and prioritize my message.

The outcome would be a quadrant chart reflecting the types of organizations and software applications.  This knowledge helped me, as an instructor, connect with my audience and establish the relevance of my subject material, and helped my students connect what they learned in the classroom with their workplace challenges.

For example, topics like Configuration Management, Change Management, and Release Management can vary, depending on the complexity of the product, as well as the company culture.   An approach that would be suitable for managing software for Aerospace or Medical Imaging (with legal compliance issues) would be overkill and counterproductive for a more entrepreneurial operation in an unregulated environment.  For example, a two-person operation making mobile gaming apps would not serve its best interests by establishing a rigid Change Control Board, even though this approach is specified in ISO, IEEE, and CMMI references.

So while Quality can be defined at a high level, the application of those Quality principles will vary based on the context of the systems, solutions, regulatory environment, market demands, and capabilities of those providing deliverables to customers.  The optimal solution is shaped by all of these inputs.

This consideration is also important when a Quality practitioner is shifting across industries, product types, or organizational cultures.  The expertise of a qualified and experienced Quality practitioner is necessary to make the appropriate determination of the optimal Quality solution, thus validating my proposed definition of Quality:

Pursuit of optimal solutions contributing to confirmed successes fulfilling accountabilities.

Posted in  All, Planning for Quality | Tagged , | Comments Off on Meaning of Quality: Different Things to Different People

xBTM: Harnessing the Power of Exploratory Testing

Exploratory testing provides both flexibility and speed, characteristics that have become increasingly important, especially with the quick pace of short agile iterations.  But, how do you retain traceability in exploratory testing without losing your creativity? How do you, as a manager, actually manage testing that is unscripted and improvised? One answer is to use a combination of session-based test management (SBTM) and thread-based test management (TBTM) called xBTM.

Session-based Test Management (SBTM)

SBTM (ref: Jonathan Bach, 2000, http://www.satisfice.com) is the reply to the common misconception that exploratory testing is, by its very nature, always unplanned, undocumented and unmeasurable. Using SBTM, exploratory testing is done in time-boxed sessions. Each session has a mission that is specified in the test charter; that is, the sessions are actually planned ahead (but not scripted). A session results in a session report that provides a reviewable record of the testing that was done. Based on the session report, the test manager can derive various metrics to track the progress of the testing. SBTM doesn’t have much in common with “ad hoc” testing; in fact, SBTM is quite strict, sometimes even too strict.
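As a rough illustration of the kind of metrics a test manager might derive from session reports, here is a minimal sketch (the report fields, charters and numbers are invented for illustration, not taken from any SBTM tool):

```python
from dataclasses import dataclass

@dataclass
class SessionReport:
    """One time-boxed exploratory session (hypothetical fields for illustration)."""
    charter: str       # the mission for the session
    duration_min: int  # actual time spent
    bugs_found: int

def session_metrics(reports):
    """Derive simple progress metrics from a list of session reports."""
    total_time = sum(r.duration_min for r in reports)
    total_bugs = sum(r.bugs_found for r in reports)
    return {
        "sessions": len(reports),
        "total_hours": total_time / 60,
        "bugs_per_session": total_bugs / len(reports) if reports else 0,
    }

reports = [
    SessionReport("Explore login error handling", 90, 3),
    SessionReport("Probe checkout boundary values", 60, 1),
]
print(session_metrics(reports))
# → {'sessions': 2, 'total_hours': 2.5, 'bugs_per_session': 2.0}
```

The point is that the session report, not a pre-written script, is the reviewable artifact from which tracking falls out almost for free.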

Thread-based Test Management (TBTM)

There are environments too chaotic or hectic for SBTM to work properly, and this led to the introduction of TBTM (ref: James Bach, 2010, http://www.satisfice.com). TBTM is a generalization of SBTM that allows for more freedom and flexibility. The core of TBTM is threads and, similarly to sessions, threads are test activities intended to reach a certain test goal. The big difference is that whereas a session is a commitment to complete a specific task in a given amount of time, threads come with no such commitments. A thread can be interrupted, resumed, canceled or go on indefinitely. SBTM defines both where to start and where to end, but TBTM only defines where to start. In TBTM, activities are allowed to change over time. TBTM is the free-spirited sister of SBTM.

SBTM -> TBTM -> xBTM

SBTM and TBTM both have their strengths and weaknesses, and rather than having to choose one over the other, xBTM was created (ref: http://www.addq.se/utforskande-testmetodik-xbtm). xBTM unites the two exploratory techniques to get the full advantage of both, focusing on increased test efficiency and creation only of artifacts that actually add value. The name xBTM highlights that it is a combination of SBTM and TBTM (x = S or T).
The xBTM workflow is centered around a mind map:
1. List all test ideas and activities – these are your threads.
2. Arrange all threads in a mind map grouped by function area or test technique. This constitutes the test plan.
3. Prioritize the threads.
4. Use SBTM when possible. Group threads to create suitable charters and create session reports once the charters are executed.
5. Use TBTM when SBTM is not an option. Follow the threads where they take you and add new threads when needed.
6. Update the mind map continuously, showing the progress and state of the threads. The mind map is the status report.
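The bookkeeping behind steps 1–5 can be sketched in a few lines (the threads, areas and the time-boxing rule below are hypothetical, chosen to mirror the e-shop example in this article):

```python
# A minimal data sketch of the workflow above -- names are illustrative, not from any tool.
threads = [
    {"area": "Accounts", "name": "Create user account",       "priority": 1, "mode": None},
    {"area": "Shopping", "name": "Add item to shopping cart", "priority": 1, "mode": None},
    {"area": "Shopping", "name": "Checkout with coupon",      "priority": 2, "mode": None},
]

# Steps 4-5: per thread, decide whether it can run as a time-boxed
# session (SBTM) or must stay a free-form thread (TBTM).
def assign_mode(thread, can_timebox):
    thread["mode"] = "session" if can_timebox(thread) else "thread"
    return thread

# Step 3: work through the threads in priority order.
for t in sorted(threads, key=lambda th: th["priority"]):
    assign_mode(t, can_timebox=lambda th: th["area"] == "Accounts")

print({t["name"]: t["mode"] for t in threads})
# → {'Create user account': 'session', 'Add item to shopping cart': 'thread', 'Checkout with coupon': 'thread'}
```

In a real project the `can_timebox` decision is a human judgment call, not a rule; the sketch only shows that the mind map's state can be kept as plain data and re-rendered whenever it changes.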

Using a mind map, rather than a traditional text document, makes conveying the test plan easier, and it helps communication — not only within the test team but across the whole project team. Figure 1 shows a simplified example of what the xBTM mind map for testing an e-shop might look like.

Figure 1: Example of an xBTM mind map. Here, an e-shop is being tested and the mind map shows some of the threads. For example, “Create user account” is being tested in a session, whereas “Add item to shopping cart” is being tested as a thread.

Want more?

Want to know more about xBTM? A detailed walk-through of applying xBTM on a real project is given here: Follow-up on xBTM.

Also, look for our workshop “Exploratory Testing on Agile Projects” at STAREAST in 2013.

Links

There are many different mind map tools to choose from and, in most cases, the core features are very similar. Which one you end up picking will mainly be a question of personal taste. The author has had very good experiences using the following three tools:

  • XMind – http://www.xmind.net: A powerful tool with a good selection of icons and markers. The base version is free and there is a pro version available for purchase. XMind was used to create the example in this article.
  • Mindmeister – http://www.mindmeister.com: Collaborative tool that stores mind maps online and allows multiple users to edit the same mind map simultaneously. The free version limits the number of mind maps you can create and store.
  • FreeMind – http://freemind.sourceforge.net/wiki/index.php/Main_Page: The simpler of the mind mapping tools, but still very useful and completely free.

Additionally there are a number of tools that help you manage your session-based testing. Most of these tools focus on recording the session reports. Three tools which the author has used in the past are:

Posted in  All, Agile Testing, Test Planning & Strategy | Tagged , , , | Comments Off on xBTM: Harnessing the Power of Exploratory Testing

Release Criteria – What Is Your ‘Quality Bar’?

In a previous article, I discussed managing risk with quality gates (“None Shall Pass…unless? Managing Risk with Quality Gates”). Such gates and their expectations facilitate tracing issue root cause and examining which preventative measures failed or need to be enhanced so as to continue to lower costs associated with poor quality.

But that last gate is there to enforce a level of standard, or ‘quality bar’, for what is ultimately to be published, deployed, or otherwise seen outside of the project team itself.  To guard that gate, we need “release criteria”.

Continue reading

Posted in  All, Planning for Quality, Test Planning & Strategy | Tagged , , , | Comments Off on Release Criteria – What Is Your ‘Quality Bar’?

Jumping Into Mobile Application Testing – A Continuously Moving Target

Adoption of mobile devices is at an all-time high and the demographics of the technology market are changing. If the user base changes, user acceptance testing must necessarily change. You can probably trust a Linux user running your command line app to steer pretty close to the intended workflow, but mobile devices are now in the hands of people from all walks of life and technology experience levels. According to a study done by mobile analytics company Flurry:

“Compared to recent technologies, smart device adoption is being adopted 10X faster than that of the 80s PC revolution, 2X faster than that of 90s Internet Boom and 3X faster than that of recent social network adoption. Five years into the smart device growth curve, expansion of this new technology is rapidly expanding beyond early adopter markets such as North America and Western Europe, creating a true worldwide addressable market.“ (http://blog.utest.com/).

And if mobile devices are in everyone’s pocket, it means that they are everywhere. This brings us to yet another configuration variable unique to the mobile market: carriers. The iPhone and/or iPad alone are available through more than 100 different carriers worldwide. Here’s a quick (approximate) region-by-region breakdown:

  • North America: 11 carriers
  • Europe: 44 carriers
  • South and Latin America: 19 carriers
  • Asia: 35 carriers
  • Middle East: 12 carriers
  • Africa: 7 carriers

Each carrier presents a unique coverage footprint with varying signal strengths throughout to consider. How does your app perform with weak or overloaded signals? There is also the increasing divide between 3G and 4G coverage (http://c954852.r52.cf0.rackcdn.com/).

This is one of the key challenges of mobile application testing: to keep your test matrix from getting out of control under so many configuration variables. I expect that risk management (as usual) will continue to have a large role to play in this, but it is also worth mentioning that the quick adoption of mobile devices has the almost Newtonian side effect of causing quick, relentless obsolescence. Apple has already discontinued iOS 3 (released 4 years ago; compare that with Microsoft’s plans to discontinue support for the now 11-year-old Windows XP platform in 2014, http://www.theregister.co.uk).

The same applies to several older iOS devices, including the first two iPhone models (with the first iPhone release being relatively recent, in January 2007 http://en.wikipedia.org/). Quick change, quick adoption and quick obsolescence means that there is relatively little risk in limiting your testing scope to the current versions, an observation which appears to be consistent with the configuration lists I’ve seen in real-world mobile test plans. More carriers, features and regions are also to be expected though.

To end on a fun note, I challenge you to complete this test on mobile apps (the last question is especially dedicated to testers):

  • True or False: “App” was the American Dialect Society’s 2010 Word of the Year.
  • True or False: As of 2012, the iOS operating system has greater market share than Android.
  • True or False: Angry Birds is the #1 downloaded app of all-time.
  • True or False: There are over 6k distinct Android devices.
  • True or False: More people spend time on the mobile web versus native apps.
  • True or False: The Android version of Siri is named Veronica.
  • True or False: Draw Something was developed by Rovio.
  • True or False: Apple Co-Founder Steve Wozniak owns (and loves) a Windows Phone.
  • True or False: Free apps have faster load times than paid apps.
  • True or False: Android apps crash with greater frequency than iOS.

Please visit below for the answers:

(http://www.mobileapptesting.com)

Posted in  All, Other | Tagged , | Comments Off on Jumping Into Mobile Application Testing – A Continuously Moving Target

IT Consulting – Sales Experience Required?

Sales experience needed in an IT consulting role?

OK, first answer these questions…

Who are you?
What do you do?
What do you love?

Then, consider…

Continue reading

Posted in  All, Other, Team Building | Tagged , , , | Comments Off on IT Consulting – Sales Experience Required?

Model-Based Testing – Learning Experience

As a software tester, I find that using models is very helpful. Models allow us to understand a business process or how a system should behave in a given situation. In my practice of software testing, I keep sketches of a system’s behavior in various forms (UML, data flow, flowchart and so on) while learning a new solution, and then translate these sketches into one big diagram in Excel. This has also proven to be a great tool for confirming my understanding of the system’s functionality with business users and developers. I always wished there was a way to generate the test cases straight out of my diagrams…

With model-based testing, the model of the system’s behavior is made explicit (not held only in the tester’s head for a brief time) and used as the basis for complete automation of testing. Based on my experience, model-based testing is conceptually simple but can be complex in practice, largely due to unfamiliarity with the tools chosen for this research. I started researching model-based testing, its background and where it is today in the world of software testing. Harry Robinson, one of the proponents of this technique, has used model-based testing at AT&T Bell Labs, HP, Microsoft and recently Google (http://model-based-testing.info/2012/03/12/interview-with-harry-robinson/). Surprisingly, model-based testing is not as widely used in North America as it is in Sweden, Estonia, Germany and many other parts of Europe, where they hold annual user conferences and various user groups.

Model-based testing is an approach based on creating test cases from models describing expected behavior (usually functional) of the system under test. I thought this was a nice testing concept but the question is: how is it executed and how effective is it? A wide range of model tools can be used in model-based testing but, for this research, we chose to explore a modeling tool (yEd http://www.yworks.com/en/products_yed_about.html) and a tool that will enable test case generation from the models (GraphWalker http://graphwalker.org/) that are created.

Learning the modeling part is simple, although it requires a different way of thinking. Perhaps many of us are more familiar with flowcharts – model-based testing uses the theory of finite state machines. A finite what? A turnstile is a classic example of a finite state machine. It has two states, locked and unlocked, and there are two inputs that affect its state: the coin input and the arm-push input. In the locked state, putting a coin in the slot shifts the state from locked to unlocked, and a person pushing through the arms shifts the state back to locked. Once you have understood how to model, it becomes easy. However, like any test preparation activity, modeling can be time consuming – but a well-designed model is invaluable.
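The turnstile just described maps directly onto a transition table, which is all a finite state machine really is. A minimal sketch:

```python
# The turnstile example from the text: (current state, input) -> next state.
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",  # inserting a coin unlocks the arm
    ("locked",   "push"): "locked",    # pushing a locked arm does nothing
    ("unlocked", "coin"): "unlocked",  # extra coins leave it unlocked
    ("unlocked", "push"): "locked",    # pushing through re-locks it
}

def run(inputs, state="locked"):
    """Feed a sequence of inputs through the machine and return the final state."""
    for event in inputs:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["coin", "push"]))   # → locked
print(run(["coin"]))           # → unlocked
print(run(["push", "coin"]))   # → unlocked
```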

The next step in model-based testing is the generation of test cases from the model. This can take a long time to set up and get running – and involves a lot of trial and error in fixing the model. After getting past this hurdle, the next challenge is how to verify that the generated test cases are correct. This has led me to research and learn the basics of various algorithms that the tool uses to create the test cases.

The tool generates test cases by traversing various paths outlined in the model and uses combinations of parameters on how test cases should be created. Because model-based testing generates test cases from a model based on functional requirements, it is easy to change them when the requirements change. A simple adjustment in the model can generate new test cases. Testers’ time can be used more efficiently by executing test cases rather than manually maintaining them.
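To make the path-traversal idea concrete, here is a toy generator that walks a model graph at random and records the inputs along the way (a simplified sketch of the concept only, not how GraphWalker is actually implemented):

```python
import random

# Toy model of the turnstile as a directed graph: state -> [(input, next state)].
MODEL = {
    "locked":   [("coin", "unlocked"), ("push", "locked")],
    "unlocked": [("push", "locked"), ("coin", "unlocked")],
}

def generate_test_case(start="locked", steps=5, seed=None):
    """Walk the model at random; the sequence of inputs taken becomes one test case."""
    rng = random.Random(seed)  # a fixed seed makes the walk reproducible
    state, case = start, []
    for _ in range(steps):
        event, state = rng.choice(MODEL[state])
        case.append(event)
    return case

print(generate_test_case(seed=1))  # a 5-step sequence of 'coin'/'push' inputs
```

Real tools use smarter traversal algorithms (covering every edge, shortest paths, weighted walks), which is exactly why verifying that the generated cases are correct required learning those algorithms.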

Model-based testing has already proven its worth and, even though it will not be the right solution for all projects, it is worthwhile exploring in more detail how much value it will add and what testing skills will be required to perform model-based testing. With the basic concept of model-based testing presented in this article, are you interested in using model-based testing? What types of application would you like to see it being used on?

Posted in  All, Requirements & Testing, Test Planning & Strategy | Tagged , , , , | Comments Off on Model-Based Testing – Learning Experience

Jumping Into Mobile Application Testing – Too Many Configurations?

At the end of our previous installment, we touched on the fact that Apple instills tight control on its iOS. At the other end of the spectrum, you have the open Android platform; widely embraced in a society where freedom of information is a growing concern. But openness is a double-edged sword, and Android devices have already faced some issues with malware. It isn’t much of a stretch to assume that spyware, bloatware and especially junkware are trailing right behind or are out there already; after all, “Developers of malicious software are nothing if not creative” (http://www.quickstonesoftware.com/blog/2011/06/07/android-market-time-for-certification). And, this is all happening on devices many believe are poised to replace your credit cards by the year 2020 (http://pewinternet.org/Media-Mentions/2012/Mobile-Payments-May-Replace-Cash-Credit-Cards-by-2020-STUDY.aspx).

Jakob Nielsen, “The King of Usability,” identified three advantages of native apps, and I was pleasantly surprised to find that one of them is directly related to testing:

  • Empirically, users perform better with apps than with mobile sites in user testing [emphasis added].
  • Apps are much better at supporting disconnected use and poor connectivity, both of which will continue to be important use cases for years to come. When I’m in London and don’t feel like being robbed by “roaming” fees, any native mapping app will beat Google Maps at getting me to the British Museum.
  • Apps can be optimized for the specific hardware on each device. This will become more important in the future, as we get a broader range of devices.

(http://blog.utest.com/testing-the-limits-with-jakob-nielsen-part-i/2011/04/)

It could be argued that testing web mobile apps presents a narrower challenge, and while it certainly seems like less of a headache than testing multi-platform ports, there’s still quite a bit to do. In a perfect world, you would be able to create your mobile web app using standard tools and expect it to behave similarly across all platforms, leaving the portability work to the natively-coded interpreters in the browser (can we please call this the Java pipe dream?). A perfect world ours is not.

“A test configuration is a set of configuration variables that specify the correct setup required for testing an application. …The configuration variables include hardware, operating system, software, and any other characteristics that are important to use when you run the tests. Each test configuration can represent an entry in your test matrix.” (http://msdn.microsoft.com/en-us/library/dd286643.aspx)
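To see how quickly configuration variables inflate a full test matrix, consider the cross-product of just three variables (the values below are invented examples; in practice you would also filter out impossible combinations, such as an iPhone running Android):

```python
from itertools import product

# Illustrative configuration variables -- values are examples, not a real project's list.
config_vars = {
    "device":  ["iPhone 4S", "iPad 3", "Galaxy S II", "BlackBerry 9900"],
    "os":      ["iOS 5", "iOS 6", "Android 2.3", "Android 4.0", "BB OS 7"],
    "network": ["WiFi", "3G", "4G", "offline"],
}

# Every combination is one entry in the full test matrix.
matrix = list(product(*config_vars.values()))
print(len(matrix))  # 4 devices * 5 OS versions * 4 networks = 80 configurations
```

Risk management then becomes the exercise of pruning those 80 entries down to the handful you can actually afford to run.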

Software testing as a discipline was relatively young when the first cellphones supporting third-party apps came around, but I can only imagine what the test configurations included in a test matrix might have looked like in those days. Every manufacturer had its own proprietary hardware design and OS, and the number of phone offerings in the market was overwhelming. Developers turned to Java and, while this was logically the sensible option at that point, I firmly believe that in this case, a good old monopoly was welcomed by developers and testers around the world. These days, Android has just captured over half the market, followed by Apple with around a third and RIM (Blackberry) hanging in with just under half of that. Windows phones continue to trail in the market, struggling to reach 4% (http://www.comscoredatamine.com/2012/04/android-captures-majority-share-of-us-smartphone-market/).

The configurations list I’m looking at for my current assignment (mobile web app) has been narrowed down to a handful of devices and eight emulators mimicking a few OS releases. iPhone 4S and iPad 3 are in there, but the Android offering is reduced to the Samsung Galaxy (two models thereof). Risk management is a big part of software testing, so I’m kind of wondering if Sony should have been factored in that equation as well, being that they just came out with their shiny brand new HD, PlayStation-compatible Xperia phones. The Blackberry makes an appearance as well with the 9900. Because the client is very interested in portability testing, the Blackberry becomes an interesting case study here. While the Android design takes more than a few cues from iPhone, the Blackberry is a distinctively different device. I must admit I am somewhat disappointed to see the 9900 featuring a touch-screen, as I was really looking forward to seeing how the app would perform in the complete absence of one (logically, this can still be attained through test case restrictions, but I guess I’m a purist at heart).

In any case, the future of test matrix complications derived from user interaction is still bright, with styluses making a comeback, voice commands being given yet another chance with Siri and split-screens being produced by some manufacturers.

I have waited until this point in the article to discuss the elephant in the room: we carry all of these networked devices around in our pockets. Have you ever considered how neat it is that your app display rotates to accommodate using the phone in portrait or landscape mode? BAM, your layout test cases just doubled. Also, they are phones, what happens if they ring while the app is running? Welcome to the magical world of interrupt testing (http://en.wikipedia.org/wiki/Mobile_application_testing). They have batteries and no back-up power – what does your app do if the battery disconnects or runs dry? What happens to the data? They are connected to a wireless data network that is sensitive to location, geography and weather conditions – what happens when the signal is gone?

Stay tuned for next month’s installment where I’ll talk about the challenges of – “A Continuously Moving Target”.

Posted in  All, Other | Tagged , | Comments Off on Jumping Into Mobile Application Testing – Too Many Configurations?