“Project: Risky Decision” – a Card Game

I designed this teaching game in 2014 as a different sort of hands-on activity to supplement my Software Testing & Risk Management training sessions and presentations, specifically to highlight certain concepts around risk mitigation strategies.

This version of “Project: Risky Decision”, updated in 2020, is made available here as a free print-and-play card game in hopes it might do the same for you and your teams!


Project: Risky Decision is a fast-paced game of probabilities and eventualities where players take on the roles of Software Project Managers trying to release their projects successfully. Your company will rise or fall upon the success or failure of your risky decisions.


Players must complete each game without letting realized risks push their project past the Impact Limit – the total impact a project can absorb and still be considered a success.

Players mitigate the Likelihood and Impact values of their project risks with project events.

  • A lower Likelihood means a better chance of discarding the risk.
  • A lower Impact means fewer points accumulating towards the Impact Limit.
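
To see why lowering either value matters, here is a minimal simulation sketch. The mechanics are my own assumptions for illustration (a d10 roll at or under a risk's Likelihood means the risk is realized, and realized Impacts accumulate toward the limit); the actual card game defines its own values and rules:

```python
import random

IMPACT_LIMIT = 10  # hypothetical limit; the real value comes from the game

def play_project(risks, rng):
    """Roll a d10 against each risk's Likelihood; realized risks add their Impact."""
    total_impact = 0
    for likelihood, impact in risks:
        if rng.randint(1, 10) <= likelihood:  # lower Likelihood -> realized less often
            total_impact += impact            # lower Impact -> fewer points accumulate
    return total_impact

rng = random.Random(7)
unmitigated = [(8, 5), (6, 4), (7, 3)]  # (Likelihood, Impact) pairs, assumed 1-10 scales
mitigated = [(4, 3), (3, 2), (2, 1)]    # the same risks after mitigation

for label, risks in (("unmitigated", unmitigated), ("mitigated", mitigated)):
    impact = play_project(risks, rng)
    verdict = "SUCCESS" if impact <= IMPACT_LIMIT else "FAILURE"
    print(f"{label}: realized impact {impact} vs limit {IMPACT_LIMIT} -> {verdict}")
```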

This is a free print-and-play card game.  Download here.


Related Articles & Presentations

Here are a few of the more relevant articles and presentations from this site about risk-based testing and risk management. Think about them as you play the game and vice-versa.


Thanks for Playing!

A picture of when the cards still contained placeholder text and values


The Difficult Journey – Building Quality Software


The goal of building software is to produce a product that will bring value to our customers. As software developers, we have a reputation for finding and embracing new technologies. These new technologies, however, can only take us so far in ensuring the products we build are high quality and bring maximum value to our customers.

I fill the Quality Engineer role on my development team, and I have been involved in software development for many years. What I now see in the software industry is a lack of customer focus and an overemphasis on technologies and processes – a shift that turns the quality problem from a customer-value problem into a mechanical, technical one.

The thoughts written below have been floating around in my head, nagging at me, for the past couple of years, and I wanted to share them to see if they resonate with anyone out there. I will walk through many of the challenges we face today when attempting to build quality software, as well as possible solutions.

QUALITY

Quality is completely subjective. Ask each of your team members to describe what makes a piece of software high quality and you will get a number of different responses. These differing perceptions of quality make it very difficult to come up with a strategy that adds up to building a high-quality piece of software.

Let us explore what our customers might see or perceive as quality versus the activities / behaviors / processes we use to attempt to ensure it.

Customer

  • Solves their use case in a simple way which in turn brings them value
  • Easy to navigate and find / discover features they care about
  • Provides more value than the competitor
  • Can continue their journey when they return to the software
  • Familiar design patterns applied to the customer's platform
  • How well the software interacts with the other software they are using

Development Team

  • How fast we can deliver
  • How many features can we deliver
  • Test coverage numbers
  • Checks / Gates integrated through a CI (Continuous Integration) pipe
  • Using the latest technologies
  • Metrics which internally we decide represent quality

As you can see, the customer’s perspective of quality and the activities / behaviors / processes we use in an attempt to ensure quality don’t line up. I believe this causes us to sometimes assert things that don’t always cover the entire quality problem.

We need to understand what quality means to our customers. They are our North Star – let's follow them.

DISTANCE FROM THE CUSTOMER

Multiple teams are often involved in building software. These teams can sometimes be assembled by layer, for example, front-end and back-end. Other times teams are focused very narrowly on one single widget on a website. The trouble here is we optimize for delivery speed without necessarily thinking enough about the customer or how these narrow teams will bring value or quality to the customer.

This distance from our customer hurts our ability to ensure our software meets their needs and ultimately brings them value.

When we don’t keep our customers – our North Star – close, and constantly work to understand their needs, we cannot bring full value to them. This distance impacts how we test and ultimately leads to lower quality software.

TESTING

Testing software has slowly become about what we already know. The software we build is constantly changing, so simply repeating the same tests over and over doesn’t cut it. Our thinking about testing needs to expand to uncovering what we don’t know. When we assert what we already know over and over, we risk missing important discoveries that threaten our product’s quality.

Review the testing being performed by your teams and you will find a tendency to jump straight to locking in coded, automated assertions before deeply thinking about testing.

We have lost what matters most in software testing: the thinking. This is not about automation being a bad thing. In fact, automation and static assertions are fantastic at giving us a reliable quality baseline. Once an automation baseline is established, your team can spend more time thinking and exploratory testing.

Thinking during test creation as well as observing test execution results will lead to a deeper understanding of how our software works. This continuous thinking often exposes not only functional issues but even deeper insights about your software. These uncovered insights are then turned into important future questions / tests.

This activity is what turns okay software into a magical solution for your customers. Move the quality bar up for your teams.

PEOPLE

What types of people are we hiring to ensure the software we build brings value to our customers? Are we hiring the same type of people with very similar mindsets? Whoever fills the testing role must have a testing mindset – an ability to break up complex software and interrogate it, exposing risk early in our development process.

Strangely, even when we hire QA, or someone with the word “Quality” in their title, we sometimes measure the value they bring to the company by the number of automated tests they have implemented, not by how much risk they have uncovered.

I really feel this is not how we should measure a person who is testing on a software team. When testing, we are after exposing risk while also looking for opportunities to make our software better. Automation, although very important, is just one aspect of the required work.

If a team member, regardless of their title, understands your customer and consistently uncovers risk and opportunities in your software, this person is valuable. You can call this person a QA, a tester, or whatever you like. Take a look at what this person might be able to do to help make the team more effective at software testing and at accomplishing your goals.

Don’t fall for the claim that software testers or QA are “not scalable”. We don’t need to hire large groups of QA for each layer of our software, but I don’t want to hear that testing, or a person in charge of uncovering risk, is not scalable. What’s not scalable is delivering software incrementally at high speed that does not provide value to your customers.

PROCESS

The shift to Agile, although a good one, has forced software testing to change. In some ways I feel this constrained test time is good: it should make what we deliver smaller and thus easier to hold to a high quality bar. The trouble is we often want to deliver so much, so fast, that there is no time for adequate testing. So what happens in this scenario?

We only focus on the basics of our features. We automate those and move on.

Even if you manage to release without any major functional issues, chances are the software will be, at best, merely functional. We should want more than this. We want great usability, performance, and features that work intuitively. If there is no time for this, we just keep delivering more and more average software.

ACCOUNTABILITY

If a major issue occurs in production that impacts your customers, is anyone accountable? I’m not talking about walking over to someone’s desk and yelling about a missed issue, but is anyone accountable? If no one takes direct responsibility for these quality mishaps, it’s not long before the quality bar erodes lower and lower. Then you might hear things like:

“It’s ok, our response time to fix things in production is extremely fast.” The speed at which you can recover from problems is not the point. It’s great if you can recover quickly, as teams will sometimes make mistakes, but don’t lean on this mechanism over and over. Reward your team for delivering quality out of the gate, not for recovering from a problem that impacts your customers.

Your customer takes the brunt of this negative pattern. Your team needs to own your product and be accountable for issues that impact your customers. Talk about these mistakes and ask your teams how they might mitigate these types of issues in the future.

OWNERSHIP

Ownership can also be problematic when it comes to working towards building a quality product. Ideally, we want each member of the team to own quality. The real ask here is to have passion for what you are building.

When we simply say “we all need to care about quality”, I believe the goal can wash away. It gets watered down because no one person is accountable or has ownership.

Check the behavior of your teammates. If you’re lucky a few people will have passion and care about customer value. Others hide away from this ownership. Have a good look at who owns quality on your team and how that impacts the quality of your software.

REWARD

Reward and recognition are very important for keeping people motivated and feeling appreciated for their efforts. The trouble is we sometimes over-celebrate or reward internal achievements before the customer validates them as a success. We have all seen it: the email chain floating around, high-fiving teams internally, while customers are encountering quality problems in production. I know I might sound negative here, but your customer will decide whether you have achieved success, not your internal team.

Let’s recognize our work internally, but when our customers are happy, then let’s really start celebrating success. This is what matters most.

MEASUREMENT / METRICS

Ah, measurement. We all want to measure everything, as it helps us make decisions. The problem with software quality is deciding what to measure. What tells us the right answer? As noted above, quality is subjective, so how can we get the right metrics?

Should we say we have 80% unit test coverage, so we are high quality? Should we say we have executed X number of test cases, so therefore we are high quality?

We can measure internal results, but we really need to bring customer satisfaction with the software into the question. This is harder to do, and that’s why I believe we stick to the number of automated tests to measure quality.

Find a way to measure how you are doing against your customer’s expectations. Figure out how to bring customers into your quality metrics.

DEATH BY CANARY

After the team has decided that enough testing has been done and we are good to release, we will often use a canary mechanism to slowly release our new build to production users. This is a great mechanism, as we do sometimes make small mistakes, and knowing about them early and being able to mitigate them is a great idea.

That being said, if every release you ship is having issues, you are most likely abusing this mechanism. We shouldn’t use our valuable customers as an unpaid crowd of testers. Customers are real, and when they encounter problems in production they may not come back.

Let’s not ignore the cost to our customers when we abuse this canary pattern. Yes, we need it, but if you have consecutive releases that need rolling back, ask yourselves what bigger issue might be happening in your development process.

DEATH BY PAPER CUT

We are often looking to the next feature because of the internal “Reward” I mentioned above, but the fact is there are often many small quality issues which need to be fixed. If we continually choose future work over fixing the many existing small usability and functional issues, we accumulate quality debt. Fix these small customer problems. You will be surprised how much happier your customers will be.

Small fixes / tweaks to existing features are often more impactful than a future idea you have in your mind.


Most companies have the best of intentions in trying to build quality software that brings high value to their customers. The problem is they don’t live and breathe the customer. Let us explore some ideas below that may help us build better quality software.

SOLUTIONS

CONTINUOUS TESTING / EXPLORING

Just as early explorers looking for new lands did not sail the same routes each day, if we want to discover potential defects or risks that impact our software, we have to be continuously exploring – specifically, thinking about what might or could happen and its impact. The moment you stop this thinking, your test approach is frozen. You cannot expect to discover everything about your software if you stop thinking about it.

Challenge the product you’re building. Go beyond scripts and use the product to understand where it might not make sense or how a small tweak could really improve it. Understand that fixing three small usability issues could in fact be more valuable than your new shiny feature, which may or may not be impactful for your customer.

AUTOMATION AS A BASELINE

Understand that the attributes of quality we can assess with tools do not make up the entire quality problem. They assist us in our journey but should not shut the door on human thinking about our customers and what we need to do to bring value to them.

Think of automation as a quality baseline mechanism. Something you can rely on to assert that the key pieces are working as expected. Once this baseline is established you can then think and explore your software. This exploring will uncover key findings that will help your product move beyond simply functional.

Don’t abuse patterns. For example, we have the test automation pyramid, which says we should have most of our tests at the unit level, then the service level, and then the UI level. From a technical perspective this makes a lot of sense; however, the customer ultimately interacts with the UI when using our software. If we follow this pyramid as an overarching strategy, we may fail to assert how the product solves the customer’s problem at the most important layer – the layer they interact with.

REDUCE DISTANCE FROM THE CUSTOMER

The further you are from your customer, the harder it is to understand what matters to them. One possible fix is to schedule Bug Bashes. Bring in as many team members from all layers and business functions as you can. When issues are uncovered during the bash, you can understand as a group how your software works and where customers may be encountering problems.

Meeting as a large group to discuss your product is vital to success. When you do meet, everyone should bring their voice and their angle to the discussion.

CUSTOMER FEEDBACK

If you do not have a channel that allows your customers to communicate directly with your development teams, you are operating completely blind, in a black box. You are basically in a storm where no North Star can be seen. Information shared by your customers is pure gold. It’s special and amazing that your customers would be willing to share what pains them and troubles their experience. If you don’t have a feedback mechanism, stop everything and create one now.

After that, the next challenge is getting your team members to read this feedback every day. You need this customer feedback to understand the difficulties and frustrations customers are experiencing. With a better understanding of the customer’s problems, you can make better decisions about fixes and what to build next.

BE YOUR OWN COMPANY

Don’t follow other companies’ test strategies. They have a different product with different customers. Understand there may not be a single golden solution to software testing. Grow and evolve your testing strategy over time; don’t lock it in on day one.

Several large, successful companies have very different test strategies. Why would we blindly follow them? Review them, but iterate on your own strategy. Measure what is working and delighting your customers. Lead, don’t follow.

PEOPLE

Don’t hire people with the same mindsets and backgrounds. If you do, it could result in a very narrow view of software testing – a view that treats testing as a checkbox item which needs to be done, but only just enough. We want to prioritize testing to ensure we are delivering top value to our customers.

Hire a dedicated tester for your team. I know agile does not mention having dedicated testers, but why should we blindly follow a process that negatively impacts our products and customers? Testers are inherently good at breaking software apart, exposing risk, and guiding your developers. The way they think and expose problems with ease is a skill. At the same time, if you have testers, ensure they are positioned where they will have the most impact. For example, don’t apply the same tester-to-developer ratio on every team you have. Position these testers in the layer that has the potential for the most problems. Doing so can help guide your teams on what needs to be tested and how.

Finally, remember when hiring that there is no substitute for passion. Passion trumps most other skills, as most skills can be learned – passion cannot.


We all say we are customer first – hell, at times we put this text into our mission statements – but do our behaviors match up with it?

Great testing can help you deliver more value to your customer. If you have a test role on your team today, make it a special role. Let this person not only expose risk but also guide your teams in making small tweaks which will enhance your software’s quality.

I really hope you’ve enjoyed the information I have gathered in this article. I have had these thoughts running around in my head for a long time but had trouble bundling them all up. In writing this article, I have gained a clearer picture of what needs to be done to bring value to our customers through testing.

We all say we are customer first but let’s actually be customer first!


Risk-driven Testing vs. Risk-based Testing – It’s the Thought that Counts

I introduce my typical philosophy to planning/organizing the testing effort for a project, in part, as one using a risk-driven testing approach.  In a recent conversation with a client, they said they followed a risk-based approach to their testing, and asked whether I was referring to something different.

My initial reaction was to say no; they are pretty much the same thing.  But then, in the cautionary spirit of asking what one means when they say they “do Agile”, I asked them to outline their risk-based approach and describe how it informed their testing effort.

In the following conversation, it became clear that we had different flavours of meaning behind our words.  But, were the differences important or were we talking about Fancy Chickens?

Fancy Chickens

Image source: Silkie by Camille Gillet and Sebright by Latropox [CC BY-SA 4.0]

I have spent years talking about risk-based testing (ack!).  At some point, I began referring more often to risk-driven testing but continued to use the two terms interchangeably much of the time.  However, I have always preferred “risk-driven testing”.  At first, it was mostly from a phrasing point of view; “risk-based” sounds more passive than “risk-driven” because of the energy or call to act that the word “driven” implies.  But at the same time, risk-driven always helped me think more strategically.

From an implementation point of view, I was thinking: risk-based testing means here are the relatively risky areas; we should spend time checking them out, in a “breadth and depth” prioritized, time-permitting manner, to make sure things are working properly.  Whereas risk-driven testing investigates questions like: “is this really a risk?”, “how/why is it a risk?”, “how can the risk become real?”, etc.  Risk-driven testing is not just about running a test scenario or a set of test cases to check that things are (still) working; it is about supporting risk management, up to and including running tests against the built system.  And so, to me, risk-driven included all that was risk-based and more.

I don’t like to be “that guy” that introduces yet another “test type” or buzzword into conversations with clients or team members.  But, I do like to make distinctions around the value different approaches to testing can provide (or not) and sometimes a label can be very helpful in reaching understanding/agreement… including with yourself.

This recent conversation got me thinking a little deeper (once again) about how others use the term risk-based testing and the differences I think about when considering risk-driven testing.  Sometimes it even feels like they are two different things… Could I formally differentiate the two terms: risk-driven and risk-based?  To succeed, there should be value in the result.  In other words, the definitions of risk-driven testing and risk-based testing should, by being differentiated from each other, provide some distinct added-value to the test strategy/effort when one was selected and the other was not, or when both were included together.

In the spirit of #WorkingOutLoud, I thought I would take this on as a thought exercise and share what follows.

The question to be considered: “Can a specific, meaningfully differentiated (aka valuable), clearly intentioned definition be put behind each of ‘risk-driven testing’ and ‘risk-based testing’ so as to better support us in assembling our test strategy/approach and executing our testing?”

Defining Risk-Driven Testing and Risk-Based Testing

To work towards this goal, I will first define risk and risk management and then consider the meanings implied by “risk-driven” vs. “risk-based” in the context of testing.

What is a Risk?

Risk in software can be defined as the combination of the Likelihood of a problem occurring and the Impact of the problem if it were to occur, where a “problem” is any outcome that may seriously threaten the short or long term success of a (software) project, product, or business.

Risk Clustering – Project-Killing Risks of Doom

The major classes of risks would include Business Risks, Project Risks, and Technical Risks.

Managing Risk Involves?

Risk Management within a project typically includes the following major activities:

  • Identification: Listing potential direct and indirect risks
  • Analysis: Judging the Likelihood and Impact of a risk
  • Mitigation: Defining strategies to avoid/transfer, minimize/control, or accept/defer the risk
  • Monitoring: Updating the risk record and triggering contingencies as things change and/or more is learned

Risk Mitigation – Scarcity Leads to Risk-Driven Choices

A risk-value calculated from quantifying Likelihood and Impact can be used to prioritize mitigation (and testing where applicable).
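
As a concrete illustration (my own sketch, not a formal standard), assuming integer 1–5 scales for each factor and risk value = Likelihood × Impact:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk_value(self) -> int:
        # Quantified Likelihood x Impact, used to rank mitigation/testing priority.
        return self.likelihood * self.impact

risks = [
    Risk("Payment gateway timeout under load", likelihood=4, impact=5),
    Risk("Locale-specific date formatting bug", likelihood=3, impact=2),
    Risk("Stale cache served after config change", likelihood=2, impact=4),
]

# Highest risk value first: this ordering drives mitigation (and testing) priority.
for r in sorted(risks, key=lambda r: r.risk_value, reverse=True):
    print(f"{r.risk_value:>2}  {r.name}")
```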

Better to be Risk-based or Risk-driven?

Here I will brainstorm on a number of points and compare risk-based and risk-driven side-by-side for each.

Restated
  • Risk-based: “using risk as a basis for testing”
  • Risk-driven: “using risk to drive testing”

Word association
  • Risk-based: based on risk; founded on risk; a pool of risks
  • Risk-driven: driven by risk; focused by risk; specific risks

Risk values
  • Risk-based: used to prioritize functions or functional areas for testing
  • Risk-driven: used to prioritize risks for testing

At the core
  • Risk-based: effort-centric; confidence-building to inform decision-making; about scheduling/assigning testing effort to assess risky functional areas in a prioritized manner
  • Risk-driven: investigation-centric; experiment-oriented to inform decision-making; about the specific tests needed to analyze individual risks or sets of closely-coupled risks in a prioritized manner

Objective
  • Risk-based: testing will check/confirm that functionality is (still) working in the risky area
  • Risk-driven: testing will examine/analyze functionality for vulnerability pertaining to a given risk

Role of risk
  • Risk-based: don’t need to know the detailed why’s and wherefore’s of the risks in order to test efficiently
  • Risk-driven: need to know all the “behind-the-scenes” of the risks in order to test effectively

Primary activity
  • Risk-based: Test Management
  • Risk-driven: Test Design

Input to
  • Risk-based: Release Management
  • Risk-driven: Risk Management

At this point, my next thought is that both Release Management and Risk Management are parts of Project Management; therefore, Risk-based Testing and Risk-driven Testing are both inputs informing project management and the stakeholders that project management serves.  And although it seems that I am identifying how the meaning behind the two terms could diverge, i.e.:

  • Pool of risks vs. specific risks
  • Effort management vs. technical investigation,
  • Testing functionality vs. testing risks,
  • Efficiency vs. effectiveness,
  • Planning vs. design,
  • etc;

… I am wondering if I can approach this exercise from a truly non-biased perspective – it feels like I am trying a bit too hard to make Risk-driven Testing, as a term, different from (and more important than) Risk-based Testing, rather than simply identifying a natural divergence in meaning.  Am I just creating a new kind of Fancy Chicken?

Familiar Thoughts

My attempt to separate these two terms reminds me of Michael Bolton and James Bach making their distinction between Testing and Checking for their Rapid Software Testing (RST) training course.

In “Testing vs. Checking“, Michael Bolton summarizes their view, in part, by stating:

  • Checking is focused on making sure that the program doesn’t fail. Checking is all about asking and answering the question “Does this assertion pass or fail?”
  • Testing is focused on “learning sufficiently everything that matters about how the program works and about how it might not.” Testing is about asking and answering the question “Is there a problem here?”

He goes on to also state:

  • Testing is something that we do with the motivation of finding new information.  Testing is a process of exploration, discovery, investigation, and learning.

This, and their other related articles, led me to doodle the following comparison of how Risk-driven Testing and Risk-based Testing might align with different parts of the RST thinking:

Risk-driven Testing and RST

* per the RST namespace circa 2009

The article quoted above is from 2009, and in subsequent writings both Michael Bolton and James Bach have evolved their position and definitions.  For example, James Bach has declared the use of “sapience” to be problematic and has introduced “human checking” vs. “machine checking”.  You can read about that in “Testing and Checking Refined” and also see that Testing has now been (re)elevated to include these refined definitions of Checking.  And in “You Are Not Checking“, Michael Bolton has tried to clarify that humans aren’t (likely) Checking even if they are… checking?  They are clearly continuing to adapt their wording as they gain further feedback on their constructs.  But it also feels like wordplay: someone didn’t like a word, so another is put in its place, more discussion/argument follows, and that discussion often ends up being more about the word(s) being used than about the usefulness of the concept being advanced.

And here I am falling into the same trap.  But even worse, I am trying to use the words “based” and “driven” (the third most important words of the three in each term) to make a division in something that is otherwise fundamentally the same thing (considering the other two words in each term are “risk” and “testing”); namely, an approach to testing that uses risk to guide its planning, preparation, and execution.  We are not talking about a new/different method/technique of testing, but an over-arching approach to testing.  Instead of strengthening the whole, I feel like I am trying to pull the two terms apart just to be able to give Risk-driven Testing its own identity – its own lesser identity, ultimately, as it would need to leave some aspects behind with Risk-based Testing.

Risk-Driven Testing: Interactions

I am trying to force an “A”, “B”, or “C” when it is already an “E”, from a practical point of view.  (If there were a third part to consider, then perhaps there would be a stronger case for differentiation – and then maybe a need to name the whole as a new approach to testing!)

But before turning away from trying to separate the two terms, let’s see if anyone else is already trying to make a distinction between Risk-driven Testing and Risk-based Testing.  Maybe someone has come up with a good angle…

Reinventing the Wheel, or the Mousetrap?

Apparently not.  One of the difficulties in using Risk-based Testing or Risk-driven Testing is that both terms are already in use, and neither seems to be used consistently or well.  But I couldn’t find any uses where Risk-based Testing and Risk-driven Testing were set at odds with each other – the opposite, in fact; they are used interchangeably, or even in the same breath/line.  Risk-based Testing was, however, the more common term.  Let’s look at some of those definitions…

Most descriptions of Risk-based Testing are vague/broad or touch primarily on prioritizing the scope of the test effort (and test cases) like:

“Risk-based testing (RBT) is a type of software testing that functions as an organizational principle used to prioritize the tests of features and functions in software, based on the risk of failure, the function of their importance and likelihood or impact of failure.”

https://en.wikipedia.org/wiki/Risk-based_testing

This one sounds like how I was describing Risk-based Testing above.  But it is the “cart-before-the-horse” usages of that definition I found – where the interpretation is to focus on descoping tests once the schedule gets tight, or just to use risk to (de)select which test cases to run – that are discouraging and reinforce my desire to turn to a “new” term, one that doesn’t feel so… tarnished.

We always have limited time, limited resources, and limited information and there is never going to be a project that is risk-free/bug-free.  So we always have to be smart about what we do and find the cheapest/fastest effective approach to testing.  Risk-based Testing can be part of the answer if one employs it as it is meant to be.

In “Heuristic Risk-Based Testing“, James Bach (1999) states that testing should follow the risks (akin to how I described Risk-driven Testing above):

  1. Make a prioritized list of risks.
  2. Perform testing that explores each risk.
  3. As risks evaporate and new ones emerge, adjust your test effort to stay focused on the current crop.

And in “Risk-Based Testing: Some Basic Concepts“, by Cem Kaner (2008), the organizational benefits as well as the benefits of driving testing from risks is discussed in detail based on the following three top-level bullets:

  • Risk-based test prioritization: Evaluate each area of a product and allocate time/money according to perceived risk.
  • Risk-based test lobbying: Use information about risk to justify requests for more time/staff.
  • Risk-based test design: A program is a collection of opportunities for things to go wrong. For each way that you can imagine the program failing, design tests to determine whether the program actually will fail in that way. The most powerful tests are the ones that maximize a program’s opportunity to fail.
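
To make the first of these bullets concrete, here is a small hypothetical sketch (my illustration, not Kaner's): allocating a fixed testing budget across product areas in proportion to their perceived risk values:

```python
def allocate_test_hours(risk_scores: dict[str, int], budget_hours: float) -> dict[str, float]:
    """Split a fixed test budget across areas proportionally to their risk scores."""
    total = sum(risk_scores.values())
    return {area: budget_hours * score / total for area, score in risk_scores.items()}

# Perceived risk per area (e.g., Likelihood x Impact from the analysis step).
perceived_risk = {"checkout": 20, "search": 8, "profile": 4, "admin": 8}

for area, hours in allocate_test_hours(perceived_risk, budget_hours=40).items():
    print(f"{area:<10} {hours:4.1f} h")
```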

This last definition seems to combine my outlines of both Risk-Based Testing and Risk-driven Testing plus a bit more.  I had not remembered to include the risk-based test lobbying ideas in the above table, though I certainly do this in practice.  Looks like a winner of a definition.

It Can Mean a Wor(l)d of Difference

By changing one word, we can change the underlying meaning that is understood by others, for better or for worse.  Will we cause understanding and clarity?  Will we cause disagreement or confusion?  Who does it help?  Will it be just a word of difference or a whole world of difference?

I was already familiar with these definitions from Cem Kaner and James Bach, and this exercise has served to remind me of them.  If I find/found these Risk-based Testing definitions suitable, then why do I like to use Risk-driven Testing?

As mentioned at the beginning, maybe it is the energy, the “marketing” oomph!  Maybe it was to feel alignment with Test-driven Development (TDD), Behaviour-driven Development (BDD), and other xDD’s.  But because I often see that Risk-based Testing is weakly implemented, I think mostly I use it to better remind myself, and my teams, that we are supposed to be using risk as the impetus for our test activities.

This feels like the crux of my real issue: using risk.  Where is the risk identification and risk analysis on most projects?  Sure, many PMs will have a list of risks for the project from a management point of view.  But where is the risk identification and analysis activities that testing can be a part of; so we can better (1) learn about and investigate the system architecture and design, so we can better (2) exercise its functional and non-functional capabilities and search for vulnerabilities, so we can better (3) comment on how risky one thing or another might be to the project, the product, and/or the business.

Maybe I should change the question to something like: “Are we performing Risk-based Testing if we don’t have a list of risks with which testing can engage?”

In my experience, the lack of a formally integrated Risk Management process is quite common for projects, large and small.  In the face of that lack – the lack of risk-related process and artifacts, the lack of risk-related information – can testing be said to be risk-based?

Projects often prioritize testing without risk identification/analysis/mitigation/monitoring data as input.  Someone says this is “new”, “changed”, “complex”, or “critical”.  Common sense then tells you that there is some risk here, and therefore testing should put in some effort to check it out.  But without Risk Management, how do you know how much effort, and with what priority?  These are questions that can’t be answered in a calculated/measured manner without Risk Management.  They can only be answered relatively speaking: this one is big, this one is small – test the big one until you are confident it is fine and/or you run out of time.

It seems that this is a prioritization approach where there are consensus-driven priorities based on some guiding ideas of what has changed and what has had problems in the past which could make things risky.  We are then forced to use these best guesses/inferences to plan our efforts.  Instead of Risk-based Testing, we could call it Commonsense Testing.

The point here is that when projects claim to use Risk-based Testing, many are not using an actual Risk Management process to identify, analyze, mitigate, and monitor a set of risks.  And so, the functionalities being tested are not tied to a specific individual risk or small set of related risks.  BUT, there is some thinking about risk – perhaps this can be considered the beginning; the beginning of Scalable Risk-Based Testing.

Risk-driven Testing: Scalable Risk-based Testing

When discussing the ability and need to scale the rigour of testing and which test planning and execution techniques can be employed, the role of risk will be highly dependent on how formally risks are identified and managed.  Using a scalable approach allows the project team to provide visibility as to what challenges may exist for the test team to be able to inform stakeholders about system quality – resulting in expectations being set on how the test team will be able to maximize their contribution to the project, given its constraints/context. [Ref: Scalable V-Model: An Illustrative Tool for Crafting a Test Approach]

This is a great opportunity for testing to advocate for Risk Management activities across the project (to the benefit of all) and to drive increasing use of Risk-based Test Prioritization, Risk-based Test Design, and Risk-based Test Lobbying.  [Ref: New Project? Shift-Left to Start Right!]

Conclusion

Well, that was a bit of a walkabout and it didn’t finish up where I guessed it would, but thinking or working out loud can (should) be like that.  This was a good confrontation of “my common language” that I have to reconcile when speaking with clients and teams, and it resulted in a reconnection with the source definitions that “drove” my thinking on this topic in the first place.

I was expecting to say something here about how Fancy Chickens all look the same once they have been barbequed (e.g., when the project gets hot)… but maybe the observation at this point is that fanciness (feathers) can often be a cover for the actual value, deceiving us (intentionally or not) into thinking we are getting more than we are (a chicken).

Going forward, I will be looking into more formally capturing details of what I will now refer to as my “Scalable Risk-based Testing” approach, and seeing how I can apply it to that client situation that prompted this whole exercise.

On your own projects, why not think about how you use risk to guide the testing effort and, vice-versa, how you can use testing to help manage (ID/Assess/Mitigate/Monitor) individual risks.

In the meantime, regardless of the labels you give your test approach and project processes: get involved early, investigate and learn, and do “good” for your project/team.

And remember…

It’s the thought that counts – the thought you put into all your test activities.

 


Testing Matters because Quality Matters

In the course of crafting my contribution for Alexandra McPeak‘s follow-up article for CrossBrowserTesting.com / SmartBear Software‘s #WhyTestingMatters thread, “Expert Insight: Why Testing Matters“, I wrote the following article.  Check out Alex’s first article, “Why Testing Matters“, as well for some current examples of quality challenges in the public eye.

There are so many attributes/factors that contribute to a software system or product being “of quality” that typically you have only the resources to make a few stand out.  Those that are emphasized become competitive differentiators – and part of your brand.

Think of any industry.  What is the one word or phrase that describes each name brand in that market space?  Even if those words/phrases are not directly related to an attribute of quality, the lack of certain aspects of quality, competitively speaking, would not be tolerable for long without damaging the brand’s reputation.

But, each of these companies must constantly make trade-offs and compromises in the fight to grow and maintain their market share.  Faster and cheaper are continually at odds with quality, clamouring for sacrifices and shortcuts.  Competition demands it.

Brands can take decades to work on their images, building up their reputations, and one poor decision that leads to unsatisfied customers and bad publicity can potentially lose it all – at least for a time.

“The bitterness of poor quality remains long after
the sweetness of low price is forgotten.”

So, how can your organization walk the precarious tightrope of minimizing time-to-market and maximizing profits while delivering products that are still “good enough” to maintain your image/reputation?

Testing

  • Testing can serve as a trusted advisor and integrated investigator of quality within the organization.
  • Testing can strengthen the focus on each prioritized facet of quality across every phase of each project.
  • Testing can evaluate whether the ‘quality bar‘ required for each phase/release has been achieved.
  • Testing can transform collected data into consumable information to help stakeholders make informed business decisions around quality – like when it is reasonable to release, or not.

You wouldn’t want your brand to become infamous for an unfortunate/faulty decision that could have been prevented by leveraging smarter testing, would you?

Testing matters because it provides critical information needed by your organization and your brand to make insightful business decisions related to your software product or system on the road to quality success.

In other words: Testing matters because quality matters.

 


Confidence’s Role in Software Testing


Confidence – “the feeling or belief that one can rely on someone or something; firm trust.” https://en.oxforddictionaries.com/definition/us/confidence

A few weeks ago I sat down to write about verifying bug fixes. I wanted to determine if there was a systematic approach we could follow when performing this activity. When exploring this approach, I quickly realized confidence’s crucial role in verifying or signing off on any software we test.

Confidence dictates how much testing we feel we need to execute before we can sign off on anything we test. Our current confidence in our development team directly impacts how much test time we will take in order to feel our software is ready for sign off. The historical quality coming out of the development team dictates this level of confidence.

High Confidence – Just the right amount of testing is executed, ensuring the software can be signed off. (Note: this does not apply to mission-critical software systems.)

Low Confidence – Based on historically bad code quality, testers may over-test even when current code quality is good.

I believe this confidence level greatly impacts the speed at which we develop software. We might hear “QA is a bottleneck”, but this is potentially due to historically low-quality code causing testers to over-test even when good quality code is being verified.

To illustrate this point further, see the approach below that I came up with to test and ultimately verify bug fixes.

Example: A Mobile App Which Requires Users to Login

Imagine we have a mobile app which requires users to log in.

The fictitious bug we will be verifying is the following:

Title: Login Screen – App crashes after tapping login button.

Preconditions:

  • App is freshly installed.

Steps to Reproduce:

  1. Launch the app and then proceed to the login screen.
  2. Enter a valid existing email and password.
  3. Tap the “Login” button.

Result:

  • App crashes.

Before Verification Begins

Once a bug is marked fixed, it’s important we gain more understanding about it before starting to verify the fix. To do this, we ask the following questions of the developer who implemented the fix:

  • What was the underlying issue?
  • What caused this issue?
  • How was the issue fixed?
  • What other areas of the software could be impacted with this change?
  • What file was changed?
  • How confident is the developer in the fix? Do they seem certain? Even this can somewhat impact how we test.

* Special Note: Remember, we need to gain context from the developer, but as a tester you’re not taking direction on exactly what to verify – that is your role as a tester. Of course, if a developer suggests testing something in a certain way you can, but it’s your role as an experienced tester to use your own mind to test the fix.

Now that we have gained a full understanding of how the bug was fixed, let us start by verifying at the primary fault point (the exact steps listed in the original bug write-up). Below are high-level verification/test ideas, starting from very specific checks and working outwards like the layers of an onion. Notice that as we execute more tests and move away from the primary fault point, our confidence level in the fix increases.

Test Pass 1

  • Exact Software State: Follow the exact “Preconditions”. In this case, “App is freshly installed”.
  • Exact Input: Follow the exact steps listed in the bug’s “Steps to Reproduce”.
  • Verify the app no longer crashes.
  • We could stop here, but we would not have full confidence that the bug is fully fixed and that we haven’t introduced new knock-on bugs.

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 2

  • Varied State: App is not freshly installed, but the user is logged out.
  • Exact Input: Follow the exact steps listed in the bug’s “Steps to Reproduce”.
  • Verify the app does not crash.

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 3

  • Varying State: After logging out / after restarting the app and clearing app data.
  • Varying Input: Missing credentials / invalid credentials.
  • Verify no unexpected behavior.

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 4

Test features/functions around login such as:

  • Forgot Password
  • Sign Up

Moving another layer away from the fault: Our confidence in the fix is increasing

Test Pass 5

Moving one final layer away from the fix, we enter a phase of testing which includes more outside-the-box tests (note: I love this type of testing as it’s very creative), such as:

  • Interruption testing – Placing app into the background directly after tapping the login button.
  • Network fluctuations – Altering connection while login is taking place.
  • Timing issues – Running around the UI, interacting with elements at an unnatural speed. Example – rapidly tapping the login button, then the back button, then the login button.
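
Once passes like these stabilize, their layered structure maps naturally onto parameterized automated checks. Here is a hypothetical pytest sketch – the FakeApp driver below is an invented stand-in for a real app driver (e.g., an Appium session):

```python
import pytest

class FakeApp:
    """Invented stand-in for a real app driver, for illustration only."""
    def __init__(self, fresh_install: bool):
        self.fresh_install = fresh_install
        self.running = True       # the fixed build should never crash on login
        self.error_shown = False

    def login(self, email: str, password: str) -> None:
        # Bad credentials should surface an error message, not a crash.
        if not email or password != "correct-password":
            self.error_shown = True

VALID = ("user@example.com", "correct-password")

# Passes 1 and 2: exact input, exact vs. varied install state.
@pytest.mark.parametrize("fresh_install", [True, False])
def test_login_does_not_crash(fresh_install):
    app = FakeApp(fresh_install=fresh_install)
    app.login(*VALID)
    assert app.running, "App crashed after tapping Login"

# Pass 3: varied input (missing/invalid credentials), state not fresh.
@pytest.mark.parametrize("email,password", [("", ""), ("user@example.com", "wrong")])
def test_login_invalid_inputs_show_error(email, password):
    app = FakeApp(fresh_install=False)
    app.login(email, password)
    assert app.running and app.error_shown
```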

At this point, our historic confidence plays a role in whether we continue to test or feel the bug is fixed. If QA’s confidence is low, we could end up spending too much time testing in this final pass with little to show for our efforts.

How is Confidence Lowered?

  • Initial code quality signed off by development is low. As testers, when we begin testing a fix which has been signed off as ready for testing, we will often gauge its quality by how quickly we discover a bug which will need fixing.
  • Repeated low-quality deliveries out of development can make testers over-test – correctly, because it’s necessary. If bugs are routinely found very quickly in the software we test, we naturally become skittish about signing off future high-quality work.

This can lead to over-testing even when the code is delivered in a high quality state. This over-testing won’t provide anything of value. Don’t get me wrong, you will find bugs, but they might end up being more nice-to-know-about than must-fix issues. All software releases have bugs. It’s our job to identify the high-value defects which threaten the quality of our solutions.

How Can We Boost Our Confidence?

I believe we can’t perform “just-right” testing unless our confidence in our development teams is reasonably high. We need to make sure baseline quality is established before any “just-right” manual testing can take place. How do we do this?

  1. Test automation is a perfect mechanism to establish a quality baseline – “checking” to ensure all basic functions are working as expected (see the sketch after this list).
  2. Shift left into the trench and work with developers as they implement a feature, so you can ensure initial quality out of development is higher.
  3. Measure your testing efforts to ensure you’re not over-testing. Learn to know that sweet spot of just enough testing.
  4. Expose low quality areas – retrospectives are ideal places to bring up quality issues with the larger team. Let them know you don’t have confidence and need something to change to boost it back up.
  5. Slow down – Oh no, we can’t do that, right? Yes, we can, and we should slow down if our confidence is low.
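
For point 1, the baseline can be as simple as a smoke suite that “checks” a few core functions on every build. A minimal sketch, assuming hypothetical staging URLs:

```python
# Minimal smoke-check sketch for a quality baseline (hypothetical endpoints).
import urllib.request

SMOKE_CHECKS = {
    "health": "https://staging.example.com/healthz",
    "login":  "https://staging.example.com/login",
    "search": "https://staging.example.com/search?q=test",
}

def run_smoke_suite() -> bool:
    all_passed = True
    for name, url in SMOKE_CHECKS.items():
        try:
            passed = urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:  # covers URLError/HTTPError and network failures
            passed = False
        print(f"{name:<8} {'PASS' if passed else 'FAIL'}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if run_smoke_suite() else 1)
```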

If you hear things like “QA is a bottleneck” in your organization, you might want to look at the code quality historically coming out of your development team. It’s possible your QA groups are testing endlessly because they lack confidence in the work coming from the development team and feel they have to test further. It can be difficult for QA to shift or stop testing given a negative track record and low confidence in their teams.

If your code quality is poor, QA’s confidence in Development will be low, and then QA will always be a bottleneck.

Think about it 🙂


Better Test Reporting – Data-Driven Storytelling

Testers have a lot of project “health” data at their fingertips – data collected from others in order to perform testing, and data generated by testing itself. Sometimes test reporting gets stuck on simply communicating this data, these facts. But if we simply report the facts without an accompanying story to give context and meaning, there is no insight – the insight needed to make decisions.

Better Test Reporting - Data to Information to Insight

With all the data we have close to hand, testing is in a great position to integrate data-driven storytelling into the various mediums of our test reporting.

“Stories package information into a structure that is easily remembered which is important in many collaborative scenarios when an analyst is not the same person as the one who makes decisions, or simply needs to share information with peers.” – Jim Stikeleather, The Three Elements of Successful Data Visualizations

“No matter how impressive your analysis is, or how high-quality your data are, you’re not going to compel change unless the stakeholders for your work understand what you have done. That may require a visual story or a narrative one, but it does require a story.” – Tom Davenport, Why Data Storytelling Is So Important—And Why We’re So Bad At It

This enhanced reporting would better support the stakeholders with relevant, curated information that they need to make the decisions necessary for the success of the project, and the business as a whole.

Not Your Typical Test Report…Please!

When thinking of test reporting, perhaps we think of a weekly status report or of a real-time project dashboard?

Often, these types of reporting tend to emphasize tables of numbers and simple charts, and rarely contain any contextual story. E.g.: time to do the test report? Let me run a few queries on the bug database and update a list/table/graph, or two.

We need to thoughtfully consider:

  • What information should our test reporting include?
  • What questions should it really be answering?
  • What message is it supposed to be delivering?

If we answered the following questions with just data, would we gain any real insights?

  • Question: Is testing progressing as expected? → Data provided: # of test cases written
  • Question: Do we have good quality? → Data provided: # of open bugs
  • Question: Are we ready for release? → Data provided: # of test cases run

Obviously, these answers are far too limited, and that is the point. Any single fact, or collection of standalone facts, will be typically insufficient to let us reasonably make a decision that has the true success of the project at heart. [Ref: Metrics – Thinking In N-Dimensions]

To find connections and enable insights, first think about what audience(s) we could support with our data in terms of these broad core questions:

  • How are we doing? (Status)
  • What has gone wrong? (Issues)
  • What could go wrong? (Risks)
  • How can we improve?

Then we tailor our data-driven storytelling with a message for each audience to facilitate insight that will be specifically of value to them.

Test Reporting: Data vs. Information

An important distinction to make when thinking about increasing the value of test reporting is the difference between data and information:

  • Data: Data can be defined as a representation of facts, concepts or instructions in a formalized manner which should be suitable for communication, interpretation, or processing by human or electronic machine.
  • Information: Information is organised or classified data which has some meaningful values for the receiver. Information is the processed data on which decisions and actions are based.

Computer – Data and Information, Tutorials Point

Data is not information – yet. Data provides the building blocks from which we construct information. When we transform data, through analysis and interpretation, into information that is consumable by the target audience, we dramatically increase the usefulness of that data.

For example:

“Here is real-time satellite imagery of cloud cover for our province…”
“Look at all those clouds coming!”
versus…
“This is a prediction that our city will get heavy snowfall starting at about 8:30pm tomorrow night…”
“We better go buy groceries and a snow shovel!”

Or in the case of testing:

“Here is a listing of all the bugs found by module with the date found and a link to the associated release notes…”
“That is a lot of bugs!”
versus…
“This analysis seems to show that each time Module Y was modified as part of a release the bug count tended to spike…”
“Let’s have someone look into that!”

Through consumable information, we can help provide the opportunity for insights, but information is not insight itself. The audience has to “see” the insight within the information. We can only try to present the information (via whatever mediums) in a way we hope will encourage these realizations, for ourselves and others.
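
As a sketch of that kind of transformation (hypothetical data, using pandas), here is how the Module Y observation above might be surfaced from a raw bug list:

```python
import pandas as pd

# Raw data: one row per bug found (hypothetical records).
bugs = pd.DataFrame({
    "release": ["1.0", "1.0", "1.1", "1.1", "1.1", "1.2"],
    "module":  ["X",   "Y",   "Y",   "Y",   "X",   "Y"],
})
# Which (release, module) combinations involved code changes to that module.
modified = {("1.0", "Y"), ("1.1", "Y"), ("1.2", "Y")}

# Information: bug counts per module per release, flagged where the module changed.
counts = bugs.groupby(["release", "module"]).size().rename("bug_count").reset_index()
counts["module_modified"] = [
    (rel, mod) in modified for rel, mod in zip(counts["release"], counts["module"])
]
print(counts)  # a reader can now ask whether changes to Y correlate with bug spikes
```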

From Data to Decision

Once data is analyzed – for trends, for correlations with other data, etc. – plans, choices, and decisions can be made with this information.

The following illustrates the path data takes to informing decisions:


Figure 1: Test Reporting Data Path to Decisions

What data we are collecting, and why, should be firmly thought out. And then, don’t just report the numbers. Look at each testing activity and see how it can generate information that is useful and practical as input to the decisions that need to be made throughout the project.

  1. Data: <we’ll come back to this>
  2. Consumable Information: Testing takes the collected data and analyzes it for trends, correlations, etc. and reports it in a consumable manner to the target audience(s).
  3. Proposed Options: The data-driven story provided is then used to produce recommendations, options, and/or next steps for consideration by stakeholders.
  4. Discuss & Challenge: The proposed options are circulated to the stakeholders and through review and discussion, plans can be challenged and negotiated.
  5. Feedback Loop: These discussions and challenges will likely lead to questions and the need for clarifications and additional context, which can then send the process back to the datastore.
  6. Decisions Made: Once agreements are reached and the plans have been finalized, decisions have been made.

Of course, testing is not the sole party involved in driving this process. Testing’s specific involvement could stop at any step. However, instead of always stopping at step one with 1-dimensional test reporting, testing could make use of the data collected to move further along the path and to tell a more meaning-filled multi-dimensional story to a more diverse audience of stakeholders, more often.

Better Data – Better Decisions

In this way, test reporting can help the project much more than it does when it just reports “there are 7 severe bugs still open”.

This is because our choices typically are not binary. We do not decide:

  • Do we fix all the bugs we find?
  • Do we find bugs or prevent bugs?
  • Do we automate all the testing?
  • Do we write a unit test for everything?

We decide to what degree we will do an activity. We decide how much we should be investing in a given activity, practice, or tool.

This is where the first item in the list just above, data, comes in. Data lets us find out what trade-offs with other project investments we will have to make to gain new benefits. Data is the raw material that leads to insight.

So, in order to have “better test reporting” we need to make sure that we know what we need insight about, collect the supporting data accordingly, report the data-driven story, and then follow the path to better decision-making.

Better Data
Better Information
Better Decisions


Posted in  All, Other, Planning for Quality, Test Planning & Strategy | Tagged , , , , , , , , | Leave a comment

The Calculus of Acceptance Testing

It’s tempting to believe that acceptance testing is the straightforward process of comparing the as-built solution to the as-required solution. Any differences discovered during testing are highlighted in defect reports and ultimately resolved through a defect resolution process. You’re done when the business has tested everything and when enough of the differences have gone away for the business to believe that they could use the solution as part of their regular business activities.

It’s tempting to believe this statement because it’s simple and it seems reasonable. Looking closely, however, it’s an over-simplification that relies on three assumptions that are difficult and complex to uphold.

First, the assumption that there is agreement on what the “as-required solution” really is. The various people involved may not share the same mental model of the business problem, the solution to that problem, or the implementation of that solution. There may be a document that represents one point of view, or even a shared point of view, but even then it only represents a point in time – when the document was published. In a large enough management context, where multiple workgroups interact with the solution on a daily basis, the needs and wants of the various parties may be in conflict. In addition, business and technical people will come and go on and off the project. This too leaves gaps in the collective understanding.

Second, the assumption that the business testers can test everything. Defining ‘everything’ is an art in itself. There are story maps, use case models, system integration diagrams, and even the Unified Modeling Language to help define ‘everything’. Comprehending ‘everything’ is a huge undertaking, even when it’s done wisely so that the essential models are identified organically and then grown, not drawn. Finding the “just enough” testing that gets you to the point of acceptance – by that standard – is a black art. It’s a calculus of people’s perspectives, beliefs, and biases – because it’s people that accept the functionality – and of technical perspectives, beliefs, and biases – because there are technical elements of testing. Even acceptance testing.

Third, the assumption that the target of the test is the software. In reality, the target of acceptance testing is the intersection of software (multiple layers and/or participating components), business process (new, changed, and unchanged), and people’s adoption skills. To make this even more complex, consider that the business testers are often not the only users; sometimes they represent a workgroup with a wide range of technology adoption skills. So they’re not testing at this critical intersection with solely their own needs in mind – they have to consider what other solution adopters might experience using the software.

For these and other reasons that I will explore in this blog, acceptance isn’t an event as much as a process, and acceptance testing isn’t about software quality as much as it is about solution adoptability. Of course those two things are related because you can’t address adoptability without quality. The core of the calculation is a gambit – spend the time assessing quality and hope for the best on adoptability, or spend the time exploring that intersection mentioned above – software, business process, and the likelihood of adoption.

That puts a different spin on the term “acceptance testing”. Instead of evaluating software against requirements, what we do in the last moments before go-live is test acceptance. Acceptance testing.

Posted in  All, Planning for Quality, Test Planning & Strategy | Tagged , , , | Leave a comment

Augmenting Testing in your Agile Team: A Success Story

One of the facts of life about Agile is that remote resources, when you have a mostly collocated team, generally end up feeling a little left out in the cold.  Yet, with appropriately leveraged tools, sufficient facilitation, management support and strong team buy-in, it can end up being a very successful arrangement.

Augmenting Testing in your Agile Team: A team with remote contributors

Figure 1: A team with remote contributors

There is an implementation model that lends itself more naturally to adding testing resources, or a testing team, to your delivery life cycle.  Rather than embedding your resources, you can find ways to work with the teams in parallel, augmenting their capabilities and efforts in order to achieve greater success.   In this article, we’ll look at a particular case where PQA Testing implemented an augmenting strategy to tackle regression and System Integration Testing (SIT).

Recently we were working with a company that delivers a complex retail management product to assorted third-party vendors.  Features were created, tested, and marked ready for release by functionally targeted Agile teams.  Coming out of a sprint wasn’t the last step before a feature was released, however.  Due to the complexity of the product, the environments, the systems controlled directly by the third-party vendors, and the systems controlled indirectly through those vendors, System Integration Test (SIT) and User Acceptance Test (UAT) cycles were necessary.

The original intent, when our client went Agile, was to be able to continue to support these relationships through the Agile teams.  What soon became evident was that the amount of regression testing in the SIT environments required for the new features was overwhelming to the testing resources dedicated to a feature team.

Augmenting Testing in your Agile Team: A mixed team with internal and external resources

Figure 2: A mixed team with internal and external resources

Additionally, as multiple environments and numerous stakeholders from various vendors, each with their own environments, were introduced, communication and the coordination of environments and testing became much more complex and time-consuming.  Defects found in SIT testing needed to be triaged and coordinated with the issues raised by other vendors, and then tracked as they moved through the different teams and vendors to their resolution.

As the testing resources on each team focused more on their functional area, their knowledge became more and more specialized and they were no longer the “go-to” resource for questions that might span the entire domain. With this specialization, testers were no longer collecting as much domain knowledge. Additionally, while automation was an integrated part of the company’s solution, test automators were also embedded in the Agile teams.  This changed the focus of automation; it slowly drifted away from providing benefits at the end-to-end integration testing level.

When we began the engagement with this client, they were succeeding from release to release, but not at optimum levels of quality, or to vendor satisfaction.   They were borrowing resources from multiple Agile teams and sometimes breaking sprints to ensure that the release could get through the SIT cycle within the specified time frame.  As we do on every PQA Testing engagement, we began by learning the existing process, how the software worked, and the entire domain.  Before long, we took over regression testing for the releases.  Our focus then became making sure that the existing functionality remained stable and clean, and that the new features integrated well into the system.

The testing team is now a separate team that is semi-integrated with the existing teams.  We transition knowledge back and forth, but there is a clear division of responsibilities between new-feature testing and regression/SIT testing.   As we took over these testing responsibilities, we also took over communication and facilitation between the core vendor and our client for releases and testing.  An automation resource is also able to work through the tests from the big-picture integration perspective, and is reducing the amount of manual testing that is necessary.  Increasing our documented domain knowledge is making it easier to scale the team as necessary during busy times and releases.

Augmenting Testing in your Agile Team: An internal team augmented with a remote team

Figure 3: An internal team augmented with a remote team

Taking over these requirements with a dedicated team has greatly improved the feedback coming from the vendors.  The Agile teams have more focus on their core deliverables.  Integrating remotely with the client’s teams has worked well because we don’t have to constantly interact face-to-face to show value in our work.  We are simply another team trying to move the ball forward for the company, just like everyone else.

Remote testing teams dedicated to ownership of specific testing functions can remove many of the obstacles of testing remotely in an Agile environment and, in this case, better ensure quality for the end user.

Posted in  All, Agile Testing, Business of Testing | Tagged , , , , , | Leave a comment

8 Test Automation Tips for Project Managers

Software testing has always faced large volumes of work and short timeframes. To get the most value for your testing dollars, test automation is typically a critical component. However, many teams have attempted to add test automation to their projects with mixed results.

To help increase the likelihood of success, approach automation from the practical perspective that automating testing effectively is not easy.

Here are 8 test automation tips for project managers.

1. Decide Your Test Automation Objectives Early

Automation is a method of testing, not a type. Therefore, automation should be applied to those tests from the overall test plan where there is a clear benefit to doing so. Before starting, ensure that the benefits of test automation match your objectives. For example, do you want to:

  • Discover defects earlier?
  • Increase test availability (rapid and unattended)?
  • Extend test capability and coverage?
  • Free-up manual testers?

2. Carefully Select your Test Automation Tools / Languages

There are many options and possible combinations of tools and scripting languages. Take some time to review the options and find the best fit for your project: confirm the technology fits with your project, look for a skill-requirement match with your team, check that you can integrate with your test management and defect tracking tools, etc. Then try before you buy, e.g. perform a proof of concept, perhaps using your smoke tests, as in the sketch below.
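A proof of concept can be as small as re-implementing two or three of your existing smoke tests in the candidate stack. Here is a minimal sketch in Python with pytest-style tests and requests; the base URL and endpoints are made-up placeholders, not a real system:

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical environment under test

def test_application_is_up():
    # Smoke check: the landing page responds at all.
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200

def test_login_rejects_bad_credentials():
    # Smoke check: a core workflow behaves sanely end to end.
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"user": "nobody", "password": "wrong"},
        timeout=10,
    )
    assert response.status_code in (401, 403)
```

If a candidate tool cannot express even these comfortably, or cannot feed their results into your test management and defect tracking tools, you have learned that cheaply.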

3. Control Scope and Manage Expectations

When starting a new test automation effort, there is often the tendency to jump in and immediately start automating test cases. To avoid this pitfall, it is important to treat the automation effort as a real project in and of itself.

  • Derive requirements from the objectives
  • Ensure the scope is achievable
  • Define an implementation plan (linked to milestones of the actual project)
  • Secure resources and infrastructure
  • Track it

Not only will this help ensure the success of the effort, but it will allow you to communicate with other stakeholders what will be automated, how long it will take, and the short and long-term benefits that are expected.

4. Use an Agile Approach

Following an Agile approach, you can roll out your test automation rapidly in useful pieces, making progress visible and benefits accessible as early as possible. This will give you the ability to validate your approaches while demonstrating the value of the test automation in a tight feedback cycle.

5. Scripts are Software

You are writing code. The same good practices that you follow on the actual project should be followed here: coding standards, version control, modular data-driven architecture, error handling and recovery, etc. And, like any other code, it needs to be reviewed and tested.
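As one illustration of these practices, here is a hedged sketch in Python: test data externalized to a CSV file, a reusable helper module, and error handling that keeps one bad case from aborting the run. The place_order helper and the orders.csv layout are hypothetical.

```python
import csv
import logging

log = logging.getLogger("order_tests")

def place_order(sku: str, quantity: int) -> str:
    """Hypothetical reusable module wrapping the application's order API."""
    ...  # call the application under test here
    return "CONFIRMED"

def run_order_tests(data_file: str = "orders.csv") -> bool:
    """Data-driven: each CSV row (sku, quantity, expected) is one test case."""
    passed = True
    with open(data_file, newline="") as handle:
        for row in csv.DictReader(handle):
            try:
                status = place_order(row["sku"], int(row["quantity"]))
                if status != row["expected"]:
                    log.error("FAIL %s: got %s, expected %s",
                              row["sku"], status, row["expected"])
                    passed = False
            except Exception:
                # Error handling and recovery: log the failure and move on
                # so a single broken case does not abort the whole suite.
                log.exception("ERROR while ordering %s", row["sku"])
                passed = False
    return passed
```

Like any other code, this belongs in version control and deserves review and testing of its own.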

6. Use Well Designed Test Cases and Test Data

Garbage in, garbage out. Make sure you have a set of test cases that have been carefully selected to best address your objectives. It is important to design these test cases using reusable modules or building-blocks that can be leveraged across the various scenarios. Additionally, these test cases should be documented in a standardized way to make them easier to add to the automated test suite. This is especially important if you envision using non-technical testers or business users to add tests to the repository, using a keyword driven or similar approach to your automation.
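One common way to realize this is a keyword-driven layer: reusable building blocks written once by the automation team, and test cases authored as rows of keywords plus arguments, e.g. in a spreadsheet. A minimal sketch in Python follows; every keyword and step below is illustrative, not taken from a real suite:

```python
# Reusable building blocks, written once by the automation team.
def open_page(url): print(f"opening {url}")
def enter_text(field, value): print(f"typing {value!r} into {field}")
def click(button): print(f"clicking {button}")
def verify_text(expected): print(f"verifying page shows {expected!r}")

KEYWORDS = {
    "open_page": open_page,
    "enter_text": enter_text,
    "click": click,
    "verify_text": verify_text,
}

# A test case authored as data -- the rows a non-technical tester could write.
login_test = [
    ("open_page", ["https://example.com/login"]),
    ("enter_text", ["username", "qa_user"]),
    ("enter_text", ["password", "secret"]),
    ("click", ["Sign in"]),
    ("verify_text", ["Welcome, qa_user"]),
]

def run(test_case):
    for keyword, args in test_case:
        KEYWORDS[keyword](*args)

run(login_test)
```

Because the building blocks are shared, a new scenario costs a few new rows rather than a new script.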

7. Get the Test Results

Providing test results and defect reports quickly is the most important reason for test automation. Each time you run the automated tests, you are reaping the benefits that automation provides. For example, running the test automation in its own environment as part of the continuous integration process will surface issues with the application under test as soon as features and fixes are checked in.
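As a sketch of that wiring, the CI job can simply run the suite on every check-in and publish machine-readable results; the paths and report name below are assumptions:

```python
import subprocess
import sys

# Run the automated suite on every check-in. --junitxml is a standard
# pytest flag that writes results in a format most CI servers can
# display and trend over time.
result = subprocess.run(
    ["pytest", "tests/", "--junitxml=reports/results.xml"]
)

# A non-zero exit code fails the build, so a broken feature or fix is
# flagged within minutes of being checked in.
sys.exit(result.returncode)
```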

8. Maintain and Enhance

Investing in automation requires a significant commitment in the short-term and the long-term for there to be maximum success. For as long as the product that is being automated is maintained and enhanced, the automation suite should be similarly maintained and enhanced. If the test automation solution is well-designed and kept up-to-date with a set of useful tests, it will provide value for years.

Posted in  All, Automation & Tools, Planning for Quality | Tagged , , , , , , , , , | Leave a comment

Software Testing Guiding Principles

All effective test teams typically have well-defined processes, appropriate tools, and resources with a variety of skills. However, teams cannot be successful if they place 100% dependency on the documented processes; doing so leads to conflicts, especially when testers use these processes as ‘shields’ or ‘crutches’.

To be successful, test teams need to leverage their processes as tools towards becoming “IT” teams. And by “IT” I do not mean Information Technology.

IT (Intelligent Testing) teams apply guiding
principles to ensure that the most cost-effective
test solution is provided at all times

This posting provides a look into the “guiding principles” I’ve found useful in helping the testers I’ve worked with become highly effective and valued members of a product development organization.

Attitude is Everything

The success you experience as a tester depends 100% on your attitude.

A non-collaborative attitude will lead to
conflict, limit the success of the test team and
ultimately undermine the success of the
entire organization.

Testers must:

  • Learn to recognize challenges being faced by the team and to work collaboratively to solve problems
  • As stated by Stephen Covey – “Think Win-Win”
  • Lead by example and inspire others. A collaborative attitude will pay dividends and improve the working relationship for the entire organization, especially when the team is stressed and under pressure.

Quality is Job # 1

This one borrowed from Ford Motor Company.

Testing, also known as Quality Control, exists to implement an organization’s Quality Assurance Program. As such, testers are seen as the “last line of defense” and play a vital role in the success of the business.

Poor quality leads to unhappy customers and eventually the loss of those customers, which then adversely impacts business revenue.

Testers are ultimately focused on ensuring the
positive experience of the customer using the
product or service.

Communication is King

Testers should strive to be superior communicators, as ineffective communication leads to confusion and reflects poorly on the entire team.

The test team will be judged by the quality of their work, which comes in the form of:

  • Test Plans
  • Test Cases
  • Defect Reports
  • Status Reports
  • Emails
  • Presentations

Learn how to communicate clearly, concisely
and completely.

Know Your Customer

Like it or not, testing is ‘service-based’ and delivers services related to the organization’s Quality Assurance Program. For example: test planning, preparation, and execution services on behalf of an R&D team (i.e. an internal customer).

Understanding the needs and priorities of the
internal customer will help to ensure a positive
and successful test engagement.

Test Engineering also represents the external customer (i.e. the user of the product or service being developed). Understanding the external customer will help to improve the quality of the testing and, ultimately, the quality of the product.

Without understanding the external customer
it is not possible to effectively plan and implement
a cost effective testing program.

Ambiguity is Our Enemy

This basically means “never assume”: clarify whenever there is uncertainty.

Making assumptions about a product’s features and functionality, schedules, and so on will lead to a variety of issues:

  • Missed expectations
  • Test escapes – Customer Reported Defects
  • A poor reflection on the professionalism of the Test Engineering team

Testers must avoid ambiguity in the documentation that they create so as to not confuse others.

Data! Data! Data!

Test teams ‘live and breathe’ data. They consume data and they create data.

Data provided from other teams is used to make intelligent decisions:

  • Requirements
  • Specifications
  • Schemas
  • Schedules
  • Etc

Data generated by the test program is used to assist with making decisions on the quality of the product:

  • Requirements coverage
  • Testing progress
  • Defect status
  • Defect arrival / closure rates (see the sketch below)

The fidelity and timeliness of the data collected
is critical to the success of the entire
organization.
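
For example, defect arrival and closure rates fall straight out of the tracker’s raw records. A minimal sketch in Python, with the record fields invented for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical records pulled from the defect tracker.
defects = [
    {"id": 1, "opened": date(2024, 5, 6), "closed": date(2024, 5, 9)},
    {"id": 2, "opened": date(2024, 5, 7), "closed": None},  # still open
    {"id": 3, "opened": date(2024, 5, 7), "closed": date(2024, 5, 13)},
]

# Arrival rate: defects opened per ISO week.
arrivals = Counter(d["opened"].isocalendar()[1] for d in defects)

# Closure rate: defects closed per ISO week.
closures = Counter(d["closed"].isocalendar()[1] for d in defects if d["closed"])

for week in sorted(set(arrivals) | set(closures)):
    print(f"week {week}: opened {arrivals[week]}, closed {closures[week]}")
```

If the arrival rate keeps outpacing the closure rate as a release approaches, that is information the whole organization needs promptly.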

Trust Facts – Question Assumptions

Related to the principle of avoiding ambiguity, test teams must never make assumptions, as doing so can have a significant impact on the entire business.

Testers must:

  • Work with the cross-functional team to address issues with requirements, user stories, etc
  • Clarify schedules / expectations when in doubt
  • Leverage test documentation (e.g. Test Plan) to articulate and set expectations with respect to the test program
  • Track / manage outstanding issues until they are resolved

Be as ‘surgical’ as necessary to ensure quality
issues are not propagated to later phases of
the product life-cycle

Think Innovation

Regardless of the role you play, every member of the test team can make a difference.

  • Improvement ideas should be socialized, shared and investigated
  • Small changes can make a huge difference to the team and the organization

Innovations that can benefit the Test or Quality Assurance Program are always welcome.

  • Tweaks to processes, templates, workflows
  • Enhancements to tools
  • Advancements in automation techniques, tools, etc

Remember, the team is always looking for ways to increase effectiveness and make the most of the limited Test Engineering budget.

Strive to be “Solution Oriented”

Process for Structure – Not Restrictions

Some will ask, “What do you mean, processes do not restrict?” On the surface it may appear as if process does in fact restrict the team; however, if you dig deeper you will discover that documented processes help by:

  • Improving communications through establishing consistency between deliverables and interactions between teams
  • Making it clear to all ‘stakeholders’ what to expect at any given point of time in the product life-cycle
  • Providing tools that can be used to train new members of the team

Documented processes are not intended to limit
creativity. If the process is not working –
Change the Process

  • Augment existing templates if it will enhance the value of the testing program; however, be sure to follow appropriate Change Management processes when introducing an update that may impact large numbers of people.
  • Document and obtain approvals for deviations/exceptions if the value of completing certain aspects of the process has been assessed as non-essential for a program / project.

Plan Wisely

A well thought out and documented plan is worth its weight in gold. The documented plan is the primary tool used by all the stakeholders to set expectations.

“If you fail to plan you plan to fail”

Plan as if the money you are spending is your own. There is a limited budget for testing and it is your responsibility to ensure the effectiveness of the Test Program such that it provides the highest ROI (Return on Investment).

Identify Priorities

Make “First Things First” (Stephen Covey)

Unless you are absolutely clear on the priorities, it will not be possible to effectively plan and/or execute a successful Test Program.

It is not possible for an individual, or a team, to have two number-one priorities.  Although it is possible to make progress on multiple initiatives, it is not possible for an individual to complete multiple initiatives at the exact same time. Schedules, milestones, capacity plans, etc. should all reflect the priorities.

Always ensure priorities are in alignment with
the expectations of all stakeholders

At the end of the day the most important Software Test Principle is “If you do not know – ASK”. Testers are expected to ask questions until they are confident that they have the information needed to effectively plan, prepare and execute an effective Test Program.

Just remember, unanswered questions contribute to ambiguity and add risk to the business.

Posted in  All, Business of Testing, Planning for Quality | Tagged , , , , , , | Leave a comment