Quality – Epiphany of Value, Purpose, and Function

The article by Brooks, referenced in View from the Q, uses terms like conversion and epiphany to describe the recognition and adoption of quality.  Deming is compared to the famous evangelist, Billy Graham.

My own conversion was at the feet of the Billy Graham of quality, Dr. W. Edwards Deming. I had the great good fortune to attend six of his four-day seminars during the final years of his life, and even had some brief conversations with him. My conversion was literally an epiphany.

In this context, an epiphany can also refer to secular discoveries, e.g., Pythagoras and the 47th problem of Euclid (the sum of the squares on the sides of a right-angled triangle equals the square on the hypotenuse, as in the 3-4-5 right triangle), or Archimedes measuring volume by water displacement in the bathtub, at which he cried “Eureka!”

A Quality Epiphany or “Eureka Moment” can be generated consistently in three ways:

  • Demonstrating cost reductions by controlling losses and penalties through compliance with requirements, regulations, and customer specifications.
  • Projecting revenue expansion by increasing business opportunities and entry into new markets that demand higher levels of quality assurance and performance.
  • Realizing improved operational efficiency and predictability by optimizing methods and practices to reduce waste and increase capacity with existing resources.

How do we explain this in a simple and memorable manner?  I have an example below:

In response to the latest View from the Q posting, I wanted to provide a very simple and timeless explanation of Quality, from which more detailed explanations can be built to more fully understand our profession.

[Image: Q Point Cycle]

To summarize:

  • The PDCA (Plan, Do, Check, Act) cycle is the core of all activities related to Quality, with the Quality function at the center point of all activities.
  • The scope of the PDCA cycle (represented by the area within the circle) reflects the domain of the Quality function, and its influence within its parent organization.
  • The repetitive and continuous PDCA cycle (originating from Shewhart and popularized by Deming) reinforces the constant and perpetual progression from uncontrolled chaos to predictable ideals.
  • The two parallel lines bordering the circle have different meanings.  The left line represents the minimum levels of compliance, conformity, and “good enough” quality.  The right line represents the desired state of having quality-driven breakthroughs, innovations, and extensions to the solution.
  • The role and value of the Quality function can be shown visually in this context, as a way to continually perform PDCA-related actions to not only guarantee compliance and conformity, but to advance the organization toward competitive advantages.
  • Imagine that the PDCA cycle is not only in continual motion, but is progressing along a maturity track from the boundary of minimal compliance toward the infinite possibilities of unlimited innovation and extensions.  Consider the progression of Honda over the last 50 years from scooters and outboard motors to vehicles and aircraft.
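
To make the cycle concrete, here is a minimal sketch in code (my own illustration only – the “maturity” score, the step names, and the improvement loop are assumptions, not part of Shewhart’s or Deming’s formulation): each pass through Plan-Do-Check-Act attempts a small change, checks the result, and standardizes the gain before the next turn of the cycle.

```python
# Toy sketch of a repeating PDCA loop (illustrative only; all names are hypothetical).

def plan(state):
    """Decide what change to attempt, based on the current state."""
    return {"target": state["maturity"] + 1}

def do(change):
    """Carry out the planned change on a small scale."""
    return {"observed": change["target"]}

def check(result, change):
    """Compare observed results against the plan."""
    return result["observed"] >= change["target"]

def act(state, change, success):
    """Standardize the gain if it worked; otherwise keep the old baseline."""
    if success:
        state["maturity"] = change["target"]
    return state

state = {"maturity": 0}          # start near the "minimal compliance" boundary
for cycle in range(5):           # the cycle repeats; in practice it never stops
    change = plan(state)
    result = do(change)
    state = act(state, change, check(result, change))
    print(f"PDCA cycle {cycle + 1}: maturity = {state['maturity']}")
```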

The best way to take this from theory to practice is to demonstrate quality in all of its forms (Lean, Inspection, Process Controls, Management Systems, Risk Management, Business Process Improvement, etc.) and show where value and gains are added for the organization and the people involved.

Let’s Start a Test Revolution

Before I started my career as a tester, I was a scientist doing research in particle physics, and my years in science trained me to be skeptical, inquisitive and open to new approaches. It was the idea of continuous learning and exploration, always working toward the next discovery that attracted me to science in the first place.

I’ve found many parallels between science and testing, but lately I’ve started feeling that maybe we’re getting stagnant. I go to conferences to be inspired and get new ideas, but it’s starting to feel like I’m hearing the same things over and over again. What are the new ideas and trends in testing? What’s a new approach or technique that you’ve tried lately?

I think there is currently a lack of testing innovation, and testing as a profession is not evolving as fast as I would like it to. In this article, I’ll try to frame the problem as I see it, but I’m not going to give you answers or solutions – my goal is actually to raise questions. And hopefully, convince you to help me start a test revolution.

Stating the Problem

We live in a world where Moore’s law1 describes an incredible development of hardware capabilities, but where software also keeps getting more and more complex, and tools and methods are continuously evolving. From a software development perspective the world seems to just keep spinning faster and faster, and in more and more intricate patterns. Is testing really keeping up with the advances of development? Are our testing approaches evolving as quickly as the new technologies, or is testing being left behind, using the same methods and techniques we were using a decade ago?

Thinking Inside the Box

How do you usually approach testing? Like in so many other situations, we often simply revert to doing things the way we did last time. It seemed to work then, and why change something that’s working? We use what we already have in terms of test tools, test environments and test data. How often do we instead question the habits we’ve developed?

Have you ever heard – or said – “it can’t be tested”? If it’s part of the product, and expected to be used, it likely can – and probably should – be tested. But it’s easy to shy away from testing problems where there is no obvious solution and use the excuse that it “can’t be done”.

We sometimes do a great job of limiting ourselves, developing boundaries we decide can’t be crossed and creating a comfortable little testing box that we hide in. Can we lift ourselves out of that box, and should we try to?

The Solution

Testers in general, and agile testers in particular, need to get more innovative and find new ways to test more efficiently and effectively. I think we should start a test revolution! I want testers to be more creative and come up with new ideas, not only at execution time but also when we start planning testing. To become more creative and innovative, testers should ask for input from all team members and try to make the test planning a group effort.

Quality is a Team Effort

Quality is a team effort – it’s everyone’s responsibility. Testing itself cannot increase quality – as a tester I can inform you of potential issues, but I don’t fix them, or even decide if they should be fixed. Testers are not gatekeepers. Testers are carriers of information. Don’t forget – we all have the same objective regardless of role: building a high-quality product! We’re on the same side.

The first thing to do before we start testing is to understand why we are testing – what does quality mean to us?

Is it a product that is:

  • Functional
  • Fast
  • Reliable
  • Compatible
  • …?

Testers don’t get to make that call – it should be a collaborative activity that involves the whole team. The quality criteria that we decide are the most important will then guide the focus of testing.

Next we need to look at the product risks – what are we worried about? What do we think might go wrong? Testing can help us feel more comfortable and confident with our product by mitigating those risks, thereby helping to build a higher-quality product. Everyone needs to contribute to the risk analysis to add his or her own unique perspective. A developer will see different risks than a tester, etc. Risk analysis is a team activity.
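
One simple way to make that team risk analysis concrete (a hedged sketch – the risks, the 1–5 scales, and the likelihood-times-impact scoring are my own illustrative assumptions, not something prescribed here) is to score each risk and let the product of the two numbers guide where testing effort goes first:

```python
# Minimal team risk-scoring sketch (risk names and scales are hypothetical).
# Each team member contributes risks; likelihood and impact use a 1-5 scale.

risks = [
    {"risk": "Payment gateway times out under load", "likelihood": 3, "impact": 5},
    {"risk": "Dates formatted wrong for non-US locales", "likelihood": 4, "impact": 2},
    {"risk": "Report export truncates large data sets", "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]   # crude priority = likelihood x impact

# Highest score first: these areas get testing focus earliest.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>2}  {r['risk']}")
```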

Time to Test

Let’s assume we have a joint understanding of what quality means to us, and we’ve agreed on a set of product risks that we want to mitigate by designing tests that can reveal the corresponding failures. Now what? How do we actually test?

This is where we should try to think more in terms of what we need and not what we have:

  • What tools do we need – not “what tools do we have”
  • How much documentation do we need – not “what documentation do we usually produce”
  • How will we generate test data – not “what data do we have”
  • What test environments do we need – not “what environments do we have”

Is this where the teamwork ends? Not at all! This is where we need to be innovative, and creative, and work together – as a team – to come up with new ideas. If we need a tool that we don’t have – how can we get it? Can we buy it, or build it?
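
“Build it” does not have to mean a big investment. As a hedged example of the “what data do we need” mindset (the record fields below are invented for illustration), a few lines of code can generate fresh test data instead of reusing whatever happens to be lying around:

```python
# Tiny test-data generator sketch (all field names are made up for illustration).
import random
import string

def random_customer():
    """Produce one synthetic customer record for testing."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.capitalize(),
        "email": f"{name}@example.com",
        "age": random.randint(18, 95),                           # include edge-of-range ages
        "balance": round(random.uniform(-500.0, 10_000.0), 2),   # negative balances too
    }

for customer in (random_customer() for _ in range(10)):
    print(customer)
```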

Follow your Nose Testing

Nobel laureate Dr. Michael Smith at UBC advocated “follow-your-nose research” in his field, biotechnology; he was willing to pursue new ideas even if it meant that he had to learn new methods or technologies. Similarly, testers should do “follow-your-nose testing”, exploring new approaches and questioning old habits. “Following your nose” means trusting your own feelings rather than obeying rules and convention or letting yourself be influenced by other people’s opinions.

If the first question is “What do we need to test?” then the second question is “How can we do that?” Sometimes it might not be worth the effort – or even doable – but we need to at least consider it, without letting technical obstacles, accepted so-called “best practices”, or culture stop us from discussing new and potentially conflicting suggestions and ideas. The tester role is not static – we need to continuously update our skills, learn new tools and methodologies. But where and how do we find these new ideas?

Brainstorming

The most common approach to generating innovation and stimulating creativity is brainstorming. There’s only one problem with that. Brainstorming doesn’t work.

Brainstorming is the brainchild of Alex Osborn, who introduced “using the brain to storm a creative problem” in his 1948 book “Your Creative Power”2. The most important rule of a brainstorming session is the absence of criticism and negative feedback, which of course makes it very appealing – it’s nice to get positive reinforcement.

Brainstorming was an immediate hit, and almost 70 years later we’re still using it frequently. However, in 1958 a study at Yale University showed that working individually rather than in brainstorming sessions generated twice as many ideas – brainstorming was less creative!3 This has since been confirmed by numerous scientific studies.

The biggest problem with brainstorming is groupthink – our decision-making is highly influenced by our desire for conformity in the group. We tend to listen to other people’s ideas instead of thinking of our own, which also means that the first ideas mentioned get favoured. Collecting the brainstorming participants’ ideas in advance counters this to some extent, and also helps introverts take a more active part in the discussion.

The original concept of brainstorming also limits debating and criticism, but debate and criticism don’t inhibit ideas: they actually stimulate them. A debate is a formal discussion on a particular topic in which opposing arguments are put forward. The whole idea is that people participating have different opinions. Debating should be done respectfully, but the aim is not to be agreeable. Criticizing is about providing feedback, but doesn’t in itself imply that the feedback is negative. It should be a constructive approach to weighing pros and cons against each other.

Innovating Testing

Innovation is stimulated by a diverse group that debates, criticizes and questions ideas. Make sure to include everyone on the team. Problems need to be approached from different angles. Quality concerns everyone, and remember – we’re a team and we all have the same goal, so we should be cooperating and working toward the same end. That means that everyone has a vested interest in testing. And everyone should have a right to provide input on test planning. Including non-testers means including people who don’t know why something can’t be done. Innovation is about asking “Why?” and “Why not?” Let’s not limit ourselves.

Sometimes innovation requires that you dare to throw everything away and start over. Innovation isn’t always the newest thing: innovation can be using old concepts in new combinations or new ways. Innovation is not always a huge change either – it can be small improvements too. But it is a change. Innovation is part of human nature: we’re all inherently innovative and can learn to be even more innovative. There’s no such thing as “I’m not creative enough”. Curiosity can be trained out of us, but it can also be trained back in. Keep in mind too that innovation doesn’t always lead to better ideas.

We also can’t forget that lifting the thinking outside of the box requires providing an environment where it feels safe to voice ideas that might be considered outrageous, crazy or expensive. Everyone’s opinion counts, and everyone’s opinion is valuable.

Summary

The problem in my mind is two-fold. First, there is what I see as a stagnation, which I would almost describe as compliance. Maybe testing has become compliant? Are we too prone to agreeing and obeying rules and standards, standards that are often forced upon us from outside of the testing community? Part of the solution I believe to be follow-your-nose testing. What I’m asking for is that we try to invent new approaches to testing, or re-invent testing. Is there another way of testing this? Why are we testing it this way? Why are we not testing it this way instead?

Then there is the teamwork aspect. Testing often tends to take place in isolation, even in agile environments. Testers should do most testing, and I strongly believe that there is still a tester role on an agile team, but everyone should participate in test planning. Testing just isn’t an isolated activity!

Everyone needs to work together to:

  • Define what quality means to us
  • Identify product risks
  • Provide input on test planning

If it’s all about the team, does this mean we don’t need dedicated testers anymore? No. Testers need the input from all different roles, and hence perspectives, but the actual task of testing still requires a special skill set. Testers are trained to have a different mindset than other roles. Testers have unique knowledge and experience of typical failures, weak points, etc. Testers are user advocates. But testers are not on the other side of the wall – they’re part of the team. Testers don’t break software – testers help build better software.

What about the Revolution?

A revolution is “a dramatic and wide-reaching change in the way something works or is organized or in people’s ideas about it”.

I believe we need a big change – we need to change how we think about testing. But who will start the revolution? Regardless of your current role, I want you to be part of it – help me revolutionize testing. You can make a difference – your ideas could be what will make testing take the next evolutionary step.

1 http://en.wikipedia.org/wiki/Moore%27s_law
2 Your Creative Power, Alex Osborn, Charles Scribner’s Sons, 1948
3 http://www.newyorker.com/magazine/2012/01/30/groupthink

Quality Power

This post is in response to the timely and provocative question from the ASQ CEO, Bill Troy.  In his View From The Q post, the challenge is made to members of the Quality community to expand their capabilities to include leadership.

The Quality profession has two sources of influence and power from which it must deliver its intended function:

  • Inspirational power derived from the passionate convictions of sincere appeals to a shared positive mission and vision. (Transformational)
  • Punitive power gained by enforcing specifications, requirements, regulations, and compliance with official policies. (Control)

In my view, leadership capability is a necessity.  Our profession has evolved from being control-oriented and reactive (consider that the original name of our organization was American Society for Quality Control) to being transformation-driven.  The ASQC-era definitions of quality are summarized below:

  • Conformance to requirements
  • Fitness for use and purpose

These reflect restraining forces, where the Quality function would provide data collection and process control services to refine existing work, and prevent undesirable outcomes.  In this context, Quality would exert power primarily as an enforcer of customer requirements and industry regulations.

In the current ASQ-era, a grander vision is sought, which extends to driving forces of organizational excellence, business transformation, social responsibility, and personal achievement.  Leadership is required for the successful mobilization and transition from the status quo to a higher level of greater enlightenment, versatility, and robust operations.

Unlike the control mentality, where influence is built upon defensive reactions to potential maladies, the transformation mentality requires its influence to be drawn from leadership characteristics and principles:

  • An optimistic and competitive vision of the future
  • A call to urgent action and consistent accomplishment
  • Alignment of common priorities with the promise of mutual gains
  • Pursuit of innovative breakthroughs and evolutionary advancement
  • Continual motivation to overcome obstacles and challenges

Without a determined and deliberate approach, and the proper application of effective and contextually appropriate leadership methods and techniques, the necessary transformations and aspirations of excellence will not occur.  Quality without leadership is reduced to a bureaucratic function, tolerated only by its minimal necessity.  In contrast, when visionary leadership is applied, Quality becomes the driver of all organizational activities, and the foundation of personal and professional success and excellence.

Can Playing Games Improve Tester Skills? Exploring the Science Behind Games

Are gamers predisposed to careers in software testing? The prevalent perception seems to be that testers enjoy playing games more than the general population, and that playing games makes us better testers by honing cognitive skills, which are especially important in our field. Can it be that people who enjoy games, riddles and puzzles are indeed better equipped to handle challenging software testing tasks?

Reading blogs and tweets, and listening to presentations and discussions at conferences, there is a hypothesis – often treated as a theory – that playing games is an important part of gaining and improving tester skills, but there is a disconcerting lack of empirical evidence accompanying such claims. Our job as testers is to question unsupported statements and to put hypotheses to the test, so I decided to go on a quest to discover if there is any scientific evidence that playing games can improve tester skills.

Defining Games and Play

“Play” and “games” mean very different things to different people, and “play” especially is an emotionally loaded word. Play is generally thought of as a voluntary activity that is fun and where the primary purpose is not to achieve a specific goal. Unfortunately, play is often contrasted against work, but play is important in the work environment too. Play encourages teamwork, helps us build relationships and promotes creativity and innovation. Play is a way to learn and grow, both as individuals and teams, and should in itself never be considered a waste of time1. When I use the phrase “playing games” in this article, my definition of “games” is very broad, including board games, puzzle games, riddles, video games, etc.

Before trying to answer if there is any scientific evidence suggesting that playing certain games can improve tester skills, we need to break down the question and look at one part at a time:

  1. Can cognitive skills be improved at all, regardless of method?
  2. Can playing games improve cognitive skills?

Neuroplasticity

Historically, the brain has been seen as static, a hardwired computer whose circuits are finalized in our childhood, but it turns out that the brain is anything but static – the brain is plastic and can be rewired. Neuroplasticity refers to changes in neural pathways and synapses due to changes in behavior, environment and neural processes. It is possible to change both the anatomy (the structure) and the physiology (the functional organization) of the brain. In other words, science indicates that at least the brain can change, which means that it is plausible that cognitive skills can be improved. But can playing games change the brain?

Current Research

As a layman, gaining insight into today’s research on the relationship between playing games and neural development is not easy. Most studies are published in journals that the public does not have access to, and what appears in public media and easily available literature tends to be skewed and sensational. There are also a lot of readily available studies done by the very companies that make a livelihood out of providing commercial brain-training games. It’s hard to believe that these studies follow the scientific method and are not biased.

To make matters even worse, a lot of studies do not use a control group, and how do you design a control group for, as an example, a study on the results of playing Mastermind? What do you have the control group do? There are also studies on children that run quite long, which makes it hard to know if the improvements seen are from playing the game, or just natural learning that could happen in six or twelve months. Finally, there is the problem of sample sizes. Many studies use groups of just a dozen or so people and, as a consequence, the statistical reliability of the results is very low.
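
To give a feel for why a dozen people per group is a problem, here is a back-of-the-envelope simulation (the effect size, group size, and significance level are assumptions for illustration, not a reanalysis of any particular study): with twelve participants per group and a modest real improvement, a standard t-test would miss the effect most of the time.

```python
# Rough power simulation: with ~12 people per group and a modest true effect,
# how often does a two-sample t-test reach p < 0.05? (All numbers are assumptions.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, effect_size, trials = 12, 0.5, 5_000
detected = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    trained = rng.normal(effect_size, 1.0, n_per_group)   # true gain of 0.5 standard deviations
    _, p_value = stats.ttest_ind(trained, control)
    detected += p_value < 0.05

print(f"Chance of detecting the effect: {detected / trials:.0%}")   # typically well under 50%
```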

Reasoning Skills and Speed Training

I found one study2 on children’s learning that looked at reasoning skills and processing speed. Reasoning skills account for our ability to plan and build new relations between elements, and processing speed measures rapidness of visual detection. Reasoning represents the capacity to think logically and solve problems in novel situations, which is a very important skill for testers, who are constantly faced with new and challenging situations and software behaviours we have not seen before. Processing speed corresponds to our ability to quickly process inputs and, in particular, perform quick visual searches, something testers frequently do while working with a GUI or checking log files.

In the study, a group of 7-9 year olds played games for two hours per week for eight weeks. One group played twelve different reasoning games, computerized and non-computerized, individual and collaborative. The second group played twelve speed of processing games.

The children’s cognitive skills showed large improvements, and the group that played reasoning games only improved their reasoning skills, whereas the group that played speed of processing games only improved their processing speed, which shows that the processes are independent. The results of the study imply that reasoning and processing speed can be improved, at least in children, and those skills are important to testers. But does it transfer to adults? A lot of the research turned out to be focused on children’s learning, but I did find an interesting experiment that was focused on adults – Brain Test Britain.

Brain Test Britain

The Brain Test Britain experiment3 was conducted through Lab UK4, which is a BBC website that encourages citizen science and invites the public to participate in groundbreaking scientific experiments. The Brain Test Britain experiment was a full clinical study launched in 2009, designed by researchers at the University of Cambridge and King’s College London.

13,000 people spent ten minutes three times a week for six weeks playing brain-training games. The participants were split into three groups:

  • One group playing reasoning games
  • One group playing non-reasoning games
  • One control group

This study only looked at computer-based games, and the control group did tasks that involved using the Internet but not any actual brain training.

The study found no evidence that playing brain-training games transfers to other brain skills. “Practice makes perfect” – playing a specific game makes you better at…that particular game, but playing games doesn’t make you smarter, or boost your brain power.

Conclusions

I looked at additional studies with completely different setups, using both computerized and non-computerized games, with people playing games both individually and collaboratively. There is definitely scientific support for the idea that playing games can improve cognitive skills such as:

  • Procedural skills
  • Debugging skills
  • Visual search skills
  • Attention skills
  • Reasoning skills
  • Processing speed

However, the transfer out of context appears to be limited or even non-existent, and the reliability of the studies can be questioned. Nonetheless, there seems to be no doubt that the brain can change. So, is there any point in playing games or should we stop playing? Whether playing games makes us better testers or not, there are other benefits to consider too:

  • It’s FUN!
  • Games can be used to provide a safe environment in which people can learn to be skeptical and questioning.

I set out to decide if there was any empirical evidence that playing games makes us better testers, and even though I think the information I found was inconclusive, I will most certainly continue playing games.

1 “Play: How it Shapes the Brain, Opens the Imagination, and Invigorates the Soul”, Stuart Brown, Penguin Group, 2009
2 Developmental Science 14:3 (2011), pp 582–590
3 https://ssl.bbc.co.uk/labuk/experiments/braintestbritain/
4 https://ssl.bbc.co.uk/labuk/

Finding Bugs Under Software Rocks – Exploratory Testing as a Bug Hunt

One aspect of software testing is trying to find as many of the bugs in the software under test as possible and, in that sense, software testing can be viewed as a “Bug Hunt”. As a metric for evaluating the effectiveness of our “Bug Hunts”, we can measure the total number of bugs found, with more value being given to bugs of a higher severity. We should pay careful attention to where in the software we’re looking for bugs by selecting particular testing styles in our test strategies. One testing style we often find ourselves using is scripted testing, which brings with it a certain kind of a “Bug Hunt” that is different than when using exploratory testing. When we do find ourselves engaged in scripted testing, do we still allow ourselves to look under any rocks in our software for hidden bugs, or are we being so rigid in how closely we follow the steps in our test cases that we’re essentially limiting ourselves to only finding bugs “hiding in plain sight”?

A Few Term Definitions

We think of software testing as having two main styles – scripted testing and exploratory testing – but it is really more of a spectrum of testing styles rather than two distinct styles. We don’t have to limit ourselves to thinking of testing as being either purely one or the other, but instead we can consider where on this spectrum of testing styles we may want to land, both while we are devising our test strategies and while we are performing our testing.

Scripted Testing: This testing style, in its purest form, involves separating the activity of designing our tests from the activity of executing them. Our test cases are often detailed down to each test step that will be executed and the expected results of each step. Our completed test cases are then executed, as they were designed.

Exploratory Testing: This testing style involves merging the activity of designing our tests with the activity of executing them into a single activity where the test design and the test execution guide each other as we explore specific areas of the software.
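
One way to picture the contrast (a hedged sketch – the login example, function names, and session fields are invented for illustration): a scripted check fixes its steps and expected results in advance, while an exploratory session fixes only a charter and lets the notes and findings emerge as the testing unfolds.

```python
# Scripted style: steps and expected results decided before execution (pytest-style check).
# The application under test, `login`, is a hypothetical stand-in.
def login(username, password):
    return "Welcome" if (username, password) == ("alice", "s3cret") else "Invalid credentials"

def test_login_with_valid_credentials():
    # Step 1: submit known-good credentials.  Expected result: welcome message.
    assert login("alice", "s3cret") == "Welcome"

# Exploratory style: design and execution merge; only the charter is fixed up front.
exploratory_session = {
    "charter": "Explore login error handling with malformed input; look for crashes and leaks",
    "timebox_minutes": 60,
    "notes": [],        # filled in as observations lead the testing
    "bugs_found": [],   # anything interesting uncovered along the way
}
```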

A Comparison of Styles

Considering the above definitions of scripted and exploratory testing, we can begin to see, through comparison, that each testing style may have distinct strengths and weaknesses when we use them for “Bug Hunting”.

Scripted testing is a structured activity often based on validating the software requirements and related software design documentation. While creating our test cases, we consider the bugs we expect to find and where we expect to find them. On executing our test cases, it’s likely we’ll find exactly the kinds of bugs the test cases were designed to find in exactly the places we’re expecting to find them. It’s also likely that any kinds of bugs we hadn’t expected to find while designing our test cases in places we hadn’t expected to find them will go unnoticed. Additionally, the bug finding potential of particular test cases will be exhausted if we continue executing them, since our testing will essentially remain focused on hunting for bugs that have already been caught.

Exploratory testing is an investigative activity of changing our test activities based on what we actually observe while we’re interacting with the software and then following those observations wherever they may lead us, which we hope will be places in our software where bugs are hiding, just waiting for someone to come along and find them. Software requirements and related software design documentation still guide our testing; however, the primary factors for finding bugs during testing are simply the tester’s own observations from interacting with the software, combined with their intuition and prior experience regarding where to seek out potential bugs.

I’d like to propose that it’s entirely possible to perform scripted testing using an exploratory mindset if we allow ourselves to use our instincts to know when straying a bit from a scripted path may uncover additional bugs that wouldn’t otherwise be uncovered.

Staying on the Path or Straying from the Path

Imagine we’re observing a pair of software testers on an adventure through the first build of an application and that they are going to perform manual scripted testing. Let’s think of the software as a bug-infested forest that’s waiting to be explored, and this round of the manual test effort is a purposeful walk through that forest seeking out these bugs.

Tester number one follows very closely to each of the paths that have been laid out in the test cases. He or she will likely find all of the bugs that are “hiding in plain sight”, in the precise spots where each of those test cases has been designed to find them. Tester one is also able to execute the testing at a fairly steady pace, thanks to following so closely to the scripted path of each test case.

Tester number two isn’t following the paths defined in the test cases as closely as tester one. A few of the test cases lead down minor variations of paths that tester number one has already been down without finding any bugs. Tester number two spots something kind of odd out of the corner of his or her eye. Tester two has been doing a lot of that while testing, which is likely why he or she hasn’t been proceeding at a rapid pace through the assigned test cases. In fact, tester number one also briefly spotted this same oddity, but since he or she was focused on following the path in each test case, he or she didn’t pay too much attention to it. Tester number two pursues this distraction, the test case execution having been put on hold while leaving the scripted path for a bit of exploration in this particular area. Sure enough, that oddity is a rock, and there’s something strange about the way it’s moving. It’s at this point that our explorer picks up the rock, curious to see what may lie underneath it. Ah hah, it’s a bug! The bug list grows by one. In fact, this turns out to be a major bug which was much better found sooner rather than later.

I leave you with some points to consider as we try to evaluate the pros and cons of scripted and exploratory testing. Considering that tester number two found the only major bug during this first round of testing, could we conclude that their “Bug Hunt” was the more successful one, even if they didn’t execute as many test cases as tester number one? Considering the outcome of this round of testing, how could this experience influence your test strategy going forward?

Strategy: Surviving Contact by Using Mobilization and Governance

This post is in response to the latest View from the Q blog on Strategy.

I don’t have anything to add to the formation of a strategy, as Bill Troy addressed this subject concisely and effectively.  Instead I will place focus on the next step, namely to follow through and execute upon the defined strategy.

I am reminded of a quote by the noted German military leader and strategist, Helmuth von Moltke:

“No battle plan survives first contact with the enemy”

This reminds us of the importance not only of crafting a suitable strategy, but managing its execution in such a way that the overall purpose of that strategy can be fulfilled, even in changing circumstances.

Consider the recent release by Apple of its iPhone 6 in the context of its competitors Nokia and BlackBerry (Research In Motion).

[Image: iPhone 6]

While there is no doubt that all three organizations had a strategy, the presence and eventual predominance of Apple had a devastating effect on both Nokia and BlackBerry.

Even if the technology solutions are equivalent between the companies, the marketing and promotional advantages of Apple have transcended technology to instill a memorable and indelible impression in popular culture. For example, the involvement and engagement of the band U2 has drawn considerable interest and traffic to Apple in a way that neither Nokia nor Blackberry could have imagined.

[Image: U2 and Apple]

So what is needed to ensure that our strategies survive contact with the enemy, which in the context of a business strategy would be a competitor?

I propose three key practices: Mission, Mobilization, and Governance.

MISSION

Having a clear mission which defines the vision of success is essential if all of the participants are to overcome their petty differences and jointly embrace and pursue the common good.  The greater the mission, the more compelling the dedication and devotion will be to its fulfillment and achievement.  For this reason, it is essential that the mission and vision align with the values of the organization and its people.

[Image: Mission and Quality]

In the absence of clear and unifying mission and vision objectives, the Quality profession tends to fall into a destructive pattern which I refer to as Quality “Indig” Nation.  Rather than focus on the tasks at hand, the participants will devolve into these damaging archetypes which can impair and undermine the overall strategy.

Cynics: The quest for defects and faults will grow to farcical proportions, so that the Quality function becomes a voice of negativity and defeat.  While everyone else within the organization is pursuing the objectives, the Quality function takes upon itself the role of contrarian, repeatedly expressing why “it will never work”.

Purists: The dogmatic natures of certain “believers” will oppose any adaptation or modification that is deemed “apocryphal” and in violation of the canon of the “Holy Saints” – Deming, Toyota, ISO standards, or whatever technical reference is treated as scripture.  Spirited and emotional debates can consume valuable time and energy without accomplishing or fulfilling the strategy objectives.

[Image: Sh*t Lean Sigma Says]

Tribalists: The Quality profession does not have a homogeneous background, but has evolved through the combined efforts of different types of practitioners with their particular expertise.  However, when one of these groups loses sight of the greater good and mutual respect, and seeks to dominate at the expense of the others, it is like the person with the hammer who sees all problems as a nail.  Rather than addressing the overall problems, the emphasis is on promoting a particular concept or set of practices, and opposing balance and diversity in the solution.

Blockers: There are those who have invested so much of their time and energy into a particular management system that they are resistant to change, lest a newly introduced dynamic threaten its sanctity and continuity.  This resistance to change will inhibit progress and innovation, and prevent organizations from making the necessary adjustments and adaptations.

Esoterics: Often the work of Quality practitioners is accompanied by unusual terms and trends.  Engineers and project managers often adopt the nomenclature of spiritual pursuits or martial arts.  For example, one does not have to be a Sensei, a Black Belt, nor a Guru in order to track late deliveries using a control chart.
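
The point stands with very little machinery. As a hedged sketch (the weekly late-delivery counts are invented, and this is just one common way to compute individuals-chart limits), the control limits are simply the mean of the measurements plus and minus three times an estimate of their routine variation:

```python
# Minimal individuals (XmR-style) control chart for weekly late deliveries (made-up data).
late_deliveries = [4, 5, 3, 5, 6, 4, 5, 5, 4, 6, 14, 5]

mean = sum(late_deliveries) / len(late_deliveries)
moving_ranges = [abs(b - a) for a, b in zip(late_deliveries, late_deliveries[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR convention: estimate sigma as (average moving range) / 1.128.
sigma = mr_bar / 1.128
ucl = mean + 3 * sigma
lcl = max(0.0, mean - 3 * sigma)

print(f"Centre line: {mean:.1f}, UCL: {ucl:.1f}, LCL: {lcl:.1f}")
for week, count in enumerate(late_deliveries, start=1):
    flag = "  <-- investigate" if count > ucl or count < lcl else ""
    print(f"Week {week:>2}: {count}{flag}")
```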

From a Quality perspective, the object is not to flaunt your knowledge and expertise, but simply to use the best mix of techniques and methods to fulfill the mission in a way that supports the greater good of the organization.  As a profession, we must overcome our negative archetypes to serve and lead in the effective execution of our defined strategies.

MOBILIZATION

In order for the strategy to be effectively deployed, the necessary capital, materials, and resources should first be identified and obtained.  One cannot assume that everything will be available on demand for the duration, and often a business case is needed to justify why this strategy requires the priority allocation of goods and services.

Improper mobilization can actually do more harm than good to the execution of the strategy, as interrupted or incomplete work will be viewed as a failure of the overall strategy.  Nothing succeeds like success, so mobilization is important to ensure a continuous set of progressive victories.

[Image: Symphony]

Symphony of Work: This approach requires that all participants are aware of their particular part in the overall effort.  In this context, the timing is as important as the overall fulfillment, as there are many interdependencies.

Coordinated Advancement: The advancement must be coordinated to ensure that all components are able to make the necessary gains.  In some cases, this requires adjustments which are not favorable to the high performing units, who must divert from original objectives to support their peers for the greater good.  A military example of this was found when Gen. George Patton, having led the 3rd Army through France, was summoned to redirect his army north in order to assist with the Battle of the Bulge.

Constraints Management:  This practice, as described in Goldratt’s book The Goal, advises that progress and pace are determined by the constraint or bottleneck within the system.  By managing the bottlenecks or critical project paths, the overall speed will improve.
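
The core arithmetic of constraints management is easy to show (a toy sketch; the stage names and weekly capacities are invented): the system can only move as fast as its slowest step, so improving anything other than the bottleneck changes nothing.

```python
# Toy throughput model: the bottleneck sets the pace (illustrative numbers only).
stages = {
    "requirements": 30,   # units of work per week each stage can handle
    "development": 25,
    "testing": 12,        # <- the constraint
    "deployment": 40,
}

bottleneck = min(stages, key=stages.get)
print(f"Bottleneck: {bottleneck}, system throughput: {stages[bottleneck]}/week")

# Doubling a non-constraint stage does not help...
stages["deployment"] *= 2
print(f"After improving deployment: {min(stages.values())}/week")

# ...but relieving the constraint does.
stages["testing"] += 10
print(f"After improving testing:    {min(stages.values())}/week")
```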

Effective mobilization will help the strategy survive contact by ensuring adequate materials and resources, and coordinating the logistics and constraints to overcome interruptions and improve progress.

GOVERNANCE

Governance is essential not only to strategy but also to Quality.  Without Governance, the lack of transparency and visibility of actual results and accomplishments will lead to future estimates being based on inaccurate assumptions.

[Image: Governance Framework]

Accountability:  In any venture, it is important to have a work breakdown structure which clearly reveals the parties responsible and accountable for fulfillment of that strategy.  From this structure and framework, the interdependencies are revealed and can be managed.  With accountability, expectations can be refined so that future estimates will be more accurate.

Action: When the Governance function reports that the strategy may be compromised, the execution steps must change.  The organization should determine whether this requires an adjustment in resources, or a particular mitigation or contingency step.  Once the actions have been identified, they must be effectively mobilized and rapidly deployed.

Adaptation: It may be necessary to shift strategies within the overall environment.  This may be done in response to both negative risks and positive breakthroughs.  For example, Pfizer had developed Sildenafil as a treatment for hypertension and angina.  However, after clinical trials revealed a particular side effect, Pfizer marketed this drug as Viagra, which became a highly profitable medication to combat erectile dysfunction.

[Image: Viagra]

Acceleration:  If the strategy is being successfully deployed, the next step is to accelerate the implementation to increase the scope, scale, and impact of the strategy.

By following a strategy with a compelling Mission, adept Mobilization, and responsive Governance, the rate of successful fulfillment will be substantially improved.

Evolution – Lean Six Sigma Examples

There is a discussion as to whether the future of Quality will progress along an Evolutionary or Revolutionary path.  This post will show how Quality has evolved from its origins to its present form.

One observation I would like to share is the convergence of practices into a single collection of knowledge.  ASQ has compiled a QBoK (Quality Body of Knowledge) from which practitioners can draw and apply their expertise to help companies achieve new levels of performance, quality, cost, delivery, and assurance.  This convergence is evident in the Lean Six Sigma domain.

In the recent (2009) ASQ Press publication, The Public Health Quality Improvement Handbook by Ron Bialek, Grace Duffy, and John Moran, there is an excellent visual display showing how what we call Lean Six Sigma evolved from various influences.

[Image: Lean Six Sigma Evolution]

This diagram reflects a convergence of practices into a common category.  This viewpoint is corroborated in similar peer-reviewed publications.

Juran is cited as one of the early influences of the Quality profession.  The prolific and pertinent contributions from Juran have been a cornerstone of our profession, and predated the concepts of both Lean and Six Sigma. The passage below is from page 748 of Juran’s Quality Handbook, 6th Edition, and expresses the high-level distinctions between what we term Lean and Six Sigma. This reference aligns with the diagram above.

“Lean Six Sigma is a combination of both Lean and Six Sigma quality approaches.  The underlying tenet of the Lean approach is efficiency, whereas that of Six Sigma is effectiveness.  The integration of the two approaches provides a balanced approach to quality.  By applying the Lean tools, the processes become stable, constraints and costs to operations are reduced, and the speed is optimized.  Six sigma tools can then be applied to identify key variables in the process, establishing operating ranges, and implement control methods to ensure the problems are corrected.”

The Public Health Quality Improvement Handbook elaborates on the distinctions between Lean and Six Sigma, both in text and visually.

[Image: House of Lean]

The “House of Lean” captures the key practices and characteristics of Lean, and also establishes the scope and limitations of Lean as a singular approach.  For overall improvement, Lean is not the sole solution, but one of several practices recommended by the authors.

[Image: DMAIC]

More complex improvement initiatives require the thoughtful and deliberate diagnostic approaches reflective of a Six Sigma project, as shown in the image of the Define-Measure-Analyze-Improve-Control cycle.  One particular question within the Improve category (What specific activities are necessary to meet the project’s goals?) channels directly into the proven, tactical Lean techniques and practices.  In this way, Six Sigma and Lean are not conflicting, but complementary and synergistic activities that, when conjoined, enhance and expand their respective effectiveness.

[Image: LSS Tools]

By having a unified Body of Knowledge, the respective advantages of Lean and Six Sigma can be combined. Rather than mutually exclusive “either/or” scenarios, the Quality practitioner can select and apply the most appropriate and relevant practices to fit the situation and serve the best interests of the client or organization.  Many of the practices listed in the table above are common (e.g. failure mode and effects analysis, run charts, five whys method) or use common methods (e.g. brainstorming, kaizen).

If this is reflective of the overall Quality profession, then it shows that over time, Quality will continue to evolve to incorporate more complementary practices, thus becoming more robust, relevant, and capable of serving our constituents.

Evolution along the Quality Continuum

In response to the question posed in the recent View From The Q concerning whether the progression of Quality will follow an Evolutionary or Revolutionary path, I will first describe my vision of where the Quality domain will be in fifty years.

The Quality Continuum shows the progression of our profession to its present state, and sets the trends for future expectations.  The changing nature is reflective of many considerations which include:

  • technological advancements in communication and data visualization for decision support
  • increased education and sophistication among employees and customers
  • increased options and opportunities through globalization
  • divergence of mentalities and attitudes across generations to the Millennials and beyond
  • the fluidity and flexibility of future enterprises relative to “brick and mortar” legacy operations

Based on where Quality has been, where it is today, and where the trends are leading, my extrapolations will generate a profile of an aggregation that is distinctly different from what we know today.

BUSINESS OPERATORS

  • From CONTROL to PERFORMANCE EXCELLENCE, Quality is moving toward emphasizing INNOVATION
  • From OPERATIONAL to CUSTOMER-DRIVEN, Quality is moving toward being MARKET CREATORS
  • From COST CENTERS to COST REDUCTION AGENTS, Quality is moving to becoming a MONETIZED PROFIT CENTER

METHODOLOGY

  • From PRESCRIBED to ADAPTIVE, Quality is transitioning to be CREATIVE
  • From RESPONSIVE to PREDICTIVE, Quality is assuming a role of GOVERNANCE
  • From MANUAL to AUTOMATED, Quality methods will be INTUITIVE

MENTALITY

  • From LIMITED to EXPANDED, participation in Quality will be UNIVERSAL
  • From RESTRICTED to FLEXIBLE, Quality will be UNLEASHED
  • From PASSIVE to ACTIVE, Quality will be DRIVEN

My personal inclination is that our profession has evolved over the last several decades, and will continue to evolve into the future.  I have great enthusiasm for the future of our profession, which I believe will be characterized by optimal levels of self-determination and meaningful fulfillment of personal and professional aspirations.

With every transition and transformation comes a resistance to those changes that must be overcome in order for the changes to successfully materialize.  When rational appeals to logic and economic advantage fall on deaf ears and closed minds, then more assertive “revolutionary tactics” must be applied, lest the vision and wisdom be obscured and forgotten.

The evolution of Quality is evident and irreversible.  Whether the desired future state of Quality comes to fruition depends on the passion and conviction of our leading professionals and their willingness to unleash the creativity within to revolutionize our global experience.

Seeing What Isn’t There – The Use of Heuristics

Have you ever been about to document a defect discovered in an application under test and wondered how best to compose your words? You want to make sure the defect is clear to the stakeholders, making sure it gets the attention it deserves. If you, like me, sometimes have trouble articulating your thoughts and being persuasive, then a useful tool is “heuristics”. An example of a heuristic is “Consistent with the product’s history”, meaning the present version of the product is consistent with past versions of itself. Derived from the Greek “heuriskein”, which means “find or discover”, heuristics are best suited for situations where the issues are not black or white, but occurring in the implicit areas that are somewhere in-between, and where there can be a definite lack of specifications. As Michael Bolton pointed out in “Testing Without a Map”, ““Completeness” is entirely dependent upon perspective and context. Even so-called “complete” specifications contain much that is implicit. Some specifications are not supplied in formal documents, but come to you through e-mails, conversations, or through your own inferences.”1

The dictionary definition of a heuristic, or “rule of thumb”, is a guideline serving to indicate or point out and encourage a person to learn, discover, understand or solve problems on his or her own, as by experimenting, evaluating possible answers or solutions. They can add weight to your findings and really turn a defect report into a persuasive document. As the Association for Software Testing2 website points out, “The key goal of the bug report author is to provide high-quality, well-written, information to help stakeholders make wise decisions about which bugs to fix when.” You are doing all this to make sure that the bug reports do not wind up being just neutral technical reports.

There are several heuristics that are applicable to testing any kind of product, whether it is software or a service. A heuristic is simply a guideline used to determine whether a given test may pass or fail.

Since heuristics are a tool, they do not come with a guarantee that they will give you the right answer. Sometimes, certain heuristics can contradict other valid heuristics. They can only point you to a potential problem and, in doing so, aid in making a decision. They are not comprehensive. Heuristics help us recognize problems but they don’t help us solve them; they are something to consider. There are plenty of other ways to decide whether a product is acceptable or not.

The original list of heuristics that testers are familiar with comes from James Bach3. To easily remember the heuristics, James came up with the mnemonic “HICCUPPSF” which stands for:

  • Consistent with the product’s history – The present version of the product is consistent with past versions of itself, meaning a product’s features and functionality should be consistent with its past behaviour.
  • Consistent with the product’s image – The product is consistent with the image its makers want to project to its customers or users. This is also known as “branding”. Customers can build strong emotional attachments to products so the experience should be seamless from version to version.
  • Consistent with comparable products – The product is consistent with a comparable one; i.e. its closest competitors. You want to have a rich feature set that is equal to or, ideally, better than your competitors.
  • Consistent with claims – The product must behave the way the marketing team claims it will. These claims can be made through literature, specifications, help files and conversations or emails.
  • Consistent with user’s expectations – Is the product consistent with what we think the user wants?  What they can reasonably expect?
  • Consistent with purpose – This would include both explicit4 (precisely and clearly expressed) and implicit5 (suggested though not directly expressed) purposes of the product. Microsoft Word offers a rich set of formatting features. Notepad does not. The two applications serve different purposes which must be kept in mind while testing.
  • Consistent within product – Each feature of the product is consistent with comparable features in the same product i.e. ‘look and feel’ is consistent.
  • Consistent with statutes, regulations and binding specifications – Does the product abide by applicable laws and statutes? Does it comply with legal requirements and restrictions? “These differ in that they are imposed on developers by outside organizations.”6
  • Consistent with familiar problems – Does a problem from an earlier version of the product still exist? Has it been deferred because it is irrelevant, obscure, or has it been mistakenly considered as having no customer impact?
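
As a hedged sketch of how the mnemonic can be put to work (the checklist structure, function, and field names are my own, not part of Bach’s list), the heuristics can be kept as a simple data structure and walked through when writing up a finding:

```python
# HICCUPPS(F)-style consistency checklist as a simple data structure (illustrative only).
CONSISTENCY_HEURISTICS = {
    "History": "Consistent with past versions of the product",
    "Image": "Consistent with the image the makers want to project",
    "Comparable products": "Consistent with close competitors",
    "Claims": "Consistent with what marketing, specs, and help files say",
    "User expectations": "Consistent with what users can reasonably expect",
    "Purpose": "Consistent with explicit and implicit purposes",
    "Product": "Consistent within the product itself (look and feel)",
    "Statutes": "Consistent with laws, regulations, and binding specifications",
    "Familiar problems": "Free of problems we have seen before",
}

def heuristics_violated(names):
    """Return the heuristics an observed behaviour appears to violate."""
    return {name: CONSISTENCY_HEURISTICS[name] for name in names}

# Example: the bilingual-site finding described below touches two heuristics.
for name, description in heuristics_violated(["Product", "Image"]).items():
    print(f"Inconsistent - {name}: {description}")
```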

I can remember testing changes made to a major financial institution’s website and finding some glaring differences between the English and French versions of the same site. The development of the English site had been contracted to one company, while the development of the French site had been contracted to another. This had obviously been done to make sure the content was accurate in both sites and official languages, so users could better understand financial matters that concerned them. Because the sites were developed independently, functionality was slightly different, embedded links were in different places, etc. When I documented these defects and submitted them for review, I argued that there was no consistency within the product, which could adversely affect the user’s perception of the company. As the major stakeholders agreed with my conclusions, the defects were upgraded to critical and fixed before the site changes went live. Consistency within the product was important, especially in a bilingual country.

Using these heuristics, testers can see not only what is there, but also what might be missing. Something expected that is missing might threaten, or at the very least be detrimental to, the value of the product. To summarize, heuristics are invaluable tools that help testers provide developers, stakeholders and everyone else with as much well-written information as possible to make informed decisions about which defects to fix. They have been in use for as long as there have been products to test, which is a testament (pardon the pun) to their relevance.

1 http://www.developsense.com/articles/2005-01-TestingWithoutAMap.pdf
2 http://www.associationforsoftwaretesting.org
3 http://en.wikipedia.org/wiki/James_Marcus_Bach
4 http://www.oxforddictionaries.com/definition/english/explicit?q=explicit
5 http://www.oxforddictionaries.com/definition/english/implicit?q=implicit
6 http://www.testingeducation.org/BBST/bugadvocacy/BugAdvocacy2008D.wmv

Shorter Interval Projects Are Trending, But What Does That Mean For Testers?

I find it interesting that software development projects are generally being planned and executed with shorter time frames than before. Software development projects that were once taken on as a single large project are now being broken down into smaller sub-projects. Project releases that might have taken a year or longer are now being rolled out more frequently in smaller increments. So what is driving this trend and what does it mean for software testers?

Thinking in incremental improvements

How do you approach a task like searching online? What is your strategy? If you decided to search using the same strategy as some traditional software projects, like Waterfall, you might spend some time in advance planning your search terms. You might consider what you know about your search topic and how Internet search works. Once you have completed the planning phase, you might proceed to execute your search. Confident that careful upfront planning means your search term was ideal, the final step is to go through the results pages until you find what you are looking for.

No one searches like that. Typically, only a minimal amount of consideration is put into a search term, and rarely would we look at the results beyond the first or second page. Instead of looking through pages of results, we would typically prefer to refine and improve the search terms and then search again. This type of incremental improvement comes naturally to us and people seem to think it works better. What if we applied this concept to software development projects and settled on only the basic requirements and then refined them later rather than plan out every detail we can think of? This is Agile.

Thinking in short intervals

Attention spans may be shorter than they were before. Have shows like Sesame Street trained us to think in smaller chunks rather than having to absorb a huge concept? How do you figure out the best way to do something new? Is planning everything out in advance important, or is it sufficient to only understand the overall big picture? Try the Lemon Toss1 game and take notice of what your natural approach is.

[Image: Lemon Toss game]

Do you think you and your team would score higher with option A or B? Would you be surprised that more people would feel more comfortable with option B? Breaking the game up into smaller chunks and having the ability to revise your strategy throughout is becoming more typical of how software development projects are set up. There are many parallels between this game and software projects:

  • Game rules vs. software requirements
  • Scoring points vs. acceptance criteria
  • Game option A or B vs. Waterfall or Agile

More people find it natural to think in short intervals and to improve incrementally. It’s easier to take on a large project by breaking it into smaller manageable chunks and, indeed, this appears to be a trend in how software projects are being structured. The overall big picture of the project is important, but working out the details for every phase before starting the work is not as natural. That would be like planning your search terms in advance, or playing the lemon game with option A. It goes against how we naturally would think.

How we think naturally goes hand-in-hand with shorter projects and Agile methodologies. It’s all the same thing: Thinking in shorter intervals with incremental improvements. So, as the trend continues and large projects are broken into smaller sub-projects, what are some things to consider?

A 9-month project today was a 12-month project before. Why? Because it took 12 months last time and, because of Agile, we think it is going to go faster this time. It seems that Agile is set up to help us move faster because we can negotiate the scope and schedule with stakeholders sooner. However, though we choose Agile for speed, that is not what we usually end up getting. What we do get is a clearer picture of why it isn’t going faster and a better understanding of what needs to be prioritized or cut.2

So what does this mean for software testers?

As testers adapt to shorter projects, does anything change from our perspective? Testers, just like programmers, business analysts, and project managers, might need to adapt to new technologies and become more comfortable with new concepts to meet the demands of shorter schedules. Communication on shorter projects might have different needs as well, but the basics of designing good tests remain as important as ever.  One approach to testing that becomes especially useful in Agile is to test around the changes, whether they are changes in code, features, or customer expectations. Testing around the changes can help make the process manageable as it goes faster. Using test automation can be a key part to managing those changes over the multiple sprints in a project and at an earlier stage in the software development lifecycle.
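
As one hedged sketch of “testing around the changes” (the file-to-test mapping and all names are invented; real change-based selection is usually driven by coverage data or a build tool), the idea is simply to map what changed in a sprint to the tests that exercise it, and run those first:

```python
# Toy change-based test selection (all file and test names are hypothetical).
TEST_MAP = {
    "checkout.py": ["test_checkout_totals", "test_checkout_discounts"],
    "login.py": ["test_login_success", "test_login_lockout"],
    "reports.py": ["test_report_export"],
}

def tests_for_changes(changed_files):
    """Pick the tests that cover the files touched in this change set."""
    selected = []
    for path in changed_files:
        selected.extend(TEST_MAP.get(path, []))
    # Files with no mapped tests are a prompt for exploratory testing and new automation.
    unmapped = [path for path in changed_files if path not in TEST_MAP]
    return selected, unmapped

selected, unmapped = tests_for_changes(["checkout.py", "new_feature.py"])
print("Run first:", selected)
print("No mapped tests (explore here):", unmapped)
```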

However, are we in danger of losing something important as we move toward shorter projects? Try playing the lemon toss game again, but play with option A. You might find that, with the amount of planning time given, it is useful to write out your strategy for everyone to see, especially if the group you are playing with is larger. Compare this to option B where most of the communication is verbal and the urge to write down changes to your strategy between rounds is low. Overall, the “Agile option”, option B, may be more natural to you and you might end up scoring more points, but what will happen once the game is over? Will the next team be able to pick it up in six months and benefit from all of your team’s lemon tossing insights?

That is food (or lemon) for thought.

1 Adapted from Boris Gloger, The Scrum Ball-Point Game http://borisgloger.com/2008/03/15/the-scrum-ball-point-game/
2 Lanette Creamer at 2014 Quality in Agile Conference – Small Agile Projects (Delivering Quality in 3 weeks to 9 months) – http://agilevancouver.ca/index.php/events-in-2014/2014-quality-in-agile/sessions
