We Are Not The Quality Police

I started my testing career as part of a user group that did analysis, technical writing, training and testing in the Systems Department of an insurance company at a time when having nontechnical people involved in software development was very unusual.  We provided informal requirements and told the programmers when the results were wrong.  We tested based on things we had done ourselves in the branches, starting at a very basic function level as the code was written and continuing through to end-to-end testing.  At that time, none of our applications were integrated.  Management decided when the system was due, based partly on advice from our supervisor and partly on when it was needed.

Some years later, finally in an environment where testing was a job and a career path, I registered for a certification program that covered software quality from many different angles, including, but not limited to, testing.  My formal training in IT was very limited and I studied a lot of different areas.  Like most certifications, this one had predefined right answers for the exam.  Unlike many of them, you had to figure out what the answer was yourself.  The organization was very clear on one point: there was a right way to do things.  I finished the course with a lot more knowledge, a better idea of what I was doing and a set of blinders that came from knowing I was the Quality Police.  My job, in my mind, was to ensure that things were done properly and I tried to do it well.  I fully expected that an application would go into Production when, and only when, the Test Team approved it.

Some of us aren’t very good at wearing blinders; after a while, mine became uncomfortable.  Do you ever think about why we build software?  The popular answer seems to change every few years.  Sometimes there is a strong belief that the craft itself is more important – and certainly more interesting – than the use of the application to the people who pay for it.  That becomes a hard sell eventually, and we swing back to the other extreme, where business departments hire their own technical staff to do things the way they are told.  Technical debt causes enough trouble after a while that we swing back the other way, again to an extreme.

We are dependent enough upon technology now that IT can’t be in competition with the business if we want to succeed.  We need perspective, an understanding on both sides of why each group’s opinion has value.  Some things have to go in on the specified date, whether or not we like it, things like tax rate changes and any sort of regulatory change.  Some applications are life critical.  Others are business critical; lives may not be lost but the company may go out of business.  Customer expectations drive any web application, and the quality often reflects the speed of delivery.

I have gone from seeing myself as the Quality Police to seeing myself as a reporter.  My job is to report what I see and what I don’t see.  Sometimes, part of that is saying that we have not even looked at some areas, along with my assessment of how high a risk that creates.  Sometimes, it involves arguing over testing scope with a project manager who does not want to look past the edge of our application or our change to see what the effect might be on another application and the company as a whole.  Standing on a soapbox saying ‘I am the Quality Police and you have to do what I say’ has never been very effective.  Soapboxes in the middle of the road only create a road block.  We all do better if we hear about things early, rather than at the last minute.  We all do better when we are listened to, both from the business perspective and the IT perspective.  We all do better when we have choices, and we need to provide alternatives.  As reporters, I think that our real job is communicating with all concerned with respect for their positions and understanding of their problems from their own point of view.  We must be reporters who remember why we do this work if we want to do it well.

The risk is to the business and the decision should be theirs.  It needs to be an informed decision.  Businesses that hire and listen to reporters have a better chance of success.   We need to work together to succeed.

Posted in  All, Business of Testing, Team Building | Tagged , , , | Comments Off on We Are Not The Quality Police

Risk Clustering – Project-Killing Risks of Doom

Our home shines dimly with reflected sunlight, a pale blue dot in the expansiveness of space.  Around us, dark shapes move through the skies, in near countless number.  Some few trend in our direction.  Not seeking, but tending inevitably toward us all the same.  A still smaller number of these will collide brightly upon our atmospheric shield.  In a rare while, one burrows past to impact heavily upon the body of our planet.  Will the next such be a planet-killer?

Watching Near Earth Objects (NEOs)

Consider: asteroids are one type of risk to the viability of “project” Earth.  Did you know?

2013 Chelyabinsk Meteor

By Alex Alishevskikh (Flickr: Meteor trace)
[CC BY-SA 2.0], via Wikimedia Commons

Approximately 10,000 NEOs are currently being tracked, and more objects continue to be discovered.  There are near-daily impacts on the Earth’s surface.  Seems like we should be worried!

What are we doing to safeguard our only home from this external risk?  Is it enough?  … Do we care?

Why Aren’t You Listening?

What does this all have to do with your software project?  Well, have you ever been on a project where you felt it might be doomed if something isn’t done?  Did you tell anyone?  What was the reaction?  What about after the 10th time?

After a while, doomsayers and the dooms they say-on about tend to be tuned-out – especially perhaps if the doom never happens (“I’m not going to fall for that old line”) or if it regularly happens (“We came through it ok last time”).

So how can you clearly point out real Project-Killing Risks of Doom and make sure they get the attention they need?

Risk Management Basics

Risk in software can be defined as the combination of the Likelihood of a problem occurring and the Impact of the problem if it were to occur, where a “problem” is any outcome that may seriously threaten the short or long term success of a (software) project, product, or business.

Managing risk involves:

  • Identifying potential direct and indirect risks
  • Judging the Likelihood and potential Impact of a risk
  • Defining mitigation strategies to avoid/transfer, minimize/control, or accept/defer the risk
  • Monitoring/updating the risk

So, what are the risks on your project?  Can you think of two?  Twenty?  More?

How do you know you have them all? Regardless, do you have the time/money/ability to address them all?

So which do you really need to actively care about?  Which do you “eat-healthy-and-exercise” for?  Which do you have to plan for just in case?  And, which do you … just ignore?

We Have a List of Risks, Now What?

Let’s write our risks down in a central place so that:

  • We can append new risks while not needing to actively remember the risks identified previously
  • We can quantify each with a relative priority or ranking (Risk-Value) agreed between the stakeholders for the current context (eg: subject to revision and re-ranking)
  • We can attach a mitigation and/or contingency strategy to each

We could start with a simple table like the following:

Simple Risk Registry

In the above table, the Risk-Value is a function of the Impact and the Likelihood of a risk, where Likelihood and Impact could be rated by the following:

Likelihood and Impact Ratings
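A registry like this can also be sketched directly in code; the example risks and the 1–5 rating scale below are illustrative assumptions, not prescriptions, and the summing formula is just one simple choice:

```python
# A minimal risk registry sketch. Ratings are on an assumed 1-5 scale
# (1 = rare / negligible, 5 = almost certain / severe).

risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("Key dependency delivered late", 4, 3),
    ("Data migration corrupts records", 3, 5),
    ("UI copy changes after freeze", 5, 1),
]

def risk_value(likelihood, impact):
    """Risk-Value as a function of Likelihood and Impact (summed here
    as one simple choice of combining formula)."""
    return likelihood + impact

# Rank the registry so the leading risks surface first.
ranked = sorted(risks, key=lambda r: risk_value(r[1], r[2]), reverse=True)
for description, likelihood, impact in ranked:
    print(f"{risk_value(likelihood, impact):>2}  {description}")
```

In practice each row would also carry its agreed mitigation and/or contingency strategy, per the list above.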

At another point in the project, we can get fancier with a detailed risk registry to track more information including costs for contingency and mitigation, progress of mitigation strategies, post-mitigation Risk-Value targets, trend of total project Risk-Value vs goals, etc.

But right now, we are primarily concerned with identifying the Project-Killer risks (if there are any) so we can doom-say with supporting data in hand, and get the attention/funding for mitigation strategies and contingency plans.

The “Trick”

In the face of a large number of risks, a visualization tool is helpful.  We need to be able to see the leading risks, ideally clearly separated from the rest of the pack.

For the purposes of this illustration, let’s make the assumption that Likelihood and Impact have equal weight for our project, eg: the very likely occurrence of an inconvenient impact is as important as an unlikely occurrence of severe impact for a given scenario.

Then, let us consider the following formula using 100 randomly generated value pairs for L and I: Rv = (L + I)

Risk Value = Likelihood + Impact

A pretty standard distribution from randomly generated input values with a linear relationship.  This formula is fine for ranking the risks by our above assumption and we can make a “Top Ten” list easily enough.

But can we say that the 11th risk is significantly different from the 10th?  And so, would it be correct to not spend any time/$$$ mitigating that risk?  Or the 12th or 13th, for that matter?

Let’s try applying the following formula to the same input data: Rv = (I² + L²)

Risk Value = SQ(Likelihood) + SQ(Impact)

This time, we get to see both the ranking per the assumption we made AND we get to see groups of risks cluster or separate out from each other as the formula works to emphasize or de-emphasize the combined input values respectively.
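The clustering effect is easy to reproduce in a few lines; the 1–5 ratings below are assumed for illustration:

```python
def rv_linear(likelihood, impact):
    """Rv = L + I: ranks risks, but ties are common."""
    return likelihood + impact

def rv_squared(likelihood, impact):
    """Rv = L^2 + I^2: emphasizes a high rating on either axis,
    pulling extreme risks away from the middle of the pack."""
    return likelihood**2 + impact**2

moderate = (3, 3)  # moderately likely, moderate impact
severe = (1, 5)    # unlikely, but severe impact

# The linear formula cannot tell these two risks apart...
print(rv_linear(*moderate), rv_linear(*severe))    # 6 6
# ...while the squared formula separates them into different clusters.
print(rv_squared(*moderate), rv_squared(*severe))  # 18 26
```

Note the squared formula does more than break ties: it deliberately weights a severe rating on either axis more heavily, which is what spreads the pack apart on the chart.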

Our Top Ten Risks List is now much more obviously a “Red Risks List”.  We can more easily see where we should first look to spend our limited time and $$$ to actively work on improving the situation, where an adjusted plan to “eat-healthy and exercise” could potentially have wide benefits, where to plan contingencies rather than mitigation, etc.

Notes:

  • Learning to identify risks and judge their impacts and likelihoods relative to each other, the project, and the outside world is an acquired skill.  Practice.
  • When ranking risks, thinking short term vs. long term can change the rating.  Consider both.
  • Adjust the formula to best fit your industry/business regarding the relative importance of Likelihood and Impact.
  • Add a graph like this to your project dashboard, where you can drill-down to the groups and individual risks.
  • This is just one view on the data.  For example, another important view would be to show which risks have a high Likelihood OR a high Impact, regardless of the other value in the pair.

Conclusion

Do we need to care about every big and little thing that is spinning about in space?  What is really the risk?

We have our atmospheric shield that protects us from the vast majority of impacts with no effort from ourselves.  The ones that do get through do less damage than putting a hole through a roof, typically…

We need to gauge both the Likelihood and the Impact of converging orbits with each NEO.  NASA uses the Torino Impact Hazard Scale to categorize the overall risk for each NEO as: No Hazard, Normal, Meriting Attention by Astronomers, Threatening, or Certain Collisions.

Viewing the list of currently tracked NEOs against this hazard scale on NASA’s Sentry Risk Table, things suddenly don’t seem so concerning anymore (undetected objects aside).  But if there were something to worry about, it would show up clearly as a glaring red row amongst the blues, whites, and rare green – demanding attention.  And then, I think, we would suddenly care a lot.

Similarly, on your software project, not every risk is as important as another.  However, until you have a list of risks contributed to by all stakeholders, how can you say with confidence which are which?

Don’t leave it to someone else to identify and rank the risks to software quality and to the success of the test effort on your project.  Make sure that you are included and are participating – or your dooms might not get said.

And if, together, you can support your concerns around software quality with a data-driven approach, they will be less-likely to be “tuned out”.

All the risks can then be assessed objectively against each other, thereby aiding conscious, intelligent decision-making regarding which are the “Red Risks” and how testing can ultimately best help with mitigation – which might not be how you first thought it should.

 

Posted in  All, Planning for Quality, Risk & Testing | Tagged , , , , | Comments Off on Risk Clustering – Project-Killing Risks of Doom

Avoiding Common Test Automation Pitfalls

In an age where many organizations are under pressure to accelerate their software delivery to customers, test automation is becoming a necessity.  There are many reasons why it’s advantageous to implement a test automation solution as part of your testing strategy: it can help shorten development cycles, increase test coverage, and increase the speed and frequency at which some tests can be executed, thereby providing rapid feedback.  While these benefits can be realized, successful test automation in practice is challenging to achieve.

Every software project is different and, as such, every automation solution has to be tailored to your unique situation.  This means that what may have worked in some cases may not necessarily work in your circumstance. Without attempting to list every challenge you may have to overcome, this article will focus on some of the most common pitfalls that need to be navigated in order to increase the chances of success in your test automation effort.

Having unrealistic expectations
A common misconception within organizations is that test automation is the “silver bullet” for improving quality, reducing testing efforts and reducing time to market.  While automated testing can have positive impacts in these areas, setting unrealistic expectations is often the most common pitfall leading to project failure.  Having expectations regarding the return on investment (ROI) without performing an informed analysis of the benefits that automation can bring to your project can lead to stakeholder dissatisfaction.  Additionally, placing unrealistic expectations on what can be achieved by the automation tool is a common pitfall.  Prior to the start of any test automation effort, it is important to evaluate one or more toolsets to understand how they will interact with the application under test (AUT).

The lack of clear objectives
Starting an automation project without clear objectives and a prepared plan is like starting out on a road trip with no idea why you are going, where you are going or how you will get there.  This may seem obvious, yet it’s surprising how many automation efforts begin without any clearly defined objectives that answer the questions: “Why are we undertaking this automation effort?” and “What do we hope to achieve?”.  A well-defined objective should be realistic, achievable and measurable.  Examples of clear objectives would be “test on different operating systems”, “reduce time to market” or “increase confidence in each build”.  Realistic, well-defined goals are easier to measure and, in turn, have a higher chance of being achieved.

The lack of a plan
When starting a new project, we often have the tendency to want to jump in immediately with the initial setup and coding activities while ignoring the necessary planning activities.  The test automation plan and strategy documents frequently get neglected in the efforts of saving time or because there is no perceived value in writing them.  The risks that result from not having a plan include the potential for scope creep, ambiguity on entry/exit criteria for various phases and budget or time overruns.  It’s no surprise that test automation efforts are more likely to succeed when they are well planned.

The plan should include the scope of the automation project and realistic expectations that can be achieved within the specified time frame.  The plan should also help answer the following questions:   “Which areas of the application under test will be automated?”, “What does the framework architecture look like?”, “What is the scope of testing?”, “What will be delivered and when?” and “Do we need any specialized skills?”.  The best plan is only a guide or rough blueprint as one cannot predict all contingencies or changes that arise from the many moving parts within software development.  It is for that reason that the plan should be treated as an active document that is continuously updated as the project changes.

The lack of a framework
A good automation architecture or framework is what gives automation its real power and value.  A framework outlines an overall structure for the various parts of the automation solution.  Having a framework is not the solution in itself; it has to be designed for maintainability in order to provide value.  If the maintenance is cumbersome, the updates to the framework or specific tests will not occur whenever changes are needed and this often results in a failed solution.

A well-designed framework needs to be built on top of the automation tool in order for the solution to be robust, well structured, reusable and modular while employing abstraction.  Separating the test data, reporting libraries, tool-specifics, utility libraries, domain-specifics, scripted tests, page models and object repositories are some of the levels of abstraction that will allow you to build reusable and maintainable code.  This will ensure that the framework is maintainable for future updates which ultimately will allow any member of your team to easily identify what needs to be revised based on the changes in the AUT.  Investing the time to create an automation architecture plan which outlines the design of the framework, independent of the test automation plan discussed earlier, will contribute to a successful automation effort.
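The page-model layer of abstraction described above can be sketched in a few lines. This is a simplified illustration, not a specific tool’s API: the driver here is a self-contained stand-in stub, and all class, method and locator names are hypothetical.

```python
class FakeDriver:
    """Stand-in for a real browser driver, so the sketch is runnable
    on its own. A real framework would wrap an actual automation tool."""
    def __init__(self):
        self.actions = []

    def fill(self, locator, value):
        self.actions.append(("fill", locator, value))

    def click(self, locator):
        self.actions.append(("click", locator))

class LoginPage:
    """Page model: owns the locators; tests never touch them directly.
    If the UI changes, only this class needs updating."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.fill(self.USERNAME, username)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# The test reads at the level of user intent, not locators:
driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.actions[-1])  # ('click', '#submit')
```

When a locator changes in the AUT, any team member can see at a glance that only the page model needs revising, which is exactly the maintainability the framework exists to provide.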

The lack of early deliverables
Management support for the test automation effort is imperative for its success due to the time, effort and costs associated with automation.  As such, any early feedback that demonstrates the return on investment is always needed.  The development effort should be broken down into bite size deliverables that provide early ROI rather than attempting to build the full library of reusable scripts at the onset of the project.  Achieving early wins with the delivery of meaningful results will help build confidence in your team and boost management support for the initiative.  One successful strategy is to deliver a small subset of tests (such as smoke tests) early, prior to moving on to various modules in the full regression suite.  This will also allow you to receive feedback on the development activities and, at the same time, demonstrate the usefulness of test automation and give your team the opportunity to provide value early into the project.

The benefits of improving quality, reducing testing efforts and reducing time to market are very much achievable with test automation.  Having clear objectives, realistic expectations and an automation strategy and plan in place that outlines how you will deliver a maintainable framework will increase your chances of success.  This demonstrates the importance of taking the time to provide considerable thought and planning around the automation solution and process prior to commencement of the project.  In addition, learning from your past experiences and those of others will help you navigate through various challenges and steer the automation project to success.

Posted in  All, Automation & Tools, Planning for Quality | Tagged , , | Comments Off on Avoiding Common Test Automation Pitfalls

Creating Success in a Distributed Agile Team

Agile development methodologies focus on highly collaborative team environments that are continuously planning, developing, testing, integrating and delivering functional software. In such environments, constant feedback and communication are critical and there are small margins for errors and delays. In the ideal scenario, an Agile team would be collocated to allow for face-to-face interactions between all team members, because this is historically thought to be the most efficient and effective method of conveying information. However, in today’s global marketplace, collocation is not always possible and this is the situation I have found myself in as a tester on an Agile team that spans two coasts and operates in three different time zones.

Here are some of the things I found were important to consider while working on a distributed Agile team:

  • Being mindful when choosing resources. Not everyone is cut out for the rapid pace or the collaboration requirements of a distributed Agile team. Flexibility and the ability to multi-task are critical to the continuous delivery of Agile development and people who are overwhelmed by multiple, often competing, tasks could flounder. Resources should also be excellent communicators who are able to speak confidently and without hesitation. They should be comfortable with raising a hand when something is unclear and willing to accept help instead of trying to figure out everything on their own.
  • Strong leadership is crucial. Every team needs good management but with the multiple inherent challenges associated with distributed Agile teams, it’s especially important. Schedules need to be well communicated and maintained. Tasks need to be clearly outlined, assigned and followed up on. Team members must be engaged and communication barriers should not exist between team members. The most important task for an Agile leader is removing bottlenecks and roadblocks to keep the path clear for all team members.  These are all typically tasks that belong to a project manager, but might also fall on a development or testing lead.
  • Respecting time differences. Agile methods make use of frequent team meetings for daily check-ins, requirement refinement, client demos and retrospectives. Having all team members present at these meetings is important to eliminate the need for reiterating important information, which can quickly consume productive time. Maximum team participation is also vital to ensuring the team shares a sense of purpose and the same level of commitment. Creating a schedule that is mindful of the different working hours of all team members can be tricky but, once in place, it is an invaluable asset.
  • Making use of technology. Teams that share a work space make use of face-to-face interactions, white boards and sticky notes, but there are many collaboration tools available that distributed teams can use for communicating. Skype and Google Hangouts are easy ways to stay in touch and also allow for group chats.  Group chats enable everyone to see what questions are being asked and what answers are given, ensuring the team is all on the same page. Tools like Google Drive make it easy to share documents and files that need to be updated by multiple team members. Screen sharing and screen capturing tools make it easy to communicate complicated functionality and bugs. There are many innovative online tools out there for easy collaboration; which one is used is not important. The key is to find the tools that work best in your situation and make using them a habit.
  • Removing roadblocks. Making sure team members have access to all the tools they need seems like an obvious component but this could be anything from having the right login credentials for a collaboration tool, to having admin access to back-end systems. Often, as a tester who is in a differing time zone than the development team, I have found myself waiting for someone to make a small configuration change to my environment. Once I pointed out how easily I could make the change myself, I was given access. Having team members who can perform certain cross-functional tasks can be an easy way to save unnecessary downtime.
  • Creating a strong team dynamic. Ensuring there is not an “Us vs. Them” mindset is imperative to the success of any team. It could be on-site members vs. off-site members, or developers vs. testers, but any mindset that pits a team member against another could cripple the effectiveness of the team.  Team members should be encouraged to get to know each other to better understand the challenges each are facing. Creating a culture of inclusion and trust between all team members, regardless of role or location, will improve team cohesion, which will in turn improve the effectiveness of the entire team.

With the prevalence of both distributed teams and Agile development methods on the rise, we will inevitably run into more and more distributed Agile teams.  While it can be a challenging environment, it’s definitely not impossible to overcome the perceived obstacles. It just requires more careful consideration and critical thinking. With the right people, the right mindset and the right planning, a distributed Agile team can definitely be successful.

Posted in  All, Agile Testing, Team Building | Tagged , , , , | Comments Off on Creating Success in a Distributed Agile Team

Write a Test Strategy – What Choice(s) Do I Have?

Project Manager: “Your test strategy looks good.”    
    Test Lead: “Great. I will get started on the next steps. Do you have the finalized delivery schedule?”
Project Manager: “I will send it to you shortly.”    

What is a Test Strategy?

A test strategy is the top-level artifact that gives visibility to how testing will be approached for a given project, as agreed to by the stakeholders, typically addressing:

  • Purpose / scope
  • Quality criteria
  • Assumptions & constraints
  • Test approach
  • Inclusions / exclusions
  • Types of testing
  • Issues & risks
    Test Lead: “I don’t believe this. We are not going to have any time to test properly!”
Colleague: “Your new project?”    
    Test Lead: “Yes. My test strategy needs 3x what testing is now scheduled for. I can maybe cut a few things…I know I can’t just ask for more time or people…but… Oh, it’s so frustrating.”
Colleague: “Sounds like you have a challenge for sure. I agree you can’t just say you won’t have time to test everything. Have you included different test techniques and minimized their coverage overlap?”    
    Test Lead: “Yes. I can look at it again and maybe thin it down a bit more. But I don’t know if I can get it to fit. I need the PM to understand we can’t just not test some things.”

Quality Criteria and Acceptance

Quality criteria are those things that must be true for acceptance to be possible. They can be derived from risks, scope, and organization standards (“quality bar”).

How do we know, or have confidence, that the quality criteria have been met for a given release? And, is 80% confident “good enough”? 90%? 95%? 99.99999%?

Colleague: “Why do we test at all?”    
    Test Lead: “To find all the bugs, of course.” <smiling>
Colleague: “What?” <laughing>
   
    Test Lead: “I know. When have we ever been able to do that, right?”
Colleague: “Yes. I like to think about it in terms of risk mitigation. This lets me propose options or choices for examination.”    

Testing Objective/Motivation/Priorities

As testing is virtually always constrained (not enough time, not enough people, not enough infrastructure, etc.), it is vital that the testing effort is able to prioritize what is critical, and what is not, so the scarce resources can be applied in the most valuable manner possible.

One of the crucial components to being “smart” in a constrained situation is to be able to correctly decide or select which things are “must-haves”, which are “should-haves”, and which are “nice-to-haves”.

Having a mission statement, such as the following, can help keep your focus on what is important, reveal what is less so, and give visibility of the same to the rest of the organization:

“The mission of the project’s Test Team is to undertake such test-related activities within the constraints provided to them by the organization to maximize mitigation of the likelihood of a failure of the software system from occurring after deployment that would impact the business.” *

* This example is for an embedded test team on an enterprise scale project. Tailor your own mission statement to fit your team’s scope of responsibility for quality within your project and organization.

Colleague: “Are your budget and schedule already firmly set?”    
    Test Lead: “Pretty much I think. I could try to work with the PM on those though. I have before, when I had a good case.”
Colleague: “That’s good. Then you can put together 3-4 options to show the tradeoffs of sticking with the current schedule versus shaking things up a bit.”
   

Finding Your Best Option

In developing your test strategy, you might use a table like the following to compare the different flavours or options that you identify:

Comparison Criteria                       | Option A | Option B
------------------------------------------|----------|---------
Cost (during project)                     |          |
Coverage (breadth & depth of testing)     |          |
Schedule required (elapsed time)          |          |
Resources required (avg / max burn rate)  |          |
Future re-use of test assets              |          |

Tailor the above list of comparison criteria to your needs, perhaps including others like: test auditability, repeatability of test steps, early involvement/feedback, manual vs. automated tests ratio, test preparation vs. test execution effort ratio, Total Cost of Quality (slide 5), etc.

Note: In creating your options, you are really trying to find the “best” option. Therefore, any initial option can be “looted” for pieces to be merged with others on the way to creating that final, agreed approach.
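One lightweight way to make such a comparison concrete is to weight the criteria and score each option against them. In the sketch below, the criteria, weights and scores are purely illustrative assumptions; your own comparison table and agreed weightings would drive the real numbers.

```python
# Weighted decision-matrix sketch for comparing test strategy options.
# All weights and scores are illustrative assumptions.

criteria_weights = {
    "cost": 0.3,       # cost during project (higher score = cheaper)
    "coverage": 0.3,   # breadth & depth of testing
    "schedule": 0.2,   # elapsed time required (higher = faster)
    "reuse": 0.2,      # future re-use of test assets
}

# Scores on a 1-5 scale, where higher is better for the project.
options = {
    "Option A": {"cost": 4, "coverage": 2, "schedule": 5, "reuse": 2},
    "Option B": {"cost": 2, "coverage": 5, "schedule": 3, "reuse": 4},
}

def weighted_score(scores):
    """Combine per-criterion scores using the agreed weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in options.items():
    print(name, round(weighted_score(scores), 2))
```

Whatever the winning option, pieces of the losers can still be “looted” into it, per the note above.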

    Test Lead: “Thanks for letting me vent a bit. Now I’ve got to go crunch some numbers.”
Colleague: “No problem. Good luck and let me know if you want to do a walkthrough before presenting.”
   

Test Planning Framework

Consider the following steps in your test planning process:

  1. Review the project scope, delivery schedule, and constraints
  2. Determine if/how testing activities are able to assess and/or mitigate the risks (to quality, the technical solution, and the project itself)
  3. Capture the needs of the stakeholders in the testing effort
  4. Confirm the test artifacts required and the level of formality
  5. Identify available team members, tools, environments, etc.
  6. Outline reasonable options for the project’s test strategy
  7. Support the options with test effort estimates and test project schedule/resourcing data
  8. Present test strategy options for review and input by project team and stakeholders
  9. Iterate on feedback and converge to the selected/agreed approach

Gaining and maintaining this agreement between the stakeholders for the duration of the project (as things progress and change) will enable testing activities to be appropriately prioritized so as to successfully test the right things at the right time within the project constraints.

Summary

Test Strategy Co-dependence

The Test Strategy, the Test Estimate, and the Test Project Plan are all co-dependent, which causes a cyclical dependency between the three artifacts. ie: a change in any one of the three artifacts should result in a review and possible update of the others.

And, when these three artifacts are driven by risk, the activities captured will be strategically prioritized, making it straightforward to decide what tasks could be most confidently dropped if there was a sudden need to do so.


Posted in  All, Estimation for Testing, Planning for Quality, Test Planning & Strategy | Tagged , , | Comments Off on Write a Test Strategy – What Choice(s) Do I Have?

Critical to Value: Strategic Metrics for Quality Management

The content in this post is based on the exceptional book from ASQ Press, Design for Six Sigma as Strategic Experimentation by H.E. Cook.  (ISBN 0-87389-645-9) pub. 2005.

While it is insightful for our profession to engage and embrace globalization, environmental sustainability, social responsibility, and the soft skills characteristic of advanced emotional quotient mentalities, it is imperative that we understand the purpose of our profession.  We must advocate and promote value and protect against losses.

Cook’s excellent reference identifies strategic metrics for what is termed as “Total Quality”, which aspires to represent quality as the net value to society (complementing Taguchi’s definition of Quality Loss as loss to society).  Where needed, I have augmented Cook’s list with additional metrics I have encountered in my professional activities.

Metrics Chart

Fundamental Metrics:
In a competitive environment, there are always pressures to increase the value, reduce the cost, and advance the pace of innovation.  In litigious situations, risk management is established and sustained through legal and regulatory compliance, contractual fulfillment, corrective and preventive measures.

  • Value to customer
  • Cost (variable, fixed, investment)
  • Pace of innovation
  • Legal and regulatory compliance
  • Contractual fulfillment
  • Corrective and preventive measures

Bottom-line Metrics:
The outcomes of these fundamental metrics will be realized in what Cook terms “Bottom-line” strategic metrics.  These extend beyond the Quality function to influence financial and governance decisions.

  • Working capital
  • Market share
  • Price
  • Return on investment
  • Internal rate of return
  • Breakeven time
  • Legal charges and penalties
  • Inclusion in/exclusion from markets or industries
  • Contingency requirements (based on assessed risks)

Cook uses the term “Critical To Value” (CTV) as the key measure to forecast changes to cost, customer satisfaction, and working capital.  The attributes of products and services can be independently analyzed to determine the extent to which they are Critical to Value.  Cook segments CTV into four Value Curves.

Value Curve
  • Smaller is better (SIB)
  • Larger is better (LIB)
  • Nominal is best – with scaling factor (NIB1)
  • Nominal is best – no scaling factor (NIB2)

As part of each value curve, the analysis includes a baseline value (V0) and an ideal value (Vp). In the LIB case, the ideal specification is at infinity, reflecting the infinite possibilities for value.  These calculations are complicated by the presence of uncertainty and variation, which requires advanced mathematical formulae beyond the scope of this posting.
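Cook’s full treatment accounts for uncertainty and variation, but the basic shapes are close cousins of Taguchi’s quadratic loss functions.  A minimal sketch of the three main curves (the function names and the scaling constant `k` are illustrative, not Cook’s notation):

```python
def loss_smaller_is_better(y, k=1.0):
    # SIB: the ideal value is zero (e.g. defect count, response time)
    return k * y ** 2

def loss_larger_is_better(y, k=1.0):
    # LIB: the ideal value is at infinity, so loss falls off as 1/y^2
    return k / y ** 2

def loss_nominal_is_best(y, target, k=1.0):
    # NIB: loss grows quadratically with deviation from the target value
    return k * (y - target) ** 2
```

In each case the loss at the baseline value reflects the current attribute level, and the loss shrinks toward zero as the attribute approaches its ideal.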

The high-level lesson is to understand how to establish the CTV measures for each attribute of a product or service, and translate those measures into tangible metrics.


UAT as a Gateway to Enterprise Agile

Let’s admit it. Projects that deliver enterprise solutions have barriers to being agile, especially those that are based on purchased packages (commercial off-the-shelf packages, or COTS). In my experience I’ve wrestled with at least two of those – the technical barrier of test automation capability and the organizational barrier of having a single Product Owner that truly speaks for the whole of the enterprise.

In this post, ‘user acceptance testing’ (UAT) represents that last-gasp critical-path test activity that enterprise projects typically run just before going live. It is generally an entry condition for migrating the solution into the production environments. The theory is generally that the ‘business unit’ needs to understand and approve of what they will be using post go-live and that this is the last time to check that the combined new/revised/existing business processes are supported by the new solution, and vice-versa as the case may be.

The ‘test last’ bit – acceptance not being done during the delivery sprint(s) – doesn’t feel ‘agile’. So how could user acceptance testing be the gateway to agile for enterprise solution teams, beyond the general observation that a UAT window in an enterprise project is usually 2-4 weeks, the same duration you might expect of a typical scrum sprint?

Gateway Characteristic #1: Whole-team Approach

Briefly, a UAT phase run in a critical-path manner is an ‘all hands on deck’ phase. The project or test manager usually rallies the team around the UAT cause in some sort of a launch meeting, where the people that are doing the testing meet the people that are supporting them doing the testing and everyone in between. Everyone participates in this phase – trainers, the business units that will receive/use the new solution, IT personnel that will support the solution post go-live, etc.

A wise test manager will in fact plan UAT this way as well, by hosting workshops with the business to identify and order the business process scenarios that would be in scope for the UAT. To avoid creating test scripts that repeat the content of either training material or business process descriptions, the intent of these workshops is to identify the testing tasks/activities that need to be done and the order in which they should be done for maximum effect.

Does that sound a bit like sprint planning to you?

The whole team also comes into play in the problem/issue workflow, where there is generally a daily triage of discovered problems that need to be described, classified for severity, and assigned to a resolution provider. Generally everyone participates in these triage sessions for cross-communications purposes. The problems are not to be resolved in this meeting, just raised, classified and then picked up by someone that can do something about it.

Could this meeting be extended by 15 minutes so that everyone also mentions what they tested yesterday and what they will test today, that is, in addition to what problems/barriers/issues they are running into?

Last point in this section – most of the companies that I’ve managed UAT on behalf of establish a “UAT testing room” that is a co-opted training facility or meeting room so that all the business testers can be co-located when they are actively testing. Hmmm. Sounds like what we generally arrange for an agile team.

Gateway Characteristic #2: Intensive Business Involvement and Leadership

It is common to warrant that business involvement and leadership exist by getting the UAT plan signed off before the phase and a report of the results signed off afterwards. That might be necessary, but it’s not sufficient for any UAT. It is critical that there be business people actively testing – in fact, they are the only ones that can perform the testing and acceptance under the generally-accepted governance umbrella that the UAT operates within. Since there are always problems detected in UAT, the business leadership needs to pay attention and get involved when their people might get discouraged because of what they find, or believe they are finding, as the case may be.

It’s a nervous time for the business people involved in testing, and for their leadership to dump all the responsibility and accountability for the acceptance decision onto them is counter-productive and harmful. First, these aren’t professional testers – they are people from the business units seconded to do this testing. Second, they might not even like testing. Third, they might still have to do their regular jobs during the testing. The right mix of leadership from the business unit and from the project team implementing the solution is required to support these individuals.

The common characteristic is that both agile projects and UAT require that business leadership touch to be successful.

Gateway Characteristic #3: There is a Backlog

Most of the time – but I suppose not all the time – there is a list of problems/defects that were discovered in earlier test phases that will be resolved at some point during UAT. Those problems are not severe enough to have prevented UAT from starting, but at the same time, they do still need to be resolved. In addition, the UAT test plan lists business scenarios that need to be run over the course of UAT. These two lists are combined and generally otherwise ordered so that the time spent in UAT is efficient and effective.

This is a backlog and I’ve seen many teams successfully use burn-down/burn-up charts to communicate their test progress. Information radiators such as dashboards are commonly posted to either project room walls, project repositories, or both. Discovered problems/issues are incorporated into the backlog using a process similar to the way that an agile team deals with problems/issues discovered within a sprint.
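A rough sketch of such a combined, ordered backlog and its burn-down series (the data structures and field names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    kind: str       # "defect" (carried over) or "scenario" (planned UAT run)
    priority: int   # 1 = most urgent, as decided in triage

def build_uat_backlog(open_defects, business_scenarios):
    """Combine carried-over defects and planned business scenarios into
    one backlog, ordered by the priority assigned in triage."""
    return sorted(open_defects + business_scenarios, key=lambda item: item.priority)

def burn_down(total_items, completed_per_day):
    """Remaining-work series, ready to plot as a burn-down chart."""
    series = [total_items]
    for done in completed_per_day:
        series.append(series[-1] - done)
    return series
```

Each day’s triage re-orders the backlog, and the remaining-item count feeds the chart posted on the project room wall.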

Gateway Characteristic #4: The Backlog is Ordered

When a problem/issue is discovered and goes through triage, it is ordered along with everything else that has to be done within the planned UAT window. The agile characteristic of this is that everyone has participated in the prioritization – business and technical personnel together make these decisions.

Gateway Characteristic #5: Business Acceptance of Detected Problems Happens Within the Phase

Detected problems go through their entire life cycle within the UAT window. They are identified, triaged, fixed, re-tested, resolved and closed before the phase ends. If not, then the problem is serious enough to either extend UAT or to stop it outright and halt the go-live. Consequently, enormous effort is put in to make sure that there are adequate resources on call to get any problem resolved – training materials, software, business process, environment – any root cause whatsoever will generally get all the attention it needs to be taken care of.

As a gateway characteristic, this is exactly what the business people involved in an agile sprint are asked to do – identify what they need, compare what they are given to what they need, back-and-forth, reach consensus on what the final product is, accept it and move on to the next item. Business units therefore DO have experience with this kind of acceptance regimen if they’ve been through a UAT.

Gateway Characteristic #6: The Result is a Potentially-shippable Version of the Solution

At the end of both a sprint and a UAT phase, there is a potentially-shippable version of the solution.

Concluding Thought Experiment

Generally a COTS package is selected after some period of analysis and review – a gap analysis, in effect – and then the implementation is planned around closing those gaps. What if the needs and gaps were instead planned to be closed in a series of, say, 10 UAT-like phases? The two highest barriers remain the test automation problem – since many of these products do not automate well – and the difficulty of filling the Product Owner role in large projects.


Software Developers and Testers: Friends or Foes?

I have often observed situations where developers and testers working on the same project seem to be in a constant struggle with each other. But why – why are the two roles so prone to working against each other rather than together? My personal view on the subject has changed over the years. In this article, I’d like to share my thoughts in the form of a story that I’m sure everyone who has worked in the software industry can relate to. The story is a brief overview of my time in software development, highlighting the events that have shaped my view of how teams should interact to give themselves the highest probability of success. I’ll share some of the key items I have taken away from my experiences so far, and hopefully they will provide tools that both developers and testers can use to positively affect their work environments and the way software is delivered.

Over the past 18 years, I’ve been involved in small and large scale software projects: starting as a tester, moving on to a developer, off to team lead and so on. Through this progression, I have been able to look at the implementation of new software systems from many different views. Most importantly, I have been fortunate enough to experience software development both as a developer and as a tester. In the beginning of my career, when I was working as a tester, I could not believe that the developers were even trying to write working code. If they were, how could they have possibly thought that they had performed well in their roles? Moving on to development, I was astounded by how the test team would try to tell me how I should fix a defect and, even worse, that they thought some of the items they found even qualified as defects. There was a lot of frustration on both sides and, as frustration built up and deadlines loomed as they always do, it often escalated to a point where there were open conflicts.

Why would I want to do this over and over again? At the time, I was not sure. I pushed through and, throughout the years, I worked on both good and bad projects. I didn’t really know why some projects were good and some bad, or what criteria I was even assessing them on. They were all delivered, and customers were happy, yet some were still better than others. One of those “good” projects occurred just over a decade ago. It is only looking back at that project now that I can say it was the moment where my views on the roles of tester and developer changed. After all, keep in mind, I was a tester who criticized developers, and I was a developer who criticized testers.

On this particular project, I was one of two lead developers. As most projects do, we had a development team and a test team. Same old stuff; I had seen it many times before. But something was different this time. Please don’t get me wrong; the project was not perfect, and as with any project, the team members didn’t all get along at all times. The biggest difference – one I can only identify now, looking back – between that project and the previous projects I had been on was that everyone was treated as an equal and, as a result, everyone respected each other. During good times, we celebrated each other’s successes. During the bad times, we were open and honest and picked up our fellow teammates to push through. All the while, there was no judging and no accusing – just a group of people working together, each using their individual strengths for the good of the team. The developers didn’t get upset when a defect was logged; instead, they looked at it as someone helping them make the product better. The testers understood the complexities the developers were facing and simply informed them of discrepancies between the product and the requirements. It was a wonderful experience, not because we were successful in delivering the product (which we were), but because we were all in it together and all took responsibility for the project. We looked out for each other and cared about every team member. Successful delivery was never even a question in that environment – everyone just took that part for granted. What we did not take for granted was each other.

In the title, I asked the question “Friends or Foes?” Hopefully, you were able to draw the conclusion that I believe that, with those two roles on a project – developer and tester – being friends will provide the best outcome and the most favourable view of your career as you look back. I described my sample project as one that had a development team and a test team, but in truth it was one team: a project team that consisted of development professionals and test professionals. It is up to each and every team member to work to make it a single united team; it won’t just happen, but it is a rewarding experience when it does.

To sum up, I want to leave you with a few lessons that I took away from that experience that may hopefully help you move forward in the creation of that one team approach to software development projects. I believe that these hold true regardless of the role on your particular project. As a disclaimer prior to continuing, please note that these will all seem like very simple things – common sense even. One key lesson that I have learned over the years is that what appears common sense is almost never seen to be common practice.

  • Remember that everyone on the team has the same goal.
  • Do not attempt to perform the roles of other individuals, or tell them how to perform their role.
  • Trust your team members and value what they do.
  • Learn what others on the team do to better appreciate the value they bring – all team members bring value.
  • Place yourself in somebody else’s position to better understand their struggles and see how you can help them.

And finally, I believe that the most important lesson is…

  • Respect everyone on the team.

Into The Unknown: Entering The World Of Software Testing

I recently joined PQA as a quality detective and, being a novice tester who has just entered the field, I would like to share some thoughts based on my experience of the past couple of weeks. I have organized my ideas into this question-and-answer article. Of course, the list of questions I came up with is far from complete; however, I hope this short article at least offers some interesting leisure-time reading about software testing from the perspective of a new tester.

Does the “real” world of software testing correspond to your expectations, or what you thought it would be?

I am currently working as a software tester focused on testing webpages and digital content. Although I studied electrical engineering, and had some limited exposure to software testing through previous research projects, I really did not know that there was so much involved in software testing. Like many other people, before entering this field I viewed software testing, or quality assurance, as simply debugging a piece of code and performing some scripted testing tasks to ensure the code worked. Now I understand that tasks like these are really the responsibility of a software developer. In the “real” world, software testing means much more.

I think one reason the role of a software tester is viewed this way is that the industry itself has not really defined a standard for becoming a good tester. People are not trained in school to become software testers after graduation. Software testing requires a broad knowledge base and years of experience and practice. It is easy to produce a bug, but it can be very hard to catch one.

In reality, a software tester is a critical thinker. Because a piece of software is an intangible product, a software tester needs to be very familiar not only with the function of the software but also with its business requirements. While ensuring all functions work properly, a software tester needs to think beyond the software he/she is testing, consider how it will be used in the “real” world, and come up with testing scenarios that reproduce those situations. There are many techniques, for example exploratory testing, that can help testers achieve this. Last but not least, I was surprised to find out that soft skills, such as communication and interpersonal skills, are in some ways more important to a tester than technical skills, especially at a service company like PQA.

Although there are many challenges ahead, I believe I will enjoy this continuous learning experience.

Is what you’ve done so far more or less technical than you expected?

Currently, most of my work is about testing websites. Although the testing itself does not require a lot of technical knowledge, it does require great attention to detail. Since I am new to the field of software testing, the tasks that have been assigned to me are less technical than I expected. However, while I perform daily testing tasks, I am also learning different testing techniques and testing tools, such as xBTM, exploratory testing and agile testing. I am working to ensure quality in the role I perform so that I will be in a better position to contribute to improving the quality of a product in the future.

How do you see the relationship between developers and testers?

Developers and testers should be working as a team. All members of the team need to have a clear understanding of the product and their responsibilities. Everyone should work toward a common goal, which is to improve the quality of a software product.

What “bugs” or issues have you encountered yourself when using software?

Some webpages contain a lot of Flash animation content. This often slows down the browser and, in many cases, causes it to crash. Technically, it is not a bug from the perspective of the website design; in reality, though, it results in an unhappy user experience. Personally, such websites give me a hard time when browsing through their content, and I would avoid visiting them again if possible. It is thus a tester’s responsibility to consider all the scenarios in which a product will be used in the “real” world. Another issue I have encountered was related to memory leaks. Some computer games run very smoothly at first; over time, however, they start to consume too much system memory without releasing what they no longer use. I found this very annoying because it slows down the system and makes the gaming experience quite unpleasant.

How are some of your newly acquired skills and knowledge applied in your day to day living, and what things are you doing to become a better tester?

I believe all the skills I have acquired as a tester are transferable to my day to day living. A tester always looks for things that can be improved. I am able to use some of the testing tools to become a better organizer and planner in my life. For example, I can identify my goals in life and outline the steps to achieve them. Currently I am still in the learning stage, and I want to sharpen my testing skills so that I can become an outstanding service deliverer in the near future.


The Science of Testing

The best way to approach a problem is typically to look at it from different angles, to turn it over and to discuss it until a solution can be found.  Similarly, it is important to try to bring different perspectives into your work to develop your skills and extend your toolbox.  This article explores the parallels between software testing and science, and highlights what testers can learn from the scientific method.

What is the Scientific Method?

The ultimate goal of all sciences is knowledge, and to acquire new knowledge, scientists make observations and analyze data – activities we normally refer to as research.  The scientific method is simply a collection of techniques that scientists use to investigate phenomena and gain new information.  For a method to be considered scientific, it must be based on gathering empirical evidence.  Empirical means acquired through observation or experimentation – making claims without experimental evidence is science fiction, not science.

Here, we can already draw our first parallel to testing.  We test software to try to learn how it works; like a researcher, our goal is to gain new knowledge.  If we already knew everything about the software, there would be no reason to test it!  When we test software, we are in fact experimenting and observing the results.  Testing is simply gathering empirical evidence.  Hence, we can draw the logical conclusion that good testing adheres to the scientific method!

Simplified, the scientific method involves the following workflow:

1.  Collect data through observation
2.  Propose a hypothesis and make predictions based on that hypothesis
3.  Run experiments to test the hypothesis

If the experiments corroborate the hypothesis, additional predictions can then be made and tested.  If the experiments instead refute the hypothesis, it is necessary to go back and propose a new hypothesis, given the additional knowledge gained from the experiment.

A trivial example would be:

1.  We observe a mouse eating cheddar cheese.
2.  Based on this observation, we propose the hypothesis that our mouse will eat all sorts of cheese and, in particular, we predict that our mouse will also eat Swiss cheese.
3.  We give our mouse Swiss cheese and eagerly wait to see if the cheese will be eaten.

If the mouse eats the Swiss cheese, our hypothesis has been corroborated and we can predict other consequences, for example, that the mouse will also eat goat cheese.  If the mouse does not eat the Swiss cheese, we have to go back and suggest a new hypothesis.  Maybe the mouse only likes cheese without holes in it?

The scientific method is cyclic and dynamic; it involves continuous revision and improvement.  Based on observations, a hypothesis is proposed and the consequences of that hypothesis are predicted.  Experiments are set up and run to test the hypothesis, and the results are evaluated and used to propose a revised – and improved – hypothesis.
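The cycle can be sketched in code.  A toy version, using the mouse-and-cheese example above (the functions and the “hidden truth” of the mouse’s taste are invented for illustration):

```python
def investigate(hypotheses, experiment):
    """Try hypotheses in order, keeping the first one the experiment fails
    to refute.  Corroboration is not proof -- a later experiment may still
    refute the surviving hypothesis."""
    history = []
    for hypothesis in hypotheses:
        corroborated = experiment(hypothesis)
        history.append((hypothesis["claim"], corroborated))
        if corroborated:
            return hypothesis, history
    return None, history

# Hidden truth of the world: the mouse refuses cheese with holes in it.
def mouse_eats(cheese):
    return cheese != "swiss"

def feed_the_mouse(hypothesis):
    # Each hypothesis predicts a cheese the mouse should eat; feed it and watch.
    return mouse_eats(hypothesis["predicted_cheese"])

survivor, history = investigate(
    [{"claim": "eats all cheese", "predicted_cheese": "swiss"},
     {"claim": "eats cheese without holes", "predicted_cheese": "goat"}],
    feed_the_mouse,
)
```

The surviving hypothesis is only corroborated, never proven – exactly the posture a tester should hold toward the claim that “the software works”.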


Figure 1: The scientific method.  Based on observations, a hypothesis is proposed and a prediction is made.  Experiments are set up and run to test the hypothesis, and the results are evaluated and used to propose a revised hypothesis, and so on.

How does the scientific method apply to software testing?  Let’s assume we are testing a simple text editor.  The workflow of the scientific method maps to software testing as follows:

1.  Learn the product and observe how it behaves
2.  Identify risks and predict potential failures
3.  Execute tests to reveal failures

Based on the results of our tests, we will identify other risks and use this new knowledge to design additional tests.  The process of testing a software product has striking similarities with the workflow typically adopted by scientists. But does software testing enjoy the same level of credibility as science?

Credibility

The word science comes from the Latin scientia, meaning knowledge, and when we talk about science, we refer to a systematic search for, and organization of, knowledge.  Typically, the word science is associated with authority, expertise and – last but certainly not least – credibility.

Whether we find something credible or not depends on:

1.  What we know about the issue – evidence
2.  How compatible something is with our own world view
3.  The reliability of the source
4.  The consequences of accepting or rejecting the issue

We are often more likely to believe statements if we have little knowledge of the topic – we simply do not have any counter evidence.  Some sources are also seen as more credible than others; if we read something in the morning paper, we are more likely to believe it than if it is posted on Facebook.

In testing, what we know about the issue equates to our test result.  How compatible the behaviour of a piece of software is with our prior experiences has an impact on our expected test result, and therefore might make us biased.  The reliability of the test result depends on who tested the product and how it was tested.  Finally, there may be direct consequences of reporting or rejecting any particular bug, which could affect our objectivity.

Is the testing done at your workplace seen as credible, and who assesses that credibility?  The first step in increasing test credibility is to look at the factors that will raise the likelihood that we will believe in both the process employed and the actual result obtained.

The Science of Testing

What characterises science is that it makes falsifiable claims, whereas pseudo-science, or non-science, typically makes claims that cannot be falsified.  Here, we can draw a second parallel between science and testing.  Software testing that embraces the scientific method tries to find ways in which the software fails rather than trying to prove that the software works.  How do we recognise the testing equivalent of pseudo-science? How can we protect ourselves, as testers, from being fooled by false claims? The best way is to nurture our inner scientist, and strive for an unbiased and reflective approach to testing.
