It’s Always Risky Business with Web Applications

There is a sign on the front door of our office building. It’s been there all winter. It says something like this: “Would the last person leaving the building please make sure the doors are locked? They sometimes stick”. Sometimes it seems like web application security is handled the same way. I can almost picture a project manager standing up in a team meeting and saying, to nobody in particular, something like this: “We know we may have a security problem with our site, could somebody please check it before we go live?”

Losing your customers’ data is not only embarrassing, it can also be expensive, both in terms of direct financial penalties and damage to the reputation of the business.

What Are Web Application Vulnerabilities Anyway?

A vulnerability is a weakness that allows an attacker to compromise the availability, confidentiality or integrity of a computer system. Vulnerabilities may be the result of a programming error or a flaw in the application design that will affect security.

Attackers will typically target every spot in an application that accepts user input and systematically attempt to discover and exploit these weaknesses. It is important to note that there is a distinct difference between the discovery of a web application vulnerability and the subsequent exploit. Often, a combination of different, seemingly harmless vulnerabilities will be chained together by a creative attacker, resulting in a major security breach. Therefore, security testing experts usually focus on identifying and demonstrating the individual vulnerabilities rather than attempting a specific exploit. For example, a security analyst may identify that an application has both cross-site scripting (XSS) and cross-site request forgery (CSRF) vulnerabilities rather than demonstrating that these can be combined to steal a user’s confidential information.
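As a rough illustration of the kind of weakness being discussed here, the following sketch (plain Python, with a purely hypothetical page fragment and payload) shows how a reflected XSS flaw appears when user input is echoed into a page unescaped, and how output encoding closes that particular door:

```python
import html

def greeting_unsafe(name):
    # User input is placed directly into the page markup.
    # A "name" such as <script>...</script> becomes executable script in the victim's browser.
    return "<p>Hello, " + name + "!</p>"

def greeting_safe(name):
    # Escaping turns markup characters into harmless text before the page is rendered.
    return "<p>Hello, " + html.escape(name) + "!</p>"

if __name__ == "__main__":
    payload = "<script>alert('XSS')</script>"
    print(greeting_unsafe(payload))  # the script tag survives intact
    print(greeting_safe(payload))    # &lt;script&gt;... is displayed as text instead
```

A security tester probing every input with payloads like this is doing exactly the systematic discovery described above.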

The bad news is that the majority of web applications deployed today have these types of vulnerabilities. Statistics published by web security companies and organizations suggest that up to 96% of custom web applications have at least one serious vulnerability. My personal experience is that 100% of the web applications I have analyzed have multiple serious security vulnerabilities. The good news is that organizations like the Open Web Application Security Project (OWASP) are working to raise awareness and provide resources to make the web a safer place.

“OWASP is an open community dedicated to enabling organizations to conceive, develop, acquire, operate, and maintain applications that can be trusted”1

The OWASP Application Security Verification Standard (ASVS) provides a comprehensive basis for testing web applications. It provides developers with a tool they can use to measure the degree of trust that can be placed in their applications. The ASVS can also be used by the security test team to develop a test plan that addresses the security requirements of a web application. OWASP also periodically publishes a list of the top ten web vulnerabilities.

OWASP Top Ten web vulnerabilities
Source: https://www.owasp.org/index.php/Top_10_2013-Top_10

To one degree or another, everyone associated with a web application development project should have a solid understanding of the OWASP Top Ten.

Whose Job Is It To Find These Vulnerabilities?

Developers?
A lot of developers tend to follow “best practices” and rely on the “baked-in” security features of their framework of choice. The problem is that business requirements or other circumstances can sometimes push the developer outside of these best practices. When that happens, all bets are off. A developer who is not aware of the implications of the vulnerabilities in the OWASP Top Ten can, and probably will, introduce security flaws into the application. It also goes without saying that developers who are not aware of or trained in these issues are certainly not going to think to test for them.

The QA Team?
Traditional functional or performance testing is typically focused on verifying that the application under test meets the functional and non-functional requirements defined by the business. A quick glance at the OWASP Top Ten shows that these are not the types of requirements you will see on a typical test plan. QA usually focuses on making sure the application works as intended and all the test cases pass. Security testing turns this around: you are trying to make sure that specific attacks do not succeed, and you want those test cases to fail.

End Users?
That just leaves your users (and really – who leaves testing up to their users?). In the normal usage of an application, you will not be hearing from your users that you have security problems. The only people that will uncover these types of issues are those that have the training and motivation to find them. These people fall into two camps, white hats and black hats. The white hats will not try to break into your web application without explicit permission. You will not hear from the black hats until it is too late…

Where Do I Start?

Security is best approached as a combination of people, process and technology. Everybody involved in the development and testing of a web application needs to learn more, and a great place to start is the OWASP web site. Developers can use the resources on the web site to learn how to develop more secure code.

There is content on the OWASP web site for the QA team as well, including guidance on how to test for these vulnerabilities.

Last but not least, consider hiring a security expert to verify that your application is secure before it goes live. A security expert should have a strong development background, know how developers think, and be able to think like an attacker. Having an expert examine your application is your last chance to ensure the “doors are locked” before the bad guys start looking. By the way, I noticed today that the sign on the door is gone. The landlords hired a professional to fix the doors.

1 https://www.owasp.org/index.php/About_The_Open_Web_Application_Security_Project


I Say “Quality”! You Say…?

Quality is one of those words that embodies so much and can mean such different things to different people.

high quality guarantee
Image credit: Pixelors.com

For instance:

  • Why do you have the car that you have?
  • Would you change it if you could (money is no object)?  Why?
  • What if you had to buy a car for a friend?
  • What if you had to choose a car for 100 people and you would be evaluated on their happiness or satisfaction with the car after 1 week?  Or after 1 year?  5 years?  15 years?
  • What if you manufactured cars for the mass market?  What would you ensure that your cars had so that potential buyers would perceive your cars to be “of quality”?  What would your company have to be like?

If we were talking about movies instead of cars, would your answer to “what is quality” change?

What is Quality for Software Systems?

Strawberry is the answer!
Maybe.  Is it a gelato scenario?
What?  No, we are talking about frozen yogurt!
We have to go with blueberry then.

Discussions about what is quality in relation to software have been going on for a long time.  Some well-referenced definitions include:

  • “…conformance to requirements: meeting customer expectations, both stated and unstated.” – Philip Crosby, 1979
  • “…the degree to which a set of inherent characteristics fulfill requirements.” – PMI Project Management Body of Knowledge, 2008
  • “Fitness-for-use.” – Joseph Juran, 1974

However, quality has many aspects or dimensions.  For example, when we start to think about software and quality, we might think about concepts like capability/functionality, performance, usability, compatibility, maintainability, testability, etc.

These quality attributes/factors, or ‘ilities, are used to describe the different aspects of quality for a given software system.

From a customer satisfaction point of view:

“Satisfaction with the overall quality of the product and its specific dimensions is usually obtained through various methods of customer surveys.  For example, the specific parameters of customer satisfaction in software monitored by IBM include the CUPRIMDSO categories (capability, functionality, usability, performance, reliability, installability, maintainability, documentation / information, service, and overall); for Hewlett-Packard they are FURPS (functionality, usability, reliability, performance, and service).” – Stephen H. Kan, “Metrics and Models in Software Quality Engineering”, 2nd Edition.

Of course, for these quality attributes/factors to be useful, we need to be able to assess or measure them, so that we can track them over time and compare.

Evaluating Software Quality

We need to improve quality.
Why do you say that?
Isn’t it obvious?

In order to rationally take steps to improve quality, we need to first understand which quality attributes are important to us.  And then, we need to be able to associate one or more ways to measure each quality attribute so as to be able to quantify it in an agreeable manner.

A first step is to take each quality attribute or quality factor that is of interest to us and describe it in terms of reasonable sub-factors.  The following diagram illustrates this decomposition for the quality model defined in ISO/IEC 25010:2011:

Product Quality Model ISO/IEC 25010
Image credit: “System Quality Requirement and Evaluation”, Kazuhiro Esaki

Next, we can define appropriate metrics (with their associated measures, indicators and thresholds) to evaluate/assess the degree to which our software system possesses each quality sub-factor (or quality factor in the case of direct metrics).

Quality Factors To Metrics Tree

This will provide us a model from which we can make more objective comparisons and judgements about the degree to which our system possesses a given quality attribute and, when the attributes are considered as a group, to what degree our system is “of quality”. (paradoxical caveat: at least insofar as we have defined quality through our selection of metrics)
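As a minimal sketch of how such a model can be made operational (the sub-factors, metrics, values and thresholds below are invented for illustration and are not taken from ISO/IEC 25010), each sub-factor is paired with a measure and a threshold and then evaluated:

```python
# Hypothetical quality model: each sub-factor gets a metric, a measured value,
# and a threshold that defines "good enough" for this particular project.
quality_model = {
    "Performance efficiency / time behaviour": {
        "metric": "95th percentile response time (s)",
        "value": 1.8, "threshold": 2.0, "lower_is_better": True},
    "Reliability / maturity": {
        "metric": "mean time between failures (h)",
        "value": 450, "threshold": 300, "lower_is_better": False},
    "Usability / operability": {
        "metric": "task completion rate (%)",
        "value": 88, "threshold": 90, "lower_is_better": False},
}

def evaluate(model):
    """Report which sub-factors meet their agreed thresholds."""
    for factor, m in model.items():
        ok = m["value"] <= m["threshold"] if m["lower_is_better"] else m["value"] >= m["threshold"]
        print(f'{factor}: {m["value"]} vs {m["threshold"]} -> {"meets" if ok else "misses"} target')

evaluate(quality_model)
```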

Who Cares About Quality?

Hey I found a bug!
Oh?  Show me.
See?
Yeah…don’t worry about that.  It’s not important.
To who?

Before we set off to write up “What is Quality” and define the quality model for our organization or project, we should really involve some other people in our team (in the broadest sense).  Recall, quality can mean different things to different people, and that is largely because they hold different aspects of quality to be more or less important than others, given their context.

Consider: it would be a limited view indeed to treat the testers on the team as the only stakeholders in defining quality.  The same goes for considering only the customer or end-user.

“Quality is everyone’s responsibility.” W. E. Deming

Stakeholder input should be gathered from:

  • Customer representatives
  • Technical “Peers” (architecture, business analysts, developers, testers, operations, support)
  • Management (project management, departmental management)
  • Business Organization (strategic planning, corporate management)

For each stakeholder identified from the groups above, it is important to understand their contextual interest in quality:

  • What are the needs of their role, the pressures or demands they face, and from whom?
  • What are their motivations or goals for being involved in the project?
  • How will they participate in the project?
  • How can understanding what is agreed to be quality help them?

It Just Got Real

Fast! Easy to use! Reliable!
How do I turn that into code?

We can then take our understanding of our stakeholders’ quality needs and purposefully build them into our software.  The following diagram illustrates translating the stakeholders’ needs for different aspects of quality into actual (eg: testable) functional and non-functional requirements for the software system.

Quality Requirement Definition Analysis ISO/IEC 25030
Image adapted from: “System Quality Requirement and Evaluation”, Kazuhiro Esaki
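As a minimal, purely illustrative sketch (the numbers, wording and verification methods are assumptions, not drawn from the standard), here is one way those vague stakeholder wishes could be pinned down into testable statements:

```python
from dataclasses import dataclass

@dataclass
class QualityRequirement:
    stakeholder_need: str   # the vague wish
    requirement: str        # the testable statement derived from it
    verification: str       # how we would check it

requirements = [
    QualityRequirement(
        "Fast!",
        "Search results return within 2 seconds for 95% of requests at 500 concurrent users",
        "Load test against the staging environment"),
    QualityRequirement(
        "Easy to use!",
        "A first-time user completes checkout in under 3 minutes without assistance",
        "Moderated usability sessions with 5 representative users"),
    QualityRequirement(
        "Reliable!",
        "99.5% availability per calendar month, measured at the load balancer",
        "Production monitoring and a monthly availability report"),
]

for r in requirements:
    print(f"{r.stakeholder_need:12} -> {r.requirement}")
```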

Turning It Around

We can’t be perfect.
We can improve.
But it costs too much.
What will it cost if we don’t?

Just as poor quality in a released product hurts market share, the lack of quality within each phase of the development lifecycle costs the company hard, quantifiable dollars.

Quality In Product Lifecycle ISO/IEC 9126
Image credit: “ISO/IEC SQuaRE. The second generation of standards for software product quality”, Witold Suryn, Alain Abran

Identifying what can lead to poor quality downstream lets us see what we need to change upstream in our processes to mitigate this risk, lower the Total Cost of Quality, and realize those savings and revenues.

Conclusion

“What is quality?” is a key part of the conversation regarding “what is success” on a given project, for a particular system, in a specific organization, for those users.

Having this question answered lets us plan and prioritize on our project at the beginning, throughout, and on-going for the life of the software system.

Make sure you have it.

 


Did Sesame Street Impact the Adoption of Agile Methodology?

History of Sesame Street

The first episode of Sesame Street aired in November 1969. It was aimed at preschoolers and was based on a large number of research studies. The show was conceived as an educational program, a response to the common belief at the time that television programs harmed the development of children.1

Television at the time generally consisted of half-hour or hour-long programs in a ‘serial’ format: watch the show this week, and tune in next week for a continuation of the storyline. This required the viewer to remember the previous week’s episode, which was difficult for children to do. The alternative was a program built from ‘skits’ or short stories, but these were generally dull and bland, even those whose purpose was comedy and not specifically aimed at children.

Then, along came Sesame Street. It used commercial-like 12–90 second ‘shorts’ that employed repetition to reinforce the targeted concepts throughout the episode of the program. Sesame Street used catchy music with contemporary beats, humour, short bursts of action and strong images with bright colours. They used a mix of real people and cute, colourful puppets.

Each episode of Sesame Street was built like a magazine to appeal to the perceived short attention span of children. This format allowed a mix of styles, story speeds and multiple characters, and it sustained children’s interest and attention throughout the episode. Each episode was discrete, with no requirement to remember the previous program. Learn about the letter ‘R’, count with The Count to 6, see Big Bird talking with Buffy, watch the Cookie Monster make a mess gobbling a bunch of cookies and see Bert and Ernie iron out a dispute; all delivered with fun music and awesome colour. This format allowed the program to be very fluid.

A child watching Sesame Street saw quickly executed, dynamic, bright, episodic ‘shorts’ that repeated a targeted concept in a fun, engaging way. Cute, brightly coloured puppet personalities interacted with real humans in a simplistic, honest way. All those interactions had a goal that was attained very quickly and, more often than not, successfully.

Agile Methodology

Agile software development is a development approach based on teamwork, one that can quickly adapt to change, works through Iterations and produces a usable product in a short, time-boxed period.

Agile development methods2 evolved in the mid-1990s as an answer to the overly cumbersome and time-consuming Waterfall method, and the Agile Manifesto3 was introduced in 2001. One could argue that Agile did not so much ‘evolve’ in the 1990s as get ‘adopted’ once the climate was right for it. Since then, the Agile Movement has changed the landscape of software engineering and commercial software product development.

The Agile Manifesto is based on twelve principles:

  1. Customer satisfaction by rapid delivery of useful software
  2. Welcome changing requirements, even late in development
  3. Working software is delivered frequently (weeks rather than months)
  4. Working software is the principal measure of progress
  5. Sustainable development, able to maintain a constant pace
  6. Close, daily cooperation between business people and developers
  7. Face-to-face conversation is the best form of communication (co-location)
  8. Projects are built around motivated individuals, who should be trusted
  9. Continuous attention to technical excellence and good design
  10. Simplicity—the art of maximizing the amount of work not done—is essential
  11. Self-organizing teams
  12. Regular adaptation to changing circumstances

Agile methods use pair programming, co-location and communication as a means to build teamwork, collaboration and adaptability throughout the project.

Agile methods break tasks into small pieces with minimal planning; communication trumps documentation and the huge planning effort at the start of a project is removed. Iterations are short, typically lasting from one to four weeks.

Each Iteration involves a multi-functional team that may include planning, requirements analysis, design, coding, unit and acceptance testing. Each Iteration produces a working product. The project team may need to make rapid adaptations to changes in requirements, design, etc.

In Agile, the status and progress of the product are usually made readily visible to the whole team. Daily stand-ups are a common attribute of Agile: team members report what they did, what they plan to do and any roadblocks.

Sesame Street and Agile Relationship

Sesame Street was aimed at preschoolers and first aired in late 1969. Thus, those watching Sesame Street would have birth dates later than 1964, assuming that ‘preschooler’ in 1969 meant those with an attained age younger than 5 years. Those same individuals would be 30 or younger in the year 1995, about the time that Agile was being re-introduced.

Given the number of Information Technology (IT) practitioners, one can make the inference that a number of those preschoolers, now in the post-secondary world, took Computer Science of some variety and joined the IT cadre sometime after 1984. They were then exposed to the Waterfall methodology that was in wide use at the time. Let’s take a look at the principles of Agile and how Sesame Street programming conditioned its young viewers to readily embrace and develop Agile Methods.

The ‘rapid delivery’ of Agile perfectly mimics those quick, rapidly executed ‘shorts’ from Sesame Street: get it done with a positive outcome and then move on to the next story or sprint.
The ‘shorts’ from Sesame Street were plentiful during the program; many little stories, a lot to watch and be engaged with. The frequent delivery of workable code, the result of an Iteration, mirrors those multiple shorts in the one-hour Sesame Street program. During Sesame Street, there was a mix of slow stories and fast stories; likewise, some Iterations are two weeks and some are six.

Nothing in Sesame Street was set in a logical sequence. The Count followed by Oscar followed by Buffy followed by Big Bird. The next episode could follow a completely different sequence.
Agile welcomes mutable requirements, requiring adaptation to changing circumstances. Watching and adapting to the ever-changing format and stories of Sesame Street conditioned those watching to embrace the ever-changing Agile landscape.

Sesame Street built trust in the characters. It showed and taught co-operation and communication. Bert and Ernie regularly had misunderstandings and conflicts, and these were always resolved with good humour. It is arguable that these concepts increased, in those young minds, the ability to communicate with others and work in a team, and inspired the self-confidence that is needed to make Agile successful.

Sesame Street did not generate the ‘spark’ of the Agile Methodology. Indeed, although not yet called Agile, the methodology was first talked about in 1957 [4][6] and was the subject of multiple lectures in the mid-to-late 1970s [5][6]; but it did not take flight until those preschoolers brought up on Sesame Street embraced it. Sesame Street did, however, set in motion the environment that allowed Agile to flourish.

Sesame Street was indeed an educational program for preschoolers, but it also paved the way for adoption of Agile Principles by instilling the ability to adapt to multiple ‘stories’ in a fast paced time frame, the ability to communicate, the knowledge of conflict resolution and the ability to feel comfortable with delivering an ‘end’ (product) in a short time frame and quickly move on to the next task.

Sesame Street and the future impact on Information Technology was neither a goal nor even a thought when the program was developed. What impacts might current childhood entertainment have on the future?

 

1 http://www.canada.com/news/sesame-street/index.html

2 Early implementations of agile methods include Rational Unified Process (1994), Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development (1997), and Dynamic Systems Development Method (DSDM) (1995). These are now collectively referred to as agile methodologies, after the Agile Manifesto was published in 2001. [5]

3 http://www.agilealliance.org/the-alliance/the-agile-manifesto/

4 “What Is Agile: Values and Principles”, DCG-SMS, David Consulting Group, dcg-sms.davidconsultinggroup.com

5 http://www.antiessays.com/free-essays/449120.html

6 http://en.wikipedia.org/wiki/Agile_software_development


To Test or Not to Test?

What is COTS?

Commercial Off-The-Shelf (COTS) software is pre-built software usually provided by a 3rd-party vendor. COTS applications typically require configurations and/or customizations that are tailored to the specific requirements of the customer for their software solution. The implementation of COTS software has become increasingly common as part of the IT strategy within many organizations.

Below are two assumptions most organizations make when they choose to implement a COTS-based solution:

  • Since COTS software is already commercially released and (we assume) vendor-tested, there is no need for the organization to test the COTS application
  • When testing is considered as part of a COTS implementation, the effort should be relatively lightweight and straight-forward with few issues expected along the way

While it’s true that the implementation of COTS software does not follow the traditional software development lifecycle, the problem with the assumptions above is that the organization is not looking at the larger picture and recognizing the impact that configuration, customization and integration of the COTS software will have within the organization’s IT environment.

If an organization were to perform a proper risk analysis of the full impact of a large-scale COTS implementation, they may realize that a testing approach is required that entails effort that is on par with (or exceeds) that of a custom development project.

Types of Testing

Functional Testing

The COTS application is assumed to be stable and to have been unit and functionally tested by the vendor, so minimal functional testing of the core product should be required. Don’t focus on retesting the features of the COTS application itself; functional testing activities should focus on the customized and enhanced areas of the COTS application, in accordance with the organization’s testing methodology.

System Integration Testing

Since the COTS application is most likely communicating with other systems, testing the integration points is clearly required. Again, the goal of integration testing is not to verify the functionality of the COTS application, but to ensure that the information sent to and received from other applications is correct. These integration points should be identified as high-risk areas for potential defects.
As well, if the COTS system is replacing a legacy system within the organization, data migration from the existing application to the COTS application must be tested to ensure that the existing data has been correctly migrated into the COTS application. GUI- and API-based service functions should also be thoroughly tested where applicable.
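As a minimal sketch of one such data migration check (the table and column names are hypothetical, and a real project would add field-level and referential checks on top), row counts and per-record checksums can be compared between the legacy source and the COTS target:

```python
import hashlib
import sqlite3  # stand-in for whatever connections the legacy and COTS databases actually use

def row_fingerprints(conn, query):
    """Return a {business_key: checksum} map for the rows produced by a query."""
    fingerprints = {}
    for row in conn.execute(query):
        key, *values = row
        digest = hashlib.sha256("|".join(str(v) for v in values).encode()).hexdigest()
        fingerprints[key] = digest
    return fingerprints

def compare_migration(legacy_conn, cots_conn):
    # Hypothetical queries: the first column is the business key, the rest are the migrated fields.
    legacy = row_fingerprints(legacy_conn, "SELECT customer_id, name, email FROM customers")
    cots = row_fingerprints(cots_conn, "SELECT customer_id, name, email FROM crm_customers")

    missing = set(legacy) - set(cots)                                            # records lost in migration
    changed = {k for k in legacy.keys() & cots.keys() if legacy[k] != cots[k]}   # records altered in transit
    print(f"legacy rows: {len(legacy)}, migrated rows: {len(cots)}")
    print(f"missing in COTS: {sorted(missing)}")
    print(f"content mismatches: {sorted(changed)}")
```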

Security (Role-based) Testing

Security access (roles / privileges) testing should be performed on the COTS application to ensure that vulnerability and accessibility concerns are addressed by performing access control and multi-privilege tests with user accounts. The most important feature of this testing is to verify the individual roles and their permissions to each function, module and unit of the COTS application. This testing is generally conducted using a test matrix with positive and negative testing being performed. Role-based security testing is often a good candidate for test automation.
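A minimal sketch of automating such a role/permission matrix with pytest is shown below; the roles, actions and the check_access stub are hypothetical placeholders for whatever access-control interface the COTS product actually exposes:

```python
import pytest

# Hypothetical role/permission matrix: (role, action) -> expected access.
# The False rows are the negative tests - access that must be denied.
PERMISSIONS = {
    ("admin", "delete_record"): True,
    ("admin", "view_report"): True,
    ("clerk", "view_report"): True,
    ("clerk", "delete_record"): False,
    ("guest", "view_report"): False,
}

def check_access(role, action):
    """Placeholder for a call into the application's authorization layer."""
    granted = {("admin", "delete_record"), ("admin", "view_report"), ("clerk", "view_report")}
    return (role, action) in granted

@pytest.mark.parametrize("role,action,expected",
                         [(r, a, e) for (r, a), e in PERMISSIONS.items()])
def test_role_permission(role, action, expected):
    assert check_access(role, action) == expected
```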

Performance Testing

All high-impact workflows that are critical to the business should be performance tested with realistic usage patterns. These patterns should be simulated with large volumes and loads based on a realistic user distribution (a minimal load-model sketch follows the list below). Aside from addressing the stated risks identified during the risk assessment phase, performance testing also aims to achieve the following benefits:

  • Gauge the readiness of the COTS application
    This helps to not only ensure that the COTS system will meet its stated service level agreements (SLA’s), but will also help to set appropriate user expectations around system performance. An initial pre-production performance testing exercise will also establish a baseline against which to compare future performance tuning measures.
  • Assess the organizational supporting infrastructure
    The COTS application will be dependent upon organizational infrastructure to support its performance targets. While organizational infrastructure is also responsible for supporting other applications, it should be assessed in terms of its direct support of the COTS application performance targets.
  • Identify performance tuning techniques prior to release
    Pre-production performance analysis will allow the COTS performance testing team to understand, plan for and experiment with tuning techniques that can be used in the production environment to address system performance concerns.
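As one way to encode that user distribution directly in a test, the sketch below uses Locust, a Python load-testing library; the endpoints, think times and the 3:1 browse-to-order ratio are purely illustrative stand-ins for whatever the risk assessment says real users actually do:

```python
from locust import HttpUser, task, between

class TypicalBusinessUser(HttpUser):
    """Simulates one class of user against the COTS front end (endpoints are hypothetical)."""
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task(3)  # weight 3: browsing is modeled as three times more common than ordering
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(1)
    def submit_order(self):
        self.client.post("/orders", json={"item": "SKU-123", "qty": 1})

# Run with something like:
#   locust -f loadtest.py --host https://cots.example.test --users 500 --spawn-rate 25
```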

User Acceptance Testing

User Acceptance Testing (UAT) is required as a final confirmation of the readiness of the COTS application and business preparation prior to implementation. During this phase of testing, it is assumed that no major issues with COTS system functionality will be identified and that the only anomalies identified will deal with usability, data content or training issues. When the business users have completed UAT, a formal signoff process is recommended to officially signal approval by the business to implement the system.

Other Types of Testing to Consider

Depending upon the type of COTS application being implemented and its purpose, consideration for other types of non-functional testing (in addition to performance testing) may be required. Listed below are additional testing activities that may be considered for COTS projects. The testing types chosen should correspond to the specific non-functional requirements and SLA’s for each system; therefore, this article will not go into the details of each.

  • Portability Testing
  • Compliance Testing
  • Failover and Recovery Testing
  • Scalability Testing
  • Security (Vulnerability) Testing
  • Maintainability Testing

Summary

Given that the information above can apply to any number of COTS implementations, you can probably guess that the answer to this article’s title question is “We should definitely test!”

With potential short term savings in mind, it may be tempting to dismiss the need for testing COTS applications – but several factors need to be considered. Take the time to analyze the COTS project in order to balance the cost of testing against the potential risk and the cost of failure.

As organizations rely more on vendor-developed products to meet their needs, a test strategy for COTS applications should be ingrained within the organization’s IT methodology. Implementing a COTS application that has been vendor-tested and commercially released does not relieve the customer of the responsibility to test in order to be assured the application will meet business and user requirements.


Risk Mitigation – Scarcity Leads to Risk-Driven Choices

I presented “Risk Mitigation – Scarcity Leads to Risk-Driven Choices” to yvrTesting.com and the Annex Consulting Group, and I wanted to share that material with you.

Risk mitigation and scarcity
Image credit: http://sbhshgovapmacro.wordpress.com/what-is-economics/

Testing is a recognized mitigation solution for certain risks on a software project.  Additionally, it is understood that not doing testing will certainly raise the likelihood and impact of risks to the project, and therefore to the business.

Then obviously, if we want to have a nice smooth-sailing project that will contribute to the success of the business, we must be sure to test…a lot…right?

In the face of limited resources/budget and a tight schedule, the mitigation strategies for all the project’s risks compete with each other and with the work of actually getting the functionality built.

When there isn’t enough of something to let us do whatever we want when we want, we are dealing with ‘scarcity’.  And in the face of scarcity, choices, often tough ones, need to be made.

In the presentation, we explored the implications scarcity has on risk mitigation and discussed what we can do to make those tough, risk-driven decisions more straightforward.

You can download the slides here: Risk Mitigation – Scarcity Requires Risk-Driven Choices

 


Making Numbers Count – Metrics that Matter

As testers and test managers, we are frequently asked to report on the progress or results of testing to our stakeholders.  Questions like “How is testing going?” may seem simple enough at a glance, but there are actually a number of ways that one could respond when asked.  For example:

“We’re on-track.”

“95% of test cases so far have passed.”

“We found 15 new defects yesterday.”

While each of these responses does provide factual details about the status of testing, it is highly likely that none of them give all of the information that is being sought.

Good metrics are about more than just data.  Used properly, they can be powerful communication tools that draw back the veil on testing and provide transparency to the process.  Used improperly, they have the ability to send the wrong message to stakeholders and trigger false alarms or, even worse, to hide problem areas and give a false sense of confidence when things are not going well.

While many organizations do not have comprehensive metrics programs, all organizations have a need to provide information to their stakeholders.  These stakeholders need information about the progress and status of testing in order to make important decisions, and metrics are a key tool in delivering that information.

For test managers, metrics also play an important role throughout the test process.  Starting in the early stages of a project, metrics give us a basis for providing estimates, as well as a way to define objective suspension and exit criteria.   Once testing has begun, metrics serve as an ongoing risk management tool, allowing us to quickly identify delays or problem areas as we measure progress and evaluate against any pre-defined targets.

As we approach the end of a test cycle, metrics will tell us whether or not we’ve achieved our targets and help us decide whether or not we should continue testing.  Even once the project is complete, metrics continue to provide benefit.  By analyzing what was done throughout the course of the project, we are able to evaluate the process itself and implement improvements for future projects.  This can be as simple as comparing estimates to actuals, or can involve more complex processes, such as root cause analysis for defects.

While they have the potential to provide many benefits, metrics are less of a science and more of an art.  With that in mind, here are some key points to consider when incorporating metrics into your testing process.

Metrics, like anything, should be planned in advance.
You can’t report on data that you haven’t captured, and before metrics can be captured, they must be defined.  The first step in this process is to understand your reporting needs.  Doing this analysis up-front will allow you to identify what data elements are needed and how they must be broken down before setting up any tools to capture them.

At the same time, it’s also important to be clear about what each metric represents.  For example, what is an “open” defect?  What is considered “resolved”?  Definitions of these terms need to be applied consistently from one report to the next and from one project to the next.
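As a minimal sketch of what that looks like in practice (the statuses and sample records below are invented for illustration), the definitions of “open” and “resolved” are written down once and every report reuses them:

```python
from collections import Counter

# Explicit, agreed definitions, applied the same way on every report and every project.
OPEN_STATUSES = {"new", "assigned", "reopened"}
RESOLVED_STATUSES = {"fixed", "verified", "closed", "works as designed"}

def defect_summary(defects):
    """defects: list of dicts with at least 'status' and 'severity' (illustrative schema)."""
    by_status = Counter(d["status"] for d in defects)
    open_count = sum(n for s, n in by_status.items() if s in OPEN_STATUSES)
    resolved_count = sum(n for s, n in by_status.items() if s in RESOLVED_STATUSES)
    open_critical = sum(1 for d in defects
                        if d["status"] in OPEN_STATUSES and d["severity"] == "critical")
    return {"open": open_count, "resolved": resolved_count, "open_critical": open_critical}

sample = [
    {"status": "new", "severity": "critical"},
    {"status": "fixed", "severity": "major"},
    {"status": "reopened", "severity": "minor"},
]
print(defect_summary(sample))  # {'open': 2, 'resolved': 1, 'open_critical': 1}
```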

Metrics need context.
There’s a saying that “There are three kinds of lies: lies, damned lies, and statistics.”  Unfortunately, testing metrics have managed to earn a similar reputation.  This is due in part to their openness to interpretation.  When used in isolation, metrics can easily be manipulated to make just about any situation look either good or bad to suit a person’s needs.

Of course, metrics aren’t always used for the purpose of deceit.  Still, even when there is no intent to mislead, stakeholders can still draw the wrong conclusions if no context is provided and they are instead left to interpret the data on their own.

Whenever metrics are presented to stakeholders, it’s important to ensure that their significance is easily understood.  Since this significance isn’t always obvious, it can be helpful to provide textual summaries to accompany metrics.  This narrative provides an opportunity to comment on progress, explain anomalies and identify any areas of concern.

A simple chart is a clear chart.
How you present information is sometimes as important as the data that you are presenting.  If the presentation is unclear, any potential meaning or message behind the data can be lost.  As mentioned above, a textual summary can be helpful in providing the necessary context when reporting on metrics, but so too can the right chart or graph.

The best chart or graph is one that immediately draws the target audience’s attention to the important points or trends and is not cluttered with unnecessary data that might distract from those.  Just like Goldilocks with her porridge, the goal is to get the level of detail “just right” – not too much and not too little.

To ensure charts and graphs are as clear as possible, it’s always best to include proper titles and labels, as well as any trend lines or annotations that are needed.  Where applicable, red /yellow / green indicators can also be very useful for helping stakeholders interpret the data.

As an additional step, you may also consider using scorecard or dashboard views to present sets of related data elements, rather than relying solely on individual charts or graphs.
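A minimal sketch of such a red / yellow / green roll-up is below; the two inputs and the thresholds are purely illustrative and would need to be agreed with stakeholders in advance:

```python
def rag_status(pass_rate, open_critical_defects):
    """Map two headline measures to a red/yellow/green indicator (illustrative thresholds)."""
    if open_critical_defects > 0 or pass_rate < 0.80:
        return "RED"
    if pass_rate < 0.95:
        return "YELLOW"
    return "GREEN"

# Example dashboard row: 92% of executed tests passing, no open critical defects.
print(rag_status(pass_rate=0.92, open_critical_defects=0))  # YELLOW
```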

Be on the lookout for trends.
Just as it is impossible to measure the speed of a vehicle with a single point of data, so too is it impossible to measure the progress of a test cycle from a single snapshot.  Only when a series of data points is examined do trends in that data start to emerge.

Trends are helpful because they allow us to differentiate between systemic behaviour and temporary anomalies.  They also allow us to make predictions about the future.  Of course, the validity of these trends and the accuracy of any resulting predictions increase as more historical data is considered.  Too often, organizations consider only a limited set of historical data when looking for trends.  While this may suffice for measuring performance or making predictions within a given project, it does not allow for continuous process improvement at an organizational level.
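As a small illustration of pulling a trend out of a series of data points (the daily counts below are invented), a simple least-squares line over cumulative test executions yields both the systemic rate and a rough completion forecast:

```python
# Cumulative test cases executed at the end of each day (illustrative data).
days = [1, 2, 3, 4, 5]
executed = [40, 85, 120, 170, 210]
total_planned = 400

n = len(days)
mean_x = sum(days) / n
mean_y = sum(executed) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, executed)) / \
        sum((x - mean_x) ** 2 for x in days)     # roughly: tests completed per day
intercept = mean_y - slope * mean_x              # not used below, shown for completeness

remaining = total_planned - executed[-1]
forecast_days = remaining / slope                # naive projection; assumes the trend holds
print(f"trend: ~{slope:.1f} tests/day; ~{forecast_days:.1f} more days to finish at this pace")
```

The same caveat applies here as in the paragraph above: the more historical data behind the line, the more the projection is worth.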

Metrics influence tester performance, but not always in the way you might think.
How do we assess the abilities of a tester or compare the skills of one tester to another?  Since the most visible tasks a tester typically performs are executing test cases and logging bugs, it’s not hard to see why some people choose to evaluate testers based on the number of test cases they’ve executed or the number of bugs they’ve logged.  This is actually a very narrow view of testing and, while it may be seen as a good way to motivate testers, it can have unintended side-effects.

People typically work to optimize what we measure them against, but often this comes at the expense of the things we aren’t measuring.  For example, if we measure testers based on the number of bugs they log, how likely is it that they will spend their time thoroughly documenting test cases and defects or coaching other testers?  On the other hand, how likely is it that they will focus on finding simple, cosmetic defects or logging variants of the same issue to artificially inflate their defect count?

When it comes to metrics, more is better.
Stakeholders don’t all have the same needs, nor do they always know what questions to ask in order to get the information they are looking for.  As a result, many conversations about testing status tend to focus on only a few of the more basic metrics, such as completion percentage, pass rate or the number of open defects.  While there is nothing inherently wrong with any of these metrics, it is important to recognize that there is no single metric that fully represents the status of testing or the quality of the product that is being tested.  Metrics are situation and context-specific.  There is no “right” answer and there is no silver bullet that will solve all your problems.  The key lies in choosing the right set of metrics for each particular situation, then presenting them in a meaningful way.

No matter who the audience is or how they are presented, metrics will only ever tell part of the story.  In reality, metrics are most beneficial when they are used as a starting point for discussion and further investigation.  They give us clues about what’s going well and what isn’t and show us where to focus our attention.

So, while we must take care to not put too much stake in our metrics, we should also be sure not to ignore them entirely.  As with anything, the best approach lies somewhere in the middle.  By finding the proper balance and approach for your organization, you can help ensure you are only using metrics that matter.


Quality Without Governance = Disaster

I found another example of how an organization’s governance undermined its quality discoveries, to the detriment of its customers and society at large.

GM Cobalt Delayed Recall

The link describes a scenario where General Motors recalled its 2005-2007 Chevy Cobalt due to an ignition problem.  The summary is that:

  • If the ignition is contacted in a certain way, the engine will shut down.
  • If the engine shuts down, the airbags will not deploy.
  • If the airbags do not deploy, the safety of the driver and passengers will be compromised.

There were multiple deaths arising from this automotive hazard.

According to the article, the technical staff successfully recognized and reported the ignition issue.

A GM engineer experienced the problem while test-driving one of the vehicles in 2004 according to deposition transcripts provided to CNNMoney by Cooper. GM’s engineers concluded there was a problem with the ignition switch in 2005, the depositions showed.

“Testimony of GM engineers and documents produced in Melton v. General Motors et. al., show that the automaker actually knew about the defective ignition switch in these vehicles in 2004 before it began selling” the 2005 Chevrolet Cobalt.

CBS News has learned GM’s recall is coming 10 years after the defect was first discovered and seven years after people began to die.

My point in raising this subject is not to disparage General Motors, but to draw attention to the fact that our Quality profession is fundamentally impotent and powerless unless the organization’s governance is willing to make the necessary commitments and decisions to follow through when quality problems are discovered.

In our profession we devote considerable time to the tactical methods and techniques used to discover quality outcomes.  However, insufficient attention is dedicated to synchronizing the Quality function with Executive or Senior Management.  For every high-profile example, there are likely ten or twenty more at different levels.  This is a very substantial challenge.


Testing User Experience – Should You Care?

One day after work, I decided to visit a newly renovated location of my gym to try it out.  My objectives were to get a workout in, see where this new gym was, and find out whether I would like working out there, as it is closer to my office.  Standing at a street intersection, I could see treadmills through a window on the 2nd floor of a building.  I then looked for a sign to determine where the entrance was; however, they only had a small board sign in front of the building, and the distinguishable gym logo was covered by advertisements and promotions.  When I entered the gym, I was welcomed with a warm air of fitness endorphins and shiny exercise equipment.  Being a first-timer at this gym, I asked a staff member where the changing room was.  The staff member gave me instructions on how to find it; the changing room was not only small, but its layout was broken up.  Once on the gym floor, I scouted for the equipment and weights that I would normally use.  While the gym seemed to have state-of-the-art equipment, the space felt overly cramped.  Again, the layout didn’t flow well and appeared poorly organized.  Finding room to work out was quite a challenge and, when I finally did, I had to drag the weight plates from the other side of the room.  Since the layout of the gym I normally go to is more user-friendly, with the weight benches in the same area as the bars and weight plates, I would have gotten more exercise there in the same amount of time.

It reminded me that some software products may have an impressive technology stack and the functionality may all be there, yet it is hard to get to.  Take, for example, a typical university website: according to Nielsen Norman Group (http://www.nngroup.com/), the top information students and visitors look for is academic programs and course listings.  However, on a lot of university websites, the user has to click through several menus before they can get to a list of available courses.  Adding a course finder on the homepage would make it easier for users to find the information they need immediately.  So why do some sites lack this simple feature?  The functionality is there and the requirements might be met, but I suspect that no one really tested the user experience (UX)…

My gym experience is only one example of how UX is affected by the design of everyday things.  So you ask, what is UX?  It is a term for end-users’ overall satisfaction when interacting with a product or a system.  UX has become one of the most defining factors for successful products, and it includes everything users see, hear or do, along with their emotional reactions.  For most testers, usability testing comes to mind when UX is mentioned; however, usability testing is only one aspect of the full spectrum of user experience.  Why should testers care about UX?  And why is it important to have UX knowledge?

As testers, we play a role as users’ advocates, and that is why we should care about UX.  I take pride and personally feel fulfilled when users enjoy an application I helped test.  In a past project, while reviewing mock-ups to develop my test cases, I recognized an opportunity to improve the UX of the application under test (AUT).  Typically, there is a lack of defined requirements for validating a UX design from a testing perspective.  I think most UX designs are validated through usability testing, usually led by the UX designers.  How did I test UX then?  Exploratory testing led me to discover that the application was not easy to use and that some elements of the layout made me question what I was supposed to do on a page.  One example: the name of a page and the selection buttons asking the user for input were confusing and didn’t match each other.  So I initiated work with the product manager and business analyst to understand the business drivers of certain functionalities, along with the interaction designer to offer suggestions for improving the layout of some pages.  The exercise required some role-playing on my end, taking into consideration the different populations of users and what would make their experience more enjoyable.

It is also necessary to learn whether the suggested improvement is feasible from a technical standpoint.  I would say that I quite enjoyed this part – working with developers.  We had brainstorming sessions where we worked together to better understand the UX-related issues and gather solution suggestions that we presented to the product manager.  The developers were really talented and I learned so much from them.  My understanding of the technical constraints improved greatly through these discussions.  In return, the developers found that talking to a tester helped them improve their code design and how their code should handle exception cases.  We were working proactively as a team, reducing the possibility of development rework and increasing product quality by injecting some UX testing in the early phases instead of waiting until the end.  It was not a smooth process – there was some back and forth in the development – but, in a way, it really helped the client refine the product.  The product manager was also very appreciative of the extra testing effort to improve the product’s UX.

Of course, the benefits of teamwork described above are not unique to UX testing; the dynamics between testers and developers are similar when performing functional testing.  However, UX designers and product owners may need to learn a new appreciation as to why and how testers test UX.  I believe that the skills necessary to carry out timely UX testing can be developed.  Empathy with product users, your creativity, the professional relationship with your developers, your communication skills, your knowledge of UX and, most importantly, the delicate balancing act of being a user advocate and a liaison between teams can all be learned.  It takes courage to step out of your comfort zone and do UX testing which may not typically be your primary job, but it is all worth it.

Will I ever go back to that gym again?  Maybe I will.  It may help to talk with the manager and tell him my experience from a tester’s perspective.


Own Your Approach – Drive Process Improvement

You and your team have to live with the impact of poor internal quality every day, on every project.  But fixing these issues or improving the situation is not often a top priority.

“No one keeps track of the costs of (internal) poor quality…Drop revenue or market share by a few percent and you’ll have the attention of the board of directors.” Hope Happens, Linda Hayes.

However, do you need to wait for someone else to fix it for you?  Or is there something that you and your team can do to help yourselves?

The beginning of a new year is a time for some to set goals for self-improvement.  Why not borrow from this idea for your own project team?  Why not tackle some process improvement this year?

This Year board
Image credit: Robinsan via Photopin CC

 A Structure for Action

The word ‘process’ can conjure images of red tape, scrutinizing audits, stacks of paperwork, meeting after meeting, multiple approval levels, etc.  But is this what process is supposed to feel like?

Consider the following definitions of ‘process’:

  • “A series of actions that produce something or that lead to a particular result”, Merriam-Webster Online Dictionary
  • “A specific ordering of work activities across time and space, with a beginning and an end, and clearly defined inputs and outputs: a structure for action”, Process Innovation: Reengineering Work Through Information Technology, Thomas Davenport via Wikipedia

In both of the above cases, a ‘process’ is expected to produce ‘something’ through ‘action’.

It is most likely that the ‘something’ could/should be evaluated in terms of quality attributes and the ‘action’ will need to be effective and/or efficient in producing the ‘something’ to a standard of quality in order to be successful.

Total Cost of Quality curves
Therefore, the total cost of process in an organization can be modeled in much the same way as the Total Cost of Quality (of which it is a reflection) and, as such, can likewise be optimized through practical investments that reduce external and internal failure costs.
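For reference, the classic prevention-appraisal-failure breakdown behind those curves can be written as follows (a standard textbook formulation rather than anything specific to this article):

```latex
\text{Total Cost of Quality}
  = \underbrace{C_{\text{prevention}} + C_{\text{appraisal}}}_{\text{cost of conformance}}
  + \underbrace{C_{\text{internal failure}} + C_{\text{external failure}}}_{\text{cost of non-conformance}}
```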

Change Is Hard Though

With any sort of resolution or plans for improvement, there are challenges to overcome.

“Why is it so hard to find the motivation to change when we already know that we need to change? It’s painful to change. It’s uncomfortable. It’s difficult. And avoiding pain is innate. It comes standard in all human models.” Why Change Is So Difficult (and 9 Ways To Make It Easier), Brock Henry.

Here are just a few change avoidance excuses you might think of, or hear from others:

  • Inertia: Why should we change?  I don’t see the value.  We’ve always done it this way and it has worked well enough. Process is just bureaucracy – more overhead, more documentation/forms, and less fun. Besides, everyone has these problems
  • Fear of Failing: It will never work.  It hasn’t worked for others I know elsewhere so why would it be different here. Anyway, we’ve already tried to do this before
  • No Budget: We can’t afford it.  We are too busy, too lean/thin.  Self/Process improvement is a luxury.  The focus has to stay on getting something out the door so we can bring in more revenue and grow the company
  • Need Expertise: This is a big/complex problem set. We can’t do this by ourselves; we are just ‘rank and file’, not process people
  • All At Once: We need to start over with a ‘proper’ approach, one that is proven (eg: popular), ordained by experts/gurus as the ‘best way’, and approved/funded/planned by management.  It will be a Big Change after all

There is a tendency, because of these and other feelings, not to do anything until it becomes more painful to continue with the status quo than to change.  Then, of course, something has to be done, and FAST!  And when we rush, we all reach for the closest, silvery-est-looking bullets we can see.

“Change before you have to.” – Jack Welch

Instead, let’s endeavour to be as proactive as we can to improve our situation, while still getting the expected day-to-day (revenue producing) priorities checked off.  For that to work, we need an approach that is as natural as possible and gives real returns with relatively short turnaround, while avoiding the need for hard investment and the eyes of skeptics until the results are streaming in.

The Deming Cycle

We need to avoid the basic barriers so start-up is easiest, get some improvement initiative success stories, build up measurable return on investment (ROI) from the benefits, give the changes visibility, and parley that into buy-in for more change.

“Quality is everyone’s responsibility.” – W. Edwards Deming

Based on the scientific method (hypothesis–experiment–evaluation), the Deming Cycle, also known as PDCA (plan–do–check–act or plan–do–check–adjust), is a straightforward, iterative, four-step approach for the management and continuous improvement of processes and products.

PDCA Cycle
Image credit: Karn Bulsuk via Wikimedia Commons

“Rather than enter ‘analysis paralysis’ to get it perfect the first time, it is better to be approximately right than exactly wrong. With the improved knowledge, we may choose to refine or alter the goal (ideal state). Certainly, the PDCA approach can bring us closer to whatever goal we choose.” Toyota Kata, Mike Rother via Wikipedia

This uncomplicated continuous improvement approach will integrate well within an Agile organization or otherwise.

A group of collaborators drawn from the trenches of the typical functions of a project team can form an informal, self-organizing, Agile-like ‘PMO’.  Their focus will be to collect, propose, and pilot new ideas for improving the process framework of the organization.  Depending on the size of your organization or the scale or participation you seek out, this group could very well be your current team!

Let’s Make A(nother) List!

The initial kick-off discussion can be time-boxed to an hour, or two at most, and should provide some common themes for where to propose improvements.

“People who make resolutions are ten times more likely to change behaviour than those who have identical goals and motivation to change, but don’t make their list.” Dr. John C. Norcross, Global News.

Assemble your process improvement team, or ‘mini-PMO’, and consider some retrospective-type questions to start thinking about how projects are typically planned and executed within your organization:

  • What happened during this last release/project?  Are those things typical?
  • What surprises or challenges did you face on this last release/project?  How did you deal with them?
  • Do you feel you did or did not impact the quality of the software that was released?
  • When was the team at its best/most effective?
  • When were you at your best/most effective?
  • What could be done next time to improve:
    • The quality of the software?
    • The functioning of the team?
    • Your own contribution?
  • What should we, as a team:
    • Avoid doing in the future?
    • Change for next time?

And explore the ‘Why?’ for each.

Note: It’s easy to get caught up in talking about what is not working, what needs to be fixed, and what changes need to be made.  Including some questions in your discussion that will highlight the positives as well will help the team feel that they are building on something that is already working to some degree but could be better.

Note: If you are bereft of ideas, then maybe you are doing great and all your projects are successful – That would be wonderful.  Or, if you just need to shake up the idea generator a bit, you could take a look at standards like CMMI, SPICE, ISO, PMBOK, and similar as sources of ideas.

Drive Improvements with an Agile-like Engine

Now you can analyze your list of collected input for potential solutions.  Break the solutions into sensible steps (or stories).  Prioritize them by opportunity/timing, return value, difficulty, dependencies, etc. (a simple scoring sketch follows below).  Pilot them one at a time or in small batches.  Measure impact and collect feedback.  Adjust and re-pilot, or roll out to the organization.
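A minimal sketch of making that prioritization explicit is below; the ideas, scores and the simple value-per-effort ratio are entirely invented, and any weighted-scoring scheme the team agrees on would do just as well:

```python
# Candidate improvement ideas scored 1-5 on value and effort (illustrative numbers).
ideas = [
    {"idea": "Definition-of-done checklist for stories", "value": 4, "effort": 1},
    {"idea": "Automated smoke test on every build",       "value": 5, "effort": 3},
    {"idea": "Root cause analysis for escaped defects",   "value": 3, "effort": 2},
]

for i in ideas:
    i["score"] = i["value"] / i["effort"]   # crude value-per-effort ratio; quick wins float to the top

for i in sorted(ideas, key=lambda x: x["score"], reverse=True):
    print(f'{i["score"]:.1f}  {i["idea"]}')
```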

Following an Agile-like approach with our Deming/PDCA Cycle, illustrated at a high-level below, seems natural and should integrate relatively seamlessly with whatever flavour of development methodology you are practicing.

Agile Process Improvement
Image credit: adapted from Agile Process Improvement – Sprint Change Method, wibas Team

In every list of ideas for improvements there will be some “Quick Wins” that you can put at the front of the backlog and some bigger/tougher/more expensive ideas that you can leave until later.

Starting the first few iterations with the Quick Wins shouldn’t require much effort to get buy-in or to implement.  And, when implemented, they can pave the way for early benefits, success stories and generating that initial ROI to ‘fund’ the next round of improvements.

Finished the Easy Stuff?

Your initial list of improvements will no doubt take a while to work through, and it will likely spawn more ideas along the way.  Remember, each idea you implement successfully is a step along the road to accomplishing your process improvement goals.

And, anytime you want to dive a little deeper, you can try collecting input on some of the bigger questions of (business) life, such as:

  • What is success for your project/organization/business? Why?
  • What is your value to your customers?
  • What makes you different/special to your customers?
  • What are the impediments to your being #1 in your market?

After a while, you will have a living representation of your full set of processes.  And, you will know they are right for you.  At least until your next idea…

Conclusion

“When teams take ownership of outcomes, they will always do what it takes to ensure a project succeeds. With this combination of attitude and culture, the entire organization—and the customer—wins.” PMO 2.0: Rebooting the IT Project Management Office, Tony McClain

Process should exist to provide a supporting framework to individuals and teams so they can get things done, while helping ensure a standard of quality in doing so; not to restrict people from being valuable/productive.

Listen, Learn,
Take, Try
Don’t Follow…
Own Your Approach

Take a look and see what ideas you have for optimizing your investment in the processes you already have.  You may already be ‘spending’ more than you have to.

 


Tending to Your Tester Garden: Growing a Great Test Team

Creating a great test team is a lot like gardening: you need to find the right seeds and make sure to plant them in an environment where they will germinate, grow, and thrive. But what seeds do you look for, and how do you create the right conditions for them to develop into strong plants?

First, we need to ask ourselves: What actually makes someone a good tester? What qualities are we looking for, and how can we help testers grow those qualities? And can all the qualities we’re looking for actually be developed? Maybe some qualities are traits we’re born with and cannot be trained, whereas others are skills that can be acquired through practice. Here are ten qualities that I think you should consider looking for in a tester:

  • Curiosity: Testing is a continuous quest for knowledge, and testers need to have the drive to explore and learn. Curiosity can perhaps not be trained, but it can most certainly be trained out of people. We are often told to follow the rules, when instead we should be rewarded for being curious.
  • Focus: Testers need to be able to stay focused and not get distracted, or bugs might be overlooked. But can too much focus also be a bad thing? Testers need to get sidetracked sometimes, and use their creativity to go exploring. Focus is a skill that can be trained and supported by time management techniques; some people also have a natural predisposition to focus.
  • Observation: Testers must have great observational skills – it’s our observations that make us valuable. Focus can either support observation or counteract it. Observation can be trained, for example, by playing certain games.
  • Abstraction: Abstraction is about seeing the core problem, simplifying it, and rationalizing. Often, the core functionality is hidden, or obscured, by irrelevant details, which testers need to see beyond in order to test what really matters. The ability to make abstractions is closely related to analytical and logical thinking, and is one of the more important tools in a good tester’s toolbox. I believe abstraction can be trained, though some people have more of a natural aptitude for it. Train it by having the whole team work together to review a problem and discuss how it can be simplified.
  • Empathy: Empathy is about being able to put yourself in someone else’s position and understand their feelings, but it mustn’t be confused with sympathy. Sympathy means that you yourself feel something about somebody else’s feelings. Testers should empathize, but not sympathize: they need empathy to put themselves in the users’ position and gain a user perspective on what is being tested. Can we learn empathy? Some argue that we can train our ability to feel empathy, within certain limits; the key is to be aware. Roleplaying can help, and there are methods that actors use to understand empathy better.
  • Communication: The information we gather as testers is worthless if we cannot communicate it. Communication is certainly one of the most important qualities for a tester, and it can be trained.
  • Courage: Having courage includes having integrity. As testers, we need the courage to say “No” and stop a release even when the stakes are high. The consequences can be significant, and not only to us personally; it takes courage to stand up for what you believe is right. Courage is a trait that can be encouraged, but it can’t be trained. We need to make sure testers are empowered and have the right support to be courageous.
  • Perseverance: Sometimes testing really is a struggle. Maybe the deliveries are late, quality is low, testing is being cut back, or the application and the tools are complex and hard to use. But as testers, we cannot give up, and we cannot choose the path of least resistance. We might not be able to train perseverance, but we can at least train testers to understand why it matters.
  • Passion: Passion is closely related to courage and perseverance. Can you be courageous and persistent without being passionate? Testers have to care. Passion cannot be trained, but it can be drawn out of people who didn’t know they had it in them. It can, and definitely should, be encouraged and amplified. Passion also tends to be contagious.
  • Technology aptitude: Some people insist that testers need to know how to code – and that certainly is an advantage – but what all testers really need to have is technology aptitude: a strong natural interest and ability to learn and understand technology. This can’t be trained, but it can be encouraged and maybe even triggered by providing an interesting work environment and stimulating tasks.

What do you think makes a good tester? What qualities do you value? You also need to think about whether you want to sow seeds and invest the time and energy it takes to grow them into plants yourself, or whether you want to spend a little extra on already-cultivated plants that will require less attention.

Whether or not a quality you value can be trained needs to affect how you select your seeds and plants. If you really think a trait is important, you should understand that any tester you hire must have it from day one; you can’t necessarily expect to be able to train it. Finding the seeds and plants you want is not enough to create a thriving garden, though. A thriving garden requires a diverse mix of plants that help each other grow. As gardeners, we need to find symbiotic plants and take care to plant them close together.

Having found and planted the plants you want certainly doesn’t mean your gardening work is over. Your tester garden needs continuous fertilizing, watering – and weeding. Weeding is a tough necessity that requires care and persistence to keep your garden productive.

Being a gardener is hard work, but reaping the fruits of all the hard labour makes the effort worthwhile.
