Mobile Application Testing – It’s Not All About the Devices

When designing our mobile application testing strategy, it is important to consider that it is not all about the devices – but it IS about all the devices.

The distinction comes from the fact that it is not possible to “brute force” test all the combinations of devices and operating systems.  And just not testing?  That is not a prudent option either.

Our test strategy needs to be intelligent and thoughtful, the result of investigation, analysis and consideration, designed to drive us towards ‘good enough’ quality for our (business) purposes at a specific point in time.

We have to be smart about it.

All Are Not Equal Under Test

Whether we are testing an app that is for public consumption or one that will only be used by the business users within our company, we need information about those users and their requirements. Having operational data or specific requirements pertaining to what the hardware and mobile operating systems must be, or are allowed to be, can go a long way toward prioritizing our mobile application testing.  Additionally, understanding or profiling our users and their usage patterns will also provide valuable input.

From this information, these requirements, we can derive criteria by which to prioritize the platforms we need to test our application upon.

Platform = specific device + a viable operating system for that device

To meet our test strategy goal above, we will need to perform the appropriately responsible amount of testing on an appropriately responsible number of platforms.  By merging our supported platform requirements with our user profiles and their usage patterns, we will be able to matrix sets or groups of supported platforms with amounts or degrees of testing effort.

Using an example of three groups, we might have a conceptual matrix like the following:

(Figure: mobile application testing – prioritizing mobile devices into groups)

Then, for each group, we might define the types and level of testing in each as:

(Figure: mobile application testing – level of testing per prioritized device group)

Note: We might also achieve further effort reductions, with little additional risk, by performing an analysis to identify “like” sub-groups of platforms that are so alike that we might reasonably select a single platform to test upon as the “sub-group representative”.
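As a concrete sketch of such a grouping, here is one way the platform matrix might be modelled in code. This is a minimal illustration only; all device and OS names below are hypothetical placeholders, not taken from the article:

```typescript
// Platform = specific device + a viable operating system for that device.
type Platform = { device: string; os: string };

interface PlatformGroup {
  priority: 1 | 2 | 3;        // group 1 receives the deepest testing effort
  platforms: Platform[];
  representative?: Platform;  // optional "sub-group representative" for "like" platforms
}

// Hypothetical grouping driven by user profiles and usage patterns.
const groups: PlatformGroup[] = [
  {
    priority: 1, // most-used platforms: full testing on every member
    platforms: [
      { device: "Flagship A", os: "OS 8.1" },
      { device: "Flagship B", os: "OS 8.0" },
    ],
  },
  {
    priority: 2, // "like" platforms: test only the representative
    platforms: [
      { device: "Mid-range C", os: "OS 7.1" },
      { device: "Mid-range D", os: "OS 7.1" },
    ],
    representative: { device: "Mid-range C", os: "OS 7.1" },
  },
];
```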

Process Multiplier Considerations

To manage the amount of test effort required for the project, we need to also be aware that the number of devices and operating systems can weigh heavily on some areas of our testing process.

For example, when isolating our defects, when they come back as "not-repro", or when re-testing them once they are fixed, do we:

  1. Check all the platforms to see where the bug is present?
  2. Just look at the platform where we found the bug?
  3. Check on one other “like” platform?
  4. Check on another “like” platform and an “unlike” platform?
  5. Or…?

The ‘gotcha’ is, of course, that the more platforms we cross-check the bug on, each time it comes past us, the more effort we have to put in.  But the fewer we check, the more risk we are taking.
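One way to make that trade-off explicit is to agree on a cross-check policy up front. The sketch below is purely illustrative, with the policy names mapping to the numbered options above:

```typescript
type CrossCheckPolicy = "all" | "found-only" | "one-like" | "like-and-unlike";

// Given a bug found on one platform, which platforms do we re-check it on?
// `like` holds platforms similar to the one where the bug was found,
// `unlike` holds dissimilar ones. Names and logic are illustrative only.
function platformsToRecheck(
  policy: CrossCheckPolicy,
  found: string,
  like: string[],
  unlike: string[],
  all: string[]
): string[] {
  switch (policy) {
    case "all":
      return all;                 // most effort, least risk
    case "found-only":
      return [found];             // least effort, most risk
    case "one-like":
      return [found, ...like.slice(0, 1)];
    case "like-and-unlike":
      return [found, ...like.slice(0, 1), ...unlike.slice(0, 1)];
  }
}
```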

Balancing Tools & Automation

Another example where large numbers of potential test platforms require thoughtful management of effort is when it comes to tools and automation.

Ideally, automation should be able to save us effort across the table above by helping us automate large chunks of tests that can then be run “auto-magically” across multiple platforms, even simultaneously.  However, device emulators and simulators are not the real deal and as such they will each have their own quirks and differences that will impact the test results.

For best results and risk mitigation, we should plan a balance of virtual and on-device testing, with a balance of automated and manual testing, using a mix of home-grown, free/open-source, and COTS tools.

Training For Mobile Application Testing

There is an ever-changing body of knowledge around mobile testing pertaining to the tricks, tools, design requirements, and gotchas for the platforms of today and yesterday.

We need to ensure that our teams are up to speed and ready for mobile application testing across a wide range of platforms, while keeping the test results clean and detailed enough to provide the developers the information they need to efficiently fix the issues.  And, of course, the more platforms we have to support, the larger the knowledge base each tester needs to absorb and maintain.

Our test strategy should reference what knowledge is expected to be captured, communicated and maintained outside of our heads, and how.

Conclusion

So it IS about all the devices, but not in the sense that we should try to test everything on as many platforms as we can get our hands on.

Because of the proliferation of devices and operating systems, our test strategy needs to have a “smart” approach for our mobile application testing to get the maximum return on investment while minimizing risk.


Mobile Testing – An Interview with Melissa Tondi

I recently interviewed Melissa Tondi on the topic of mobile testing. Melissa is the founder of Denver Mobile and Quality (DMAQ), Head of QA at ShopAtHome.com and has had significant experience working in, and speaking publicly about, mobile testing. We wanted to share that conversation with you.

Mobile Testing

Christin: What made you interested in mobile testing? How did you end up working in that space at all? 

Melissa: About 6 years ago, when I was head of a large global QA team, our company had the edict of mobile first. From our CTO down, we knew we needed to be ahead of the curve around mobile. I, and many people 5-6 years ago, didn’t have a great strategy to get started.

We put a lot of effort into making sure we didn’t repeat the type of mistakes that often come with introducing disruptors and new technologies into large organizations. The biggest problem we were trying to solve was: how to implement the type of testing that mobile needed without necessarily increasing staff while on a minimal budget. Most importantly, how do we become as efficient and productive as possible as quickly as possible, while introducing this new disruptive technology into an existing QA team?

This was a really big challenge and we had to solve it for global teams in a dozen or so offices. I was challenged to create a strategy that didn’t just work locally; it had to be able to scale up enterprise-wide. Once we did that, we deemed ourselves successful. I really enjoyed the challenge of doing that.

About 3 years ago, I was offered a position to manage a mobile-specific team. With the knowledge I had gained, I was able to build something more solid and scalable from the ground up. That challenge piqued my interest and I have been doing that ever since.

Creating a Competitive Edge

Christin: 6 years ago, you must have been still really pioneering mobile testing?

Melissa: It was something, it really was. We were a technology company and really needed to be on the cutting edge of what was happening and what our users were looking for. We had a user base of about 60 million users globally, primarily US- and North American-based, but we saw the trends of where mobile was taking us. We knew we needed to get in on it.

As far as being a pioneer, there were pioneers before me, but we were able to create a big case study on mobile. The early adopters on our platform had as seamless a mobile experience as could be had at that time.

Christin:  You must have created a competitive edge for that company. Do you feel other companies have caught up by now?

Melissa:  When I talk about this at a public forum or in a consultation setting, I divide companies into three categories. The first are the innovators: the bleeding-edge, cutting-edge technologists who would be in the top ten disruptors of technology, creating technology to be consumed by the second tier, which I call mainstream companies. This is mainly where most of us have worked. They are often early adopters, the first to consume the new technology. The third category are the companies that are slower adopters of technology.

We were really straddling the top-tier creators and the very early mainstream adopters, with mobile especially.

Fast forward from 6 years ago, when I first got my feet wet in mobile technology and testing, to 2-3 years ago: companies were adopting the mobile-first mandate or edict, but I’ve seen lower adoption rates in the mainstream companies than I would have thought. I am also surprised by the slower adoption of mobile testing and the building of testing strategies around it. I think we should be further ahead in the mainstream companies than where we are today, just based on conversations with co-workers and people in the industry.

Security, Performance, and Accessibility Testing

Christin:  One thing I am really curious about is security in mobile. Companies seem very willing to take a lot of risks around mobile testing. Where do you think we are at with that?  

Melissa: I agree with that. In the large organization I was running, around that timeframe or a year or two before, security testing from a functional standpoint was unheard of. We treated any security testing as something not in our domain of expertise. It was almost something we shied away from, and threw over the wall to the ethical hackers who were part of the company but had very little interaction with the agile team.

So we got a head start on the functional security testing. I really feel strongly about the organization OWASP. They publish a top ten vulnerabilities list and a testing guide, and several of those top ten are things we can bring into the functional testing team. So I am a big proponent of aligning with OWASP where applicable, bringing in some of that security testing that we shied away from in the past.

One of the keys we should really be paying attention to, from a mobile security testing standpoint, is understanding how PII (Personally Identifiable Information) flows from user input and how it actually posts to the backend.

We can bring a lot of emphasis to PII in the early rounds of testing, making sure it meets the criteria for any regulatory boards or any type of auditing perspective. And then, there is also the alignment with OWASP on techniques for how to actually test that data all the way through.

Christin: There is frequently a big gap between functional testing and other areas. I would say security is a big one, and performance testing. I am surprised that we, as testers, are so quick to dismiss those as not our area of responsibility.

Melissa: A week or so ago I did a presentation, “Shifting the Test Role Pendulum”. In it, I identify three major areas that we have thrown over the fence, for lack of a better term. Performance, security and accessibility testing are the three areas where I feel we do ourselves a bit of a disservice: we either don’t put emphasis on them in the functional testing team, or we simply don’t address them at all.

You are right, we have shied away from them, saying this isn’t our responsibility, it’s not my expertise, or I don’t know how to do performance testing. Performance testing is one of those ambiguous terms, a catchall phrase for how the user experience is going to be under load or in concurrency. There are a lot of tests that we can bring into the functional testing team and, for those companies practicing agile, into those agile teams.

Balancing Manual Testing vs. Automation

Christin: What do people ask you the most when you are talking about mobile testing at a conference or working at an organization? What is the top concern?

Melissa: I am torn between two. I think the biggest one is: what is the proper balance of testing on virtual vs. physical devices?

One camp says: physical devices all of the time, 100%, because that is what guarantees the user’s experience. This means securing and maintaining physical devices, and you also have a lot of manual testing. The other camp says: everything, or almost everything, is tested against simulators or emulators, using the SDKs on the platforms that the developers are using.

I think that is the biggest one, because you really do see camps that are one or the other. They are either saying 100% on physical devices or mostly all on virtual devices. Then I come in and say, “No, actually, there can be a nice balance of both. You don’t have to just do one or the other.”  It is more about designing the strategy so you understand what the important tests are going to be, and overlaying that with how, or where, you are going to test them.

Christin: Testing on real devices means more manual testing. Do you find organizations that do more test automation, in general, are more biased towards using simulators or emulators? And organizations that have more manual testers are more focused on real devices?

Melissa: I would say that’s a good way to categorize it. There are a lot of cool automated tools out there and, as technology advances, we will see more companies starting to invest in solutions around automation. I have made this statement many, many times: we really need to look at automation as making the exploratory or human-based testing more efficient, instead of trying to have a silver bullet solution.

In the Future

Christin: What is next for you, where do you see yourself in 5 years?  Do you see yourself in the mobile sphere, or are there any particular technologies you would like to have a chance to work on? 

Melissa: I am very interested in the artificial intelligence (AI) that is coming in, and 3D technologies; enhancing the user experience is something I’d love to get into.

I would hope that mobile is not as much of a disruptor in 5 years as it is now; that we embrace it and it becomes just another service offered within all of our functional testing teams. We would look at it as we would any type of technology: there is upfront education, training and skill development around it and, if you have a proper overall test strategy in place within your department, and infrastructure that supports scaling out technology disruptors, mobile just becomes another one of those service offerings.

I am pretty passionate, and I certainly love consulting with companies and working on their overall test strategy first. In 5 years, I see myself still advising and speaking on that same platform, and hopefully mobile becomes mainstream and maybe even an afterthought, because everyone will have adopted good strategies to begin with.

If a team or company has a good strategy in place to scale and adopt new technology, the actual technology they are adopting becomes secondary. The first and primary part is having good leadership infrastructure and a good balance of technical skills in place; then the technology comes in.

Christin: Thank you for your time, Melissa. It has been very interesting talking to you, and it has raised a lot of ideas in my head.

Melissa: No problem, my pleasure. I always enjoy talking to you and love what you guys are doing over at PQA.

See Melissa’s presentation, “Shifting the Test Role Pendulum”.


Improving Software Quality with Development Tours

Releasing software in the past was a QA’s dream: plenty of time to check every last corner of the product before eventually releasing to production.  With the move to Agile, and the speed at which we need to deliver software, this old-school model no longer works.  So what can your QA team do to ensure quality when you’re given very little time to test?  Certainly the first thing that comes to mind is a great CI (Continuous Integration) process with test automation.

The problem is that fast, reliable automation is very difficult to get working right.  I strongly agree that solid test automation needs to be in place; however, if we wanted to improve quality without automation, what could we do?  I believe a practice my team uses to flush out bugs is a great one, and I wanted to share it with you.

I call this practice “Development Tours”.  The idea is very simple.  When a developer finishes implementing a feature or bug fix, the QA pairs up with them and gets a tour of the feature or bug fix.  These tours have really helped us get things right even before the feature enters our CI pipeline, thus improving quality very early in the process.

Why do I call this practice “Development Tours”?  The name came to mind after recently reading the book Exploratory Software Testing by James Whittaker.  In this book, James outlines how to break up your software into tours.  For example, the Historic Tour of your software would include legacy features introduced in earlier versions.  Another sample tour area would be the Business Tour, which covers the primary business purpose of your software.  Breaking software into tour-able areas helps you key in on what needs to be tested and what areas might be more bug prone.

I thought to myself: wait a minute, we have these tour areas, but I don’t want to explore this code without a “tour guide”.  I want an experienced guide to help me understand the back alleys and pathways of the code.  This guide already exists on your team: it’s the developer who implemented the code.  How logical is that?

Let’s talk about the clear benefits of Development Tours (QA pairing with the developer):

  • The QA engineer gets to learn more technical aspects of the implementation.  Today’s QA is required to be more technical, so I love that I get to learn from the developer directly.
  • Developers also learn from QA by seeing the types of bad user scenarios QA uses to flush out bugs.  They learn new techniques around flushing out defects.  What better way to improve software quality than training the devs to avoid implementing bugs in the first place!
  • Review the actual purpose of the feature.  We should already have good Acceptance tests but this is a great time to review these together.
  • Discover newly uncharted use cases which could impact the user.  You’re bound to find use cases you didn’t know about when starting to implement the feature, even with planning done up front.
  • Gain an understanding of how well implemented a feature is.  Was it implemented line by line from the acceptance criteria, or did the developer really flesh out the edge cases which could cause issues?  I think this occurs a lot: developers stick to the acceptance criteria like it’s the final word, but at times things come up that need to be fixed, and should be fixed on the spot.
  • Walk through the automated tests created around this feature because of course our definition of done includes these tests.
  • Weeds out bugs that are a waste of time.  I’m talking about the very obvious bugs that can be found within seconds of testing a new feature.  I believe we want our QA to use their experience to flush out deeper issues, not waste time finding obvious bugs which should have been caught by a developer.  When we tour the feature together, we find and crush these obvious bugs quickly.  No need for a bug write-up, etc.

Looking at the points outlined above, it’s hard to argue that this simple practice doesn’t improve software quality.  I have also noticed, since starting, that developers are engaging more actively with QA.  I believe you will know this process is working when developers are actively coming to you asking to review features or talk about implementation details.

So, encourage your team to try this early integration development tour.  I think it’s a small change that will help your team improve the quality of your software.


Test Planning Game: “Mind Your Own Business”

You might find it familiar to be involved in, or responsible for, the planning and design of testing on a project where the business drivers around why/how the project is important, in the larger sense, are not entirely clear to the team.

To test this, try randomly springing one of these questions on your teammates (one-on-one in the hallway is best):

  • “So, who does this project help (in the real world)?”
  • “Why is this project important to the company?”
  • “Do your assignments add value to the business?  Why?”

It can be revealing.

As someone who is often an outsider to a new client’s business, I feel one of the most significant first steps is to rapidly integrate with the mindset of the client’s teams and to “get immersed in their world”.

A key conversation to have is regarding “What is success?” on a given project, for a particular system, in that organization, with those users, at this time.

Input to this query would be gathered from:

  • Customer Representatives
  • Technical “Peers” (architecture, business analysts, developers, testers, support)
  • Management (project management, departmental management)
  • Business Organization (strategic planning, corporate management)

In particular, it is always important to understand from the Business Organization:

  • What motivates them (Goals)
  • What they are afraid of (Risks)
  • What will really make a positive difference for them (ROI)
  • What they are NOT willing or able to do/pay for (Constraints)

And of course, projects are routinely constrained in terms of resources, budget, and schedule.  And, when there isn’t enough of one of these, or something else, to let us do whatever we want when we want, we are dealing with ‘scarcity’.  In the face of scarcity, choices, often tough choices, will need to be made.

Being prepared with answers to the above lets us facilitate these decisions while planning and prioritizing our testing activities accordingly.

What’s It All For Anyway?

Testing is always working for someone, either directly or indirectly.  Stakeholders provide input that we use to plan and design testing activities.  Stakeholders are represented by Testing in the course of us performing our verification and validation activities.  Stakeholders consume the data that Testing collects and analyzes so they can make informed decisions.

Testing is Part of a Larger Whole

Of the stakeholder groups, Testing has day-to-day interaction with Technical Peers, and frequent contact with Management and Customer Representatives.  But the Business Organization is often at such a distance that it is heard from rarely, and perhaps only when there are big decisions to hand down; decisions in response to challenges with scarcity that may or may not directly stem from your project.

When these tough choices need to be made, Testing needs to have already been making visible the value being added and enabling informed decision-making.

Let’s Play a Game

The following role-play exercise has been adapted from my course Test Management: Leading Your Team To Success.  Typically, this activity would be conducted in pairs, but you should feel free to mix it up as best suits your team.

Step 1: The first person assumes the role of a senior individual in the Business Organization and:

  1. Chooses an industry and/or business type
  2. Selects a type of system and a type of project
  3. Decides on two (2) dominant personality characteristics for themselves to role-play in terms of priorities, preferences, eccentricities, etc.
  4. Chooses one (1) common project challenge and/or constraint
  5. Chooses one (1) unusual project challenge and/or constraint
  6. Chooses two (2) project challenges and/or constraints that will be kept secret until Step 4.

Step 2: The second person assumes the role of the Test Planner for the above project, and interviews the first person, navigating the challenges of understanding the first person’s “world” and their personality, to determine/elicit:

  1. What is the project?
  2. What is “success” to them?
  3. What is “quality” to them?
  4. When would testing be considered to be “done”?
  5. What are the project challenges and/or constraints?

Step 3: Based on the information the second person gathers during Step 2, they will describe to the first person an applicable test approach for the project.

Step 4: The first person will then (constructively) critique the proposed test approach based on:

  1. The points or “facts” from Step 1 and Step 2
  2. The two (2) secret project challenges and/or constraints from Step 1 (reveal them now!)

Step 5: Both parties proceed to discuss and negotiate what adjustments and/or compromises can be made to close the gaps in agreement. (Be sure to note down impacts to the definitions of “success” and “quality” from Step 1)

Step 6: Considering all the preceding steps, both parties summarize:

  1. The revised definition for “success” and “quality” for the project
  2. The Business Organization’s appetite for risk on the project

A test planning game or exercise like the above will help hone your elicitation and negotiation skills for when you next talk to the Business Organization, or any other stakeholder.

Conclusion

Quality can mean different things to different people, largely because we hold different aspects of quality to be more or less important than others, given our context.

It would be a limited view indeed to simply consider the views of a single stakeholder group when defining “quality” for a project.  It would likewise be limiting if the input and context of any significant stakeholder group was left out of that definition.

Remember to remember (to mind) your own Business Organization when investigating what testing can, specifically, do to add value and bring about project success.

 


Make Testing Your Competitive Advantage

At first glance, testing might only look like a cost, but testing can actually help you reduce risk, get your product to market faster and contribute to a considerably improved customer experience.

Testing is often viewed as a necessary evil, an additional cost to the project that slows everything and everyone down. I would disagree, and say that investing in testing actually saves cost and time, and that smart testing can be an organization’s competitive advantage in today’s fast-paced society.

Testing provides information about product quality and product risk that stakeholders need in order to feel confident that they are making the right decisions, at the right time.

Deciding when to release a product to market means weighing the cost of a delay against the risk and consequences of an application failure. As illustrated by the recent software failure experienced by American Airlines, inadequate testing not only puts your image and brand at risk, but it can also pose a threat to life and safety. American Airlines uses an application that provides the pilots with flight plans, but, after an update, the application would unexpectedly crash, causing several dozen planes to be grounded. This is just one example of a safety-critical problem that proper testing could have prevented.

By shifting left, and focusing on quality from day one, testing can help shorten the overall development cycle, enabling faster time to market. Early testing can find potential issues as early as the requirements, saving development from wasting time building the wrong thing the wrong way and potentially having to rebuild major parts of the product.

Emphasizing testing as part of your organizational culture and processes lets you achieve a judicious balance between business drivers and user expectations. Acting as user advocates, testers aim to understand what motivates and engages your customers, and that understanding is a crucial part of delivering a successful customer experience.

Smart testing lets your organization work more efficiently and effectively, while also ensuring a better experience for your customers. Smart testing could be what sets you apart.


Performance and Culture – Contrived vs. Legitimate Quality

I read with interest and appreciation the comments on the View from the Q blog about what not to do with respect to Performance Management.

I have encountered reporting situations where the ultimate result is that more energy and effort is spent on creating a convincing report than on attending to the task at hand.

While this applies to any management initiative, I will use quality as an example.

Imagine that you are an independent used car salesman, and you have a car of questionable and uncertain capabilities.  It may or may not be a lemon.

If you were to use this vehicle for a cross-country trip taken by your family members, consider the extent to which you would go to ensure safety and reliability.  No cost would be spared and no effort would be too great to protect your loved ones from harm.  I refer to this as “legitimate quality”.

Now, being a proprietor of a business, you have to sell this used car before it consumes your inventory and becomes a burdensome cost.  In order to convince a prospective customer of its value, a series of convincing checks and inspections are made and passed with full check marks.  Additional enhancements are made to give the impression of superior quality (paint touch-ups, adjusted odometer, over-inflation of tires so that “kicking the tires” returns a firm response).  I refer to this as “contrived quality”.

Executives love their dashboards, but the culture will determine how they are used.  If the culture is punitive, where “green” indicators are rewarded and “yellow” or “red” situations incite hostility and rebuke, then the implicit message is to conceal problems until they can be assigned elsewhere, deflecting blame and accountability.  Comparative numbers will always be positive because baselines and references will be skewed to always reflect a “good news” story.

If the culture seeks and rewards legitimate quality and the identification and correction of root causes, then problems will be sought and recognized.  Integrity will be championed and whistleblowers will not fear for their jobs or reputations, but appreciated as contributors to quality improvement.

The best data manipulators will eventually be caught and called out.  People have an inherent pride in their work, and when they are constantly being asked to overlook or conceal findings and details, they will become jaded and cynical.  In such situations, the “rats and weasels” will thrive and prosper, tainting the overall culture to adopt similar traits.  “Eagles” will soar elsewhere.

The cultural differences between organizations that penalize and reprimand employees for reporting bad news, and those that embrace the opportunity for improvement, are revealed over time by order-of-magnitude differences in product and service quality and in employee engagement.

What is our solution?  Repel the contrived approach and always strive toward legitimacy and integrity.  Work as if your family’s safety was dependent on your efforts and decisions.


Don’t Wait, Take – A Smart Approach to Testing

It is common for testing to be given constraints around schedule, budget, team members/skills, and even tools.

So, when you are asked to step out of this box and propose what you think is needed for the next project, a few responses might come to mind:

[Relief] Finally! I am going to get more people, better tools, and we are going to do this thing right!

[Defensive] What we did last time worked fine, didn’t it?  Didn’t it?

[Cynical] You are just going to cut whatever number I give you by 30% while holding me to my original effectiveness targets.

[Disbelief] Is this a trick?  You really want to know?

Or, depending on the company and its philosophy toward quality, you may experience the opposite: on every project you are being asked to improve, to be better than last time.

In either case, you are being asked to stand and deliver: increase testing capability and coverage, reduce turnaround time, find those important defects earlier, etc.

“Good Enough” Testing?

“When do we stop testing?”
“When quality is good enough.”
“So…When do we stop testing?”

Let’s assume that “good enough” quality can be interpreted as “sufficiently valuable or fit-for-use”.

The project wants to reach this goal for its system or product as quickly and as cost efficiently as possible.  How can testing help?

Let’s start with:

  • Define quantifiable quality criteria for the project
  • Capture the risk tolerance for the project
  • Provide flexibility around skills/techniques/tools
  • Avoid a one-size-fits-all approach to testing by providing options

Also, we can work at getting stakeholders to believe that:

  • An upfront investment in testing for a project can actually pay off within that project lifecycle
  • A larger test effort doesn’t automatically bring an appreciable benefit for the added cost
  • Cutting or squeezing testing does not ultimately save time or money
  • A critical project is a good place to try new things (after all, aren’t they all critical to someone?)

Option A – I Choose You?

Given that each project is unique and has its own needs in terms of quality, we need to design a custom approach to testing by:

  1. Establishing a stakeholder-agreed definition of quality for your project
  2. Identifying and analyzing your risks, mapping testing activities where possible
  3. Proposing a business case for each test strategy option (e.g., what do I get for that price?)

When you create your options, start by considering the basic three: A-Light (quick and dirty), B-Balanced (thoughtful), and C-Heavy (overkill?).

(Figure: Approach to Testing – options table)

And, in each option:

  1. Map the risks to be addressed/evaluated by testing to test activities (and vice versa)
  2. Estimate effort, cost, schedule, future value, confidence/effectiveness, etc
  3. Solicit and incorporate feedback

But most of all, think about where the “smart” is in each of your options…
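As a sketch of what “option as business case” might look like when written down, here is one hypothetical shape for the data; every field name and number below is an assumption for illustration, chosen only to make options comparable:

```typescript
interface TestStrategyOption {
  name: string;                          // e.g. "A-Light", "B-Balanced", "C-Heavy"
  risksAddressed: string[];              // risks mapped to test activities (and vice versa)
  effortPersonDays: number;
  cost: number;
  scheduleWeeks: number;
  confidence: "low" | "medium" | "high"; // expected effectiveness
}

// Illustrative numbers only: the point is comparability across options.
const options: TestStrategyOption[] = [
  {
    name: "A-Light",
    risksAddressed: ["critical-path regressions"],
    effortPersonDays: 10,
    cost: 8_000,
    scheduleWeeks: 1,
    confidence: "low",
  },
  {
    name: "B-Balanced",
    risksAddressed: ["critical-path regressions", "core workflows", "data integrity"],
    effortPersonDays: 35,
    cost: 28_000,
    scheduleWeeks: 3,
    confidence: "medium",
  },
];
```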

Two Sides to Being Smart

“Are you being smart with me?”
“Rather that than the alternative…”
“You’re doing it again!”

One of the crucial components to being “smart” in a constrained situation is to be able to correctly decide or select which things are “must-haves”, which are “should-haves”, and which are “nice-to-haves”.

How you sort these things depends a lot on your appetite for risk and what you are willing to pay.  To make the trade-offs clear to stakeholders, we can create options for testing as business cases and place them on the Total Cost of Quality curve.

(Figure: Approach to Testing – Total Cost of Quality curve)

Another dimension to “smart” is innovating in order to shift the quality curve to the right, thereby maintaining or reducing the overall cost while increasing quality.

(Figure: Approach to Testing – shifting the quality curve)

Examples:

  • A test harness to drive a trillion+ transactions through an underlying sub-system where the data is generated on the fly from a set of transaction schema templates and situational test instruction sets (How else would you do it?  Through the GUI?  Ha!)
  • Smoke test automation creates the seed data that the manual testers will use for further system testing
  • Strategically managing the regression test suite of a mature product such that the overall volume of tests to be executed grows only marginally for each release (or not at all)
  • Happy-path data entry is automated for each form via a “quick loader” button on the screen such that the tester can use the data as-is or manually tweak it as needed before moving to the next screen in the test scenario
  • Model-driven testing technique (+ tool) is used in complex areas of the system to extract the minimal set of test scenarios needed for maximum coverage

With “smart options” based on quality criteria, risks, and project constraints in front of stakeholders, useful discussion is possible.  Choices can be made with awareness of the implications in terms of trade-offs or opportunity costs.

Note: In creating your options, you are really trying to find the “right-fit” option. Therefore, any initial option can be “looted” for pieces to be merged with others on the way to creating that final, agreed, right-fit approach.

The Smart-Fit is the Right-Fit

It is a very human tendency not to change until it becomes more painful to continue with the status quo than to finally make the needed change.  Then of course, something has to be done, and fast.

“Change before you have to.” – Jack Welch

Don’t wait to be asked to change.  Instead, let’s be steadily evolving and ready to take the next step at each opportunity, asked for or not, by:

  • Assembling each testing approach option as a business case
  • Adding “smart” by drawing from your backlog of testing improvement ideas
  • Adjusting and converging to the chosen approach

Et voilà!  Today’s testing approach will meet the needs of today’s project, and in a more efficient and effective way than yesterday’s – The smart(est)-fit for the right now.


Improving the User Experience in a Fast-Forward World

Recently, after experiencing a “service not available” error while checking my email, my first instinct was to go scouring the web, including sites such as downrightnow.com, to check whether others were experiencing similar issues and if there was an active outage. Even though the outage was no more than 15-20 minutes, I found myself feeling frustrated and wanting to immediately switch to a more reliable email provider. I started thinking that my life depended on getting access to my mail. As someone who has been on the receiving end of these complaints from users because of slowness or outage issues, I found it strange how I and my expectations have changed over the years. In a world running on fast-forward, with so many things competing for our attention, delays while doing anything show up as frustrated tweets, blogs and Facebook posts. In this article, I focus on the need for “The Best User Experience” and some overlooked, but easy, fixes for improving the user experience.

Putting numbers on user frustration, a paper by Amazon in 2007 showed that for every 100 millisecond (ms) increase in page load time for Amazon.com, there was a 1% decrease in sales [3]. Google and Yahoo have done similar studies which show similar results, whereby ad revenues decrease significantly with increased page load times [4]. A study by Akamai in 2009 [5] showed that 47% of users expected a page to load in under 2 seconds and, if it took longer than 3 seconds, 57% would abandon the site. In addition, one criterion that Google uses for ranking websites is page load time [6]. More recently, the website for the “Patient Protection and Affordable Care Act” (a.k.a. ObamaCare) was all over the news due to its inability to handle more than a few hundred users at a time during the initial enrollment period. Since the deadline had to be met to ensure the law’s success by enrolling as many users as possible, users were encouraged to visit the website during off hours and be put in an email queue whereby they would be alerted when the site had fewer visitors so that they could come back and complete the enrollment process. Also, alternative avenues to enroll users were set up, such as allowing users to send in paper applications, setting up call centers and so on. For a federal government with deep pockets, all the extra spending is borne by the taxpayer, but for companies for whom online earnings are a major revenue channel, decreased traffic can be a major embarrassment and a big hit on revenue.

Performance improvements can be broadly classified into two main areas, performance improvements on the backend and performance improvements on the front end. I come from a background where most of my projects were related to performance improvements on the backend, whether it be capturing SQL processing times, profiling code and capturing method times for each backend functional component or studying disk access and comparing it with observed times by the user on the front end. Front end user time was considered negligible, and not a cause for concern for a performance specialist. Front end engineering was then unknown, while today, front end development teams are ubiquitous at almost all companies with significant online revenue streams as they have become extremely critical in building out the user experience. They serve as the “first impression” for a potential client.

In my experience, many times, even after backend response times were shown to be reasonable, the user experience still seemed sluggish, as pages took time to load and users were kept waiting. I used to go around asking my development and infrastructure teams why the user experience seemed okay at best, and if there was a way to quantify where the time was being spent. Nobody was able to provide an adequate answer until I came across an article from Steve Souders, Google’s Lead Performance Engineer, on the impact of front end processing on response time [7]. What his research has been able to show, remarkably, is that 80-90% of response time is spent on front end performance, and that a significant amount of that time is spent on downloading HTML, parsing the page, downloading components on the page, parsing CSS and JavaScript (JS) and, finally, page rendering. Google has also developed Google Page Speed [2] for quantifying this time. This turned out to be an eye-opener as I checked out sites that I’d worked with previously, and all showed time delays on the user interface (UI) side.

Webopedia [1] defines UI as the junction between a user and a computer program. For a web browser (desktop and mobile), the UI includes the web page (HTML, JS and CSS components) which the user interacts with through a pointer device (touch or click). For native (built-in) apps, it’s the touch interface with the buttons being the primary method of interaction for the user. Native applications are applications designed for a particular Operating System (OS).

Modern browsers act like mini OSs and, when I use the term mini OS, I am not claiming that it will replace the OS on your computer or all its functions, but trying to highlight complexity. We are trying to do more and more within a browser, resulting in using the browser as a central point for everything whether it be bookmarks, default username/password, plugins for applications such as MP3 players, video downloaders, etc. Instead of installing separate applications for everything, we try to do everything through browser plugins.

At a high level, a browser consists of seven components [8]:

  1. User Interface: Includes the address bar, menu options, etc.
  2. Browser Engine: Serves as an interface to query the rendering engine and control it
  3. Rendering Engine: Responsible for displaying the content (painting the page)
  4. Networking: Used for making requests across the network, such as HTTP requests
  5. UI Backend: Contains the interface to draw widgets, such as combo boxes, buttons, windows, etc.
  6. JavaScript (JS) Interpreter: Parses and executes JavaScript
  7. Data Storage: Serves as a persistence layer storing data, a.k.a. the browser’s own light database where values such as cookies are stored (the actual data repository is the hard disk)

The following is a diagram adapted from html5rocks.com [8], which shows the described components of a browser:

(Figure: browser components)

As can be seen from the diagram, even though a browser has seven components, most of the attention is typically focused on performance improvements for the networking part by making sure that all request and response times on the server side meet specified Service Level Agreements (SLAs). While extremely important, there are six other components that impact the user experience. Facebook [9] found that almost half their page response times are spent on JS interpreting and page rendering, not to mention downloading of all components. Different browsers have different rendering and JS engines which impact performance. More recently, browsers have focused on improving JS engine performance by focusing on features such as multi-stage Just In Time compiling (JIT) and Ahead Of Time (AOT) compiling.

When a user sends a request through the UI, an HTTP request is sent through the networking layer, and the networking layer responds with an HTML document. The rendering engine receives the reply from the networking layer, and starts parsing the HTML document, converting the different elements to Document Object Model (DOM) nodes in a tree. The engine parses the style data, both in external CSS files and in the style elements. Using this information, a render tree is constructed. After the render tree is constructed, it goes through a layout process where each node is given the exact coordinates at which it should appear on the screen and then, the final step is the painting of the page, in which the render tree is traversed and each node is painted using the UI backend layer. The rendering engine is single threaded and only the network processes are multi-threaded. Network operations are performed by several parallel threads and the number of parallel connections can be set; different browsers (user agents) have different allowances for parallel downloads (ranging from two to eight). A chart of the maximum parallel downloads per browser can be found in [25]. For an excellent detailed review of how browsers work, see [8].

Most web developers and infrastructure engineers rest on the assumption that the slowness observed on the front end would be negated with client side caching, as most users would have all their components downloaded during the first visit to their site. The problem is one of perception, because once a user sees the initial page load as slow, they don’t seem to want to come back. As mentioned previously, for any company where their online presence serves as their main revenue channel, this is a huge performance and revenue hit.

Now that we have established the need for performance improvements on the front end, following are a few thoughts to try to improve front end response times:

  1. Leverage browser caching by using expire headers [10, 11]: Downloading resources every single time that a webpage is visited is slow and expensive in terms of time and bandwidth. If these resources will be used again, it would be a waste to download the resource content once more if it hasn’t changed much.

While this would seem obvious, a lot of static content is not set up with expire headers. Expire headers will help the browser to determine if a resource file has to be downloaded anew from the server or obtained from the browser cache. While this will only help if the user has already visited the website at least once, subsequent page loads will be faster. As of July 2014, most modern websites have an average of 90 downloads per page with a page size of 1829 KB (1.8 MB). As the size and number keep increasing, even if the resources are downloaded within fractions of a second, the numbers add up, and it is absolutely essential to reduce the number of HTTP requests for each page.

Expire headers are set for files that don’t change often, namely images, JavaScript and style sheets. Of these, style sheets are changed the most often, while JavaScript and images are changed less frequently. Expire headers can be set in the htaccess (HyperText Access) file found at the root of the website.
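As a minimal sketch of the same idea outside of htaccess, here is how far-future caching headers might be set in a small Node server. The server, the paths and the one-year lifetime are illustrative assumptions, not the article’s configuration:

```typescript
import http from "node:http";

http.createServer((req, res) => {
  // Static assets that rarely change get a far-future lifetime (one year, in
  // seconds), so the browser can reuse its cached copy instead of re-downloading.
  if (req.url?.match(/\.(png|jpg|gif|js|css)$/)) {
    res.setHeader("Cache-Control", "public, max-age=31536000");
    res.setHeader("Expires", new Date(Date.now() + 31536000 * 1000).toUTCString());
  }
  res.end("asset body would be served here");
}).listen(8080);
```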

  2. Using a Content Delivery Network (CDN) [12, 13, 14]: CDNs allow for wider distribution of static content, pushing it onto networks closer to the user. This allows for faster downloading of content, thereby reducing page response times. In addition, CDNs improve global availability and reduce bandwidth, as you don’t have to host the content, reducing the need for you to store static content to deliver to the end user’s browser. If the user is a mobile user, there is even more uncertainty, as a lot depends on the speed of the wireless connection.

Having said that, if all your users are local to the area where your site is hosted, then CDNs may not be of much use to you. Cost is always a prohibitive factor but, if your site has users around the world, using a CDN is very beneficial in reducing page times. Examples of popular CDNs are Akamai, Limelight Networks, Amazon CloudFront and EdgeCast [26]. CloudFlare and BootstrapCDN (to name a few) are free CDN providers [26].

  3. Avoiding redirects or minimizing redirects [18]: Redirects on a page lead to additional HTTP requests/responses and delays in loading the web page and, therefore, round-trip latency. This happens when the page has been moved to a different location, if a different protocol is used (for example, changing HTTP to HTTPS) or if you want to direct a user based on their geolocation, language or device type. It is essential to minimize the use of redirects so that delays are minimal.

Have the application update URL references as soon as the location of a resource changes, so as to avoid a costly redirect. Ensuring that redirects happen on the server side, instead of having the client redirect, will avoid the extra HTTP call. Also, avoid multiple redirects from different domains. For example, if you wish to search something on Google, the only options that show up are based on region/country for the most part (google.ca, google.cn, google.fr, etc.). They don’t use googlesearch.com even though their main business is search. Google has become synonymous with search; it has become so common to mean search that both Oxford and Merriam-Webster added Google to their dictionaries in 2006 [32]. The reason companies normally allow redirects from multiple namespaces is to reserve the namespace and prevent others from taking it and, also, to give users flexibility. But it leads to more costs in the end, in terms of buying and maintaining the additional namespace(s), and can lead to users confusing your business name with your actual business.
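For illustration, here is a minimal Node sketch of doing the redirect once, server-side, with a permanent 301 so the browser can cache the new location (all URLs are placeholders):

```typescript
import http from "node:http";

http.createServer((req, res) => {
  if (req.url === "/old-page") {
    // One permanent redirect; the browser remembers it on later visits.
    res.writeHead(301, { Location: "https://www.example.com/new-page" });
    res.end();
    return;
  }
  res.end("OK");
}).listen(8080);
```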

  4. Compress your website [19]: Compress all static content on the website using Gzip, as the browser will uncompress the content by itself.  Gzip reduces the size of your website files just as you compress files on your hard drive using a zip program.  When a user visits your website, the server responds with gzipped content and, as soon as the browser receives it, the browser will automatically unzip the files to display them to the user.  Gzip can be set in the htaccess file found at the root of the website.  Because different web servers have different settings for this, I would recommend Patrick Sexton’s article [27] to find out the setting for the web server that you are using in order to properly set up Gzip.
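The behaviour can also be sketched in application code: serve gzipped bytes only when the client’s Accept-Encoding header advertises support. This is a minimal Node illustration of the mechanism, not a substitute for configuring it in your web server:

```typescript
import http from "node:http";
import zlib from "node:zlib";

http.createServer((req, res) => {
  const body = "<html><body>page content</body></html>";
  const acceptEncoding = String(req.headers["accept-encoding"] ?? "");
  if (acceptEncoding.includes("gzip")) {
    // The browser sees Content-Encoding: gzip and unzips transparently.
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "gzip" });
    res.end(zlib.gzipSync(body));
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(body);
  }
}).listen(8080);
```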
  5. Reduce the number of CSS and JS files and minify code [18, 21]: Try to combine your JS and CSS files into fewer files, or else the number of HTTP requests/responses for each file will add to the overhead and, in effect, to response times.  Even though the payload of the combined file is larger, it will all be downloaded in one go.

Since size matters, remove unnecessary characters from your CSS and JS files, as this can be done without changing any functionality. The characters that can be removed are white spaces, new line characters and comments. These aid readability but are not needed for execution. This process is called minification of code [28].

Tools such as JSMin [29] and JavaScript Minifier [30] can be used for minifying JavaScript, and CSS Minifier [31] for CSS files.
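As a tiny before-and-after illustration of what these tools do (the function is hypothetical, and the minified line is representative output rather than from any specific tool):

```typescript
// Before minification: readable, with comments and whitespace.
function totalPrice(items: { price: number; qty: number }[]): number {
  // Sum price * quantity across all line items.
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// After minification (illustrative): same behaviour, far fewer bytes.
// function totalPrice(t){return t.reduce((r,e)=>r+e.price*e.qty,0)}
```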

  6. Specify image size and character response type in your HTTP header [15, 22]: If you don’t specify the size of each image, the browser will first paint the HTML and then re-lay out the page as each image is downloaded, repainting the page image by image to make everything fit.

Also, in that same vein, specifying the character set/MIME type that your website uses in the HTTP response will avoid the browser having to figure it out as it parses the content of your HTML because, otherwise, the browser is left with the task of comparing character types to figure this out. This can be done by making sure that your server adds the Content-Type header field to all responses.
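A minimal sketch combining both points: the server declares the MIME type and charset up front, and the markup declares image dimensions so layout can be computed before the image arrives (all values are illustrative):

```typescript
import http from "node:http";

http.createServer((_req, res) => {
  // Declaring the character set here spares the browser from sniffing it during parsing.
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  // Explicit width/height means no repaint when the image finally downloads.
  res.end('<html><body><img src="/logo.png" width="120" height="60" alt="logo"></body></html>');
}).listen(8080);
```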

  7. Move CSS to the top and JS to the bottom [23]: From our discussion of how browsers work, we know that the render tree is built using the CSS, and then the page is painted. Until the CSS is parsed, the page won’t be painted. In order to give a faster response to the user, so that the layout of the page happens as soon as possible, it is very helpful to move CSS to the head, at the top of the page.

As soon as the browser sees the <script> tag, it stops loading the rest of the document and will wait until the script is loaded, parsed and executed. Since JS scripts are blocking and are loaded synchronously on most browsers, if possible, move them to the bottom of the page. This way, other components of the page are downloaded and JS will be parsed last. Also, use the DEFER attribute when possible with the script. It is now supported on almost all browsers as it has become a part of the specification since HTML 4.01. If other scripts are dependent on a particular JS file, it will be executed only after the entire page is parsed.

  8. Persistent Connections [16]: If a connection has to be opened and closed for each and every file downloaded for a webpage, starting with the HTML, it adds to the response time, as web pages contain a lot of files. In order to save the time of a TCP handshake on each request, it is extremely useful to use one connection for the entire conversation. For this purpose, HTTP Keep-Alive needs to be enabled both on the browser and the server to allow for persistent connections. All browsers use persistent connections by default these days; however, on the server side, this is not necessarily always enabled, so the web server may close connections as soon as the first request is complete. You can instead set up Keep-Alive in the server’s htaccess file for persistent connections.
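From the client side, the effect of connection reuse can be sketched with Node’s keep-alive agent: several requests share one TCP connection instead of paying a handshake per file (host and paths are placeholders):

```typescript
import http from "node:http";

// One agent with keepAlive reuses sockets across requests.
const agent = new http.Agent({ keepAlive: true });

for (const path of ["/style.css", "/app.js", "/logo.png"]) {
  http.get({ host: "www.example.com", path, agent }, (res) => {
    res.resume(); // drain the response; we only care about connection reuse here
  });
}
```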
  9. CSS Sprites [17]: CSS sprites have been around for several years now, but the concept is well worth repeating because of the tremendous performance benefits it brings. A CSS sprite is essentially one large image file which contains all the images for your page. So when a user visits your web page, only that one image file is downloaded, no matter how many images are displayed on the screen. This saves both bandwidth and time, as there is just one HTTP request/response instead of multiple HTTP requests/responses for each image.

All images are hidden by default and, to display a particular image on the screen, all that needs to be done is to reference the image from the CSS sprite file and the position where it is to be displayed.

  10. Predictive browsing/Pre-browsing [24]: Steve Souders from Google has done a ton of research on the use of standard link prefetching, dns-prefetching and prerendering for various links with the assumption that these will be the next target for the user.  The drawback is that this feature will not work on all browser versions and if the user chooses not to go to the prefetched link, it’s a waste of resources.

In prefetching, the browser assumes that a user will go to a certain page, fetches that page and, while in the case of a dns-prefetch, Domain Name Service (DNS) information for the page is collected and stored. In the case of pre-render, the browser renders the page and stores this information within the browser cache so that as soon as the user selects the link, the user is immediately presented with the page.

  • Using Ajax: Web 2.0 techniques should be leveraged as much as possible. Using Ajax reduces page response times because simple content changes can be applied to the already-loaded page through a small JSON request/response instead of reloading the entire HTML, as in the sketch below.
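A minimal sketch using XMLHttpRequest, with a hypothetical /api/cart-count endpoint that returns JSON such as {"count": 3}:

    // Update a single element in place instead of reloading the page
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/cart-count');
    xhr.onload = function () {
      if (xhr.status === 200) {
        var data = JSON.parse(xhr.responseText);
        document.getElementById('cart-count').textContent = data.count;
      }
    };
    xhr.send();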

This article covers only some of the easier ways to improve the user experience with minimal changes to your site. Yahoo and Google have done tremendous amounts of research in this area, and I highly recommend working through their suggestions for improving your website’s performance; see [33] and [34].

[1] http://www.webopedia.com/TERM/U/user_interface.html
[2] https://developers.google.com/speed/pagespeed/insights/
[3] http://ai.stanford.edu/~ronnyk/2009controlledExperimentsOnTheWebSurvey.pdf and http://sites.google.com/site/glinden/Home/StanfordDataMining.2006-11-29.ppt
[4] http://glinden.blogspot.ca/2006/11/marissa-mayer-at-web-20.html and http://www.slideshare.net/stoyan/dont-make-me-wait-or-building-highperformance-web-applications
[5] http://www.akamai.com/html/about/press/releases/2009/press_091409.html
[6] http://googlewebmastercentral.blogspot.ca/2010/04/using-site-speed-in-web-search-ranking.html and http://searchengineland.com/google-now-counts-site-speed-as-ranking-factor-39708
[7] http://www.stevesouders.com/blog/2012/02/10/the-performance-golden-rule/
[8] http://www.html5rocks.com/en/tutorials/internals/howbrowserswork/
[9] https://www.facebook.com/notes/facebook-engineering/making-facebook-2x-faster/307069903919
[10] http://www.websiteoptimization.com/speed/tweak/average-web-page/
[11] http://gtmetrix.com/add-expires-headers.html
[12] http://en.wikipedia.org/wiki/Content_delivery_network
[13] http://www.webperformancetoday.com/2013/02/22/aaron-peters-turbobytes-why-all-cdns-are-not-created-equal-podcast/
[14] http://www.webperformancetoday.com/2013/06/12/11-faqs-content-delivery-networks-cdn-web-performance/
[15] http://www.feedthebot.com/pagespeed/image-dimensions.html
[16] http://www.feedthebot.com/pagespeed/keep-alive.html
[17] http://css-tricks.com/css-sprites/
[18] http://stevesouders.com/hpws/rules.php
[19] https://developers.google.com/speed/docs/insights/EnableCompression
[20] http://en.wikipedia.org/wiki/HTTP_compression
[21] https://developer.yahoo.com/performance/rules.html#minify
[22] http://gtmetrix.com/specify-a-character-set-early.html
[23] https://developer.yahoo.com/performance/rules.html#js_bottom
[24] http://www.stevesouders.com/blog/2013/11/07/prebrowsing/
[25] http://metadataconsulting.blogspot.ca/2013/03/browser-max-parallel-resource-requests.html
[26] http://en.wikipedia.org/wiki/Content_delivery_network
[27] http://www.feedthebot.com/pagespeed/enable-compression.html
[28] http://en.wikipedia.org/wiki/Minification_%28programming%29
[29] http://crockford.com/javascript/jsmin
[30] http://javascript-minifier.com/
[31] http://cssminifier.com/
[32] http://en.wikipedia.org/wiki/Google_%28verb%29
[33] https://developer.yahoo.com/performance/rules.html
[34] https://developers.google.com/speed/docs/insights/rules?csw=1


Accessibility Testing: Four Tips for Doing It Right

If you are feeling a little overwhelmed by the extra effort involved in delivering accessible software, don’t be dismayed. Here are some helpful tips to keep in mind.

1. Embed Accessibility Testing

The purpose of the first round of guideline verification is to document defects and create a backlog of issues that need to be addressed. By embedding the accessibility testers in the project team, you will have the benefit of seeing the burndown of their work on a daily basis, and you’ll get that information to the team in the most efficient way possible. The quicker the information flow, the more time to resolve the issues.

2. A Bug Is a Bug

The defects that come as a result of the guideline verification should be triaged the same way as all other issues your team encounters. There is a tendency to treat accessibility issues differently, but resist the urge—a bug is a bug.

If there is a sound reason to separate them for reporting purposes, and if you have the ability to configure your defect management tool, create a category titled Accessibility and include an option to designate the severity, which could be correlated with the impact on Level A, AA, or AAA compliance.

3. Managing Defects

All defects should have a priority classification. If an accessibility defect is not serious enough to affect your level of conformance, fixing it can wait.

Depending on how many accessibility defects are reported during guideline verification, your product owner may want the ability to run a separate sprint to focus on accessibility. If the accessibility defects are prolific, consider handling them the same way your organization handles technical debt.

Once your teams understand that conformant code is required and how to implement coding practices that support accessibility, consider including the verification as part of your “done” definition.

4. The Accessibility Statement

The best way to tell your users you have incorporated accessibility features is an accessibility statement. The statement exists not just to declare the level of conformance to which your site has been verified, but also to let users know that you’re committed to providing a great experience for everyone.

During initial verification, your product may not conform to its intended level. The accessibility statement also allows you to be transparent about what you’re doing to address known defects.

You might find the World Wide Web Consortium’s Web Content Accessibility Guidelines and an accessibility statement generator site helpful as you prepare your own statement. Keep in mind that the statement should include the following (a minimal sketch follows the list):

  • The level of conformance to which it was tested (Level A, AA, AAA, or other)
  • The level of conformance it currently meets
  • The exceptions (defects) preventing it from conforming to its intended level
  • Contact information or steps to report accessibility issues
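A minimal sketch of such a statement, with a hypothetical product name and contact address:

    <section id="accessibility-statement">
      <h2>Accessibility Statement</h2>
      <p>ExampleApp has been tested against WCAG 2.0 Level AA.</p>
      <p>It currently conforms to Level A. Known exceptions: some form
         fields lack programmatic labels; fixes are scheduled.</p>
      <p>To report an accessibility issue, email
         accessibility@example.com.</p>
    </section>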

These tips will allow for an efficient and long-term accessibility testing initiative and result in a happy experience for all users.



Selling Accessibility Testing and a Plan to Get Started

During a sales meeting, a question on accessibility was asked. “How do we talk about the importance of accessibility testing without fear becoming the main motivation to act?”

I had to admit that in all my years running accessibility testing practices, fear was never an emotion I thought was elicited. Sure, there were stories of companies being sued over inaccessible sites, but that was a fleeting consideration where I worked.

To stay competitive, companies are releasing more frequently than ever before, and in the interest of time, sometimes good coding and testing practices are trumped by the desire for more features. To add yet another testing component without seeing its value could be viewed as a roadblock to delivery.

By showing the value of accessibility and having a plan in place to address those needs, you can demonstrate to employees that accessibility is about more than compliance; organizations that are proactive about accessibility will reap benefits in terms of a larger user base and goodwill within the community.

Getting Started

The World Wide Web Consortium (W3C) provides four principles and twelve Web Content Accessibility Guidelines that can be used to design and test for accessibility. Success criteria are listed under each guideline, and each criterion is labeled as Level A, AA, or AAA.

A software product can be considered minimally accessible, and therefore “conformant,” if it meets at least all Level A success criteria. Meeting all Level AA success criteria is more stringent, and meeting all criteria suggested by the W3C earns Level AAA conformance.

You’ll want to determine which level is appropriate for your organization and users, but here’s the gist:

  • Level A: A good place to start. This is great for organizations that already have a product in use and who want to establish a baseline for accessibility conformance.
  • Level AA: The next step. This level means that most people will be able to use your site or product in most situations. Many education and government agencies require this level.
  • Level AAA: The most difficult to achieve and maintain. In rare situations this level may be required, but the W3C makes it clear that it is not possible to satisfy all Level AAA success criteria for some content.

Testing the Guidelines

Now that you’ve determined which level of conformance you will seek, you’re ready to start the guideline verification process. Our teams use the W3C checklist to create and execute tests, first manually and then with a screen reader. There are many screen readers available, and we have used several of them in our testing.

In addition to reporting on the checklist, be prepared to provide recommendations for how to adjust content and presentation to meet the requirements of the guidelines.
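For illustration, a hypothetical recommendation against success criterion 1.1.1 (Non-text Content, Level A) might pair the failing markup with the suggested fix:

    <!-- Fails 1.1.1: a screen reader can only announce the file name -->
    <img src="chart-q3.png">

    <!-- Recommendation: add a text alternative that conveys the purpose -->
    <img src="chart-q3.png" alt="Chart: Q3 revenue grew 12% over Q2">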

Now that you know how to take the first steps in your accessibility journey, you have an understanding of the effort required to be conformant. I challenge us to remove the unknown and replace it with an attitude of user advocacy.
