Beyond the Usability Lab

Pet Supply Site Online Usability Study

by Donna Tedesco
Originally posted January 31, 2010

This study compared PETCO.com and PetSmart.com, two competing pet supply websites. The study design was identical for both websites: the instructions, tasks, and questions were all the same for both sites. The goal was to create a direct comparison between the two sites on usability metrics.

Both studies were created and administered in RelevantView. Each participant was randomly assigned to one study or the other (a between-subjects design) via a JavaScript function coded on the main page. The code is as follows:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
   "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<script>
   // Randomly assign each visitor to one of the two surveys.
   function Coin_Toss() {
      var coin = Math.random() * 2;   // random number in [0, 2)
      if (coin <= 1) {
         window.location = "http://surveys.relevantview.com/aslahysifozaeoo.asp";
      }
      else {
         window.location = "http://surveys.relevantview.com/aslohoxudadanex.asp";
      }
   }
</script>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<title>Web Usability Study</title>
</head>
<body onload="Coin_Toss()">
</body>
</html>

The Coin_Toss function, which runs "onload" of the page (immediately upon arriving at the URL), sends each participant to one or the other of the two links listed.
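Incidentally, the same assignment can be written more compactly. This one-line variant is a sketch of an equivalent, not the code that was actually deployed:

<script>
   // Compact equivalent: pick each survey with probability 1/2.
   window.location = (Math.random() < 0.5)
      ? "http://surveys.relevantview.com/aslahysifozaeoo.asp"
      : "http://surveys.relevantview.com/aslohoxudadanex.asp";
</script>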

The first page is an introduction to the study. It states that the study will take approximately 15 minutes and assures participants that they are not being evaluated. It also thanks them and emphasizes that their participation is important.

Intro page

The next page begins the starter questions. (Note that this study had no screener questions; anyone was eligible to participate.) The starter questions provided some background information on the participants and allowed us to parse the data by a few characteristics. The first asks about participants' familiarity with the website:

Familiarity with site question

The next question asked participants how many pets they have:

Number of pets question

If participants entered zero, they continued to the next part of the study. If they entered anything else, they were branched to another question asking what types of pets they have:

Types of pets question

These questions were asked to get a better sense of the participants' characteristics, but the answers could also be used in conjunction with the task data. For example, if a task pertaining to finding ferret food had a much higher completion rate than expected, the starter-question data might reveal that the majority of participants were very familiar with this website and also happened (by unlikely coincidence) to own a ferret.
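As a hypothetical illustration of that kind of analysis (the record fields and values below are invented for the example, not RelevantView's actual export format), the exported data could be segmented like this:

   // One record per participant, built from the exported survey data.
   var participants = [
      { familiarity: "Very familiar", ownsFerret: true,  task1Correct: true },
      { familiarity: "Not at all familiar", ownsFerret: false, task1Correct: false }
      // ...
   ];

   // Fraction of a group of participants who answered Task 1 correctly.
   function completionRate(records) {
      var correct = records.filter(function (p) { return p.task1Correct; });
      return records.length ? correct.length / records.length : 0;
   }

   var ferretOwners = participants.filter(function (p) { return p.ownsFerret; });
   console.log("Task 1 completion among ferret owners: " + completionRate(ferretOwners));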

When progressing, participants received an instructions page informing them that they would see two windows, and explaining how to answer a task question:

More instructions

Next, they received the task window at the top of the screen and the website (either PETCO or PetSmart, depending on how they were routed) in the lower portion of the screen. The task window included a progress indicator showing "Task X of 4" along with the task itself. When participants found the answer, they clicked the "Done with this task" button (we customized the name of this button):

Task and website windows

Because of the way RelevantView is designed, participants enter answers on a separate page, so they need to remember the answer momentarily. However, if they forget, or want to go back to the website, they can click the "View Task Again" button (another button name we customized):

Task answer

RelevantView is set up so that the task time measure includes only the time a participant spends looking at the website itself; more time is appended each time a participant clicks the "View Task Again" button and views the website again.
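In rough code terms, the timing behavior described above works like this (a sketch of the logic only, not RelevantView's actual implementation):

   var taskTime = 0;        // accumulated viewing time, in milliseconds
   var viewStart = null;

   function startViewing() {     // website window shown: at task start, or
      viewStart = Date.now();    // after a click on "View Task Again"
   }

   function stopViewing() {                  // participant leaves the website,
      taskTime += Date.now() - viewStart;    // e.g. "Done with this task"
   }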

Each task had four multiple-choice answers and a Give Up/Not Sure response. Participants were required to choose a response and click the right arrow button (>>) to move on.
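A minimal sketch of that required-response rule follows; the element names are hypothetical, and RelevantView enforces this itself:

   // Only advance past the answer page once a choice has been selected.
   function onNextClicked(form) {
      var chosen = form.querySelector('input[name="answer"]:checked');
      if (!chosen) {
         alert("Please choose an answer before continuing.");
         return false;   // stay on the current page
      }
      return true;       // advance to the next page
   }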

After each task, participants received a post-task question asking them to rate the ease of the task on a five-point scale, in the form of a multiple-choice question:

Task ease rating


If the participant entered a 3 or higher, they moved on to the next task. If the participant entered a 1 or 2, indicating difficulty with the task, they were branched to a follow-up open-ended question about their experience:

Open-ended question
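The branching rule is simple; sketched in code it looks like this (the function names are illustrative, since RelevantView handles this routing internally):

   function routeAfterEaseRating(rating) {
      if (rating >= 3) {
         showNextTaskPage();         // ratings of 3-5: straight to the next task
      } else {
         showOpenEndedFollowUp();    // ratings of 1-2: ask the follow-up first
      }
   }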

Upon pressing the arrow button to move forward, they encountered a page indicating that they were about to start a new task:

New task

Note that although it's not harmful to show a page in between tasks (aside from creating an extra click), the page existed because of a technical constraint of the RelevantView software: at the time the study was created, there was no way to progress from the last question directly to the two windows for the next task.

Upon hitting the next/arrow button, participants received Task 2:

Task 2

Note that for each new task, participants were brought back to the homepage; as part of the study setup, we specified the starting URL for each task. At the time of study setup, RelevantView did not provide a way to start participants where they last left off in the website (which we sometimes want for studies).
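As an illustration, the per-task starting URLs could be represented as a small configuration like the sketch below. The frame id and URLs are assumptions for the example (the PetSmart arm would point at petsmart.com instead):

   // Every task starts back at the homepage, matching the RelevantView
   // constraint noted above.
   var tasks = [
      { id: 1, startUrl: "http://www.petco.com/" },
      { id: 2, startUrl: "http://www.petco.com/" },
      { id: 3, startUrl: "http://www.petco.com/" },
      { id: 4, startUrl: "http://www.petco.com/" }
   ];

   function beginTask(taskNumber) {
      // Reload the website frame at the task's starting URL, rather than
      // wherever the participant last left off.
      document.getElementById("siteFrame").src = tasks[taskNumber - 1].startUrl;
   }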

The same progression of screens - task answer, post-task questions, etc. - continued for the rest of the tasks.

After the last task and post-task question, participants were routed to the System Usability Scale (SUS) questionnaire:

SUS questionnaire

As discussed in Section 3.8.3 of the book, we included a speed trap to catch any participants who were cheating or not paying attention. Question 6 asks, "To verify you are taking the survey, please select the middle choice, 3."
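For reference, here is how standard SUS scoring works, along with a simple trap check. This sketch assumes the trap item has been separated out first, leaving the ten standard responses on a 1-to-5 scale:

   // Standard SUS score (0-100) from ten 1-5 responses.
   // Odd-numbered items are positively worded, even-numbered negatively.
   function susScore(responses) {        // responses: array of 10 numbers, 1..5
      var sum = 0;
      for (var i = 0; i < 10; i++) {
         sum += (i % 2 === 0) ? responses[i] - 1    // items 1, 3, 5, 7, 9
                              : 5 - responses[i];   // items 2, 4, 6, 8, 10
      }
      return sum * 2.5;
   }

   // Speed-trap check: drop participants who did not pick the middle choice.
   function passedSpeedTrap(trapResponse) {
      return trapResponse === 3;
   }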

After answering the SUS questions and continuing, participants were asked how likely they were to use the website in the future:

Likelihood of future use

After answering this question, participants were taken to the final part of the study, which asked for demographic information, including gender and age group:

Gender

Age group

Finally, participants progressed to a simple thank-you screen. Because there was no participant incentive, there was no need to give additional information at the end of the study.

Thank you

There were four tasks total in the study, listed below. Tasks were presented in the same order for all participants and on both websites.

Task 1

(PETCO): You remember the ferret food you like to buy starts with a "Z". Find out the weights of the two bags of this food available for purchase.

(PetSmart): You remember the ferret food you like to buy starts with a "Z". Find out the weight of the bag of this food available for purchase.

Task 2

(PETCO): Find the street of a PETCO store that is in or nearest to Marietta, Georgia and offers grooming.

(PetSmart): Find the street of a PetSmart store that is in or nearest to Marietta, Georgia and offers grooming.

Task 3

(Both PETCO & PetSmart) Find out how much it costs for ground (regular) shipping of a [PETCO/PetSmart] gift card.

Task 4

(Both PETCO & PetSmart) Find out how many types of pine cat litter are available.

Note that the tasks were the same for both websites, but the correct answers were slightly different because the two sites' content differed.

Below is what the study setup looked like from our (the researchers') perspective in the RelevantView tool:

Setup

We were able to break each part of the study into a different "section" within the RelevantView tool. The properties were set so that both task answers and task times would be collected.


Comments? Contact Donna@BeyondTheUsabilityLab.com
