How to Conduct a Usability Test

Planning the Test

Planning a usability test includes:

  • Defining the goals for the study
  • Identifying the types of people to test with
  • Choosing which testing methods to use
  • Selecting which tasks to include
  • Deciding what questions to ask

These decisions go into the study’s discussion guide, which is the script of tasks and questions the facilitator will present to participants. Plan your discussion guide carefully to ensure that the tasks and questions are understandable, occur in a logical order, and help you to avoid bias.

Start by:

  • Identifying your testing goals—what are you trying to learn?
  • Writing task scenarios that support your users’ goals.
  • Recruiting people who are in your target audience.

Dealing with Bias
In an article on the subject, Jeff Sauro points out a number of sources of bias and some ways to address them. Here are some key ones:

  • Hawthorne Effect: Participants change their behavior because they know they’re being observed. In a participant’s words: “You and all those people online and in the next room are watching my every keystroke, I’m going to be more vigilant and determined than I ever would be to complete those tasks—I’ll even read the help text.”
  • Task-Selection Bias, a.k.a. “If you’ve asked me to do it, it must be able to be done”: Because you chose the task, participants assume it can be completed and will keep trying long after they would have given up on their own.
  • Social Desirability: Users generally tell you what they think you want to hear and are less likely to say disparaging things about people (seen and unseen) and products.
  • Honorariums: Paying people for their time makes sense, but if the honorarium the user receives is the sole motivator, the quality of the data can be questionable.
  • “If you’ve asked me about it, it must be important”: We often probe users on certain actions, selections, or things they’ve said. Often they don’t have an opinion or are unsure why they did something, and when we press for an answer anyway, the information we collect may not be meaningful.
  • Recency & Primacy Effects: The Recency Effect is the tendency to weigh recent events more heavily than earlier events. Conversely, when events that happened first are weighed more heavily, it’s called the Primacy Effect. 


To combat these order effects, use counterbalancing. If your users are testing two or more versions of a design, vary the order in which participants see the designs. If you’re giving users a list of choices, change the order of the choices between participants.
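To make the idea concrete, here is a minimal sketch in Python (the design names and participant IDs are hypothetical) of one way to rotate presentation order across participants so that no design consistently appears first or last:

    # Counterbalancing sketch: rotate presentation order across participants
    # so recency/primacy effects are spread evenly across designs.
    from itertools import permutations

    designs = ["Design A", "Design B", "Design C"]        # hypothetical designs
    participants = ["P1", "P2", "P3", "P4", "P5", "P6"]   # hypothetical participant IDs

    # Every possible presentation order of the designs.
    orders = list(permutations(designs))

    # Cycle through the orders, assigning the next one to each participant.
    schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

    for participant, order in schedule.items():
        print(participant, "->", " then ".join(order))

With only two designs this reduces to alternating A–B and B–A between participants, and the same approach works for varying the order of answer choices.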

Preparing and Conducting the Test
You’ll need to complete a number of items before starting your test:

  • Find participants. A rule of thumb is to recruit twice the number of participants you will need, to allow for scheduling conflicts and no-shows.
  • Plan your test’s length and its location or environment. Will you use Zoom and Maze, or another method? Test these tools beforehand and contact your participants to make sure they have the right software.
  • Define the tasks to test and create your prototypes. Practice these with people beforehand and refine as needed.
  • Create your test script.
  • Schedule the tests.
  • Secure compensation for participants if you are offering it.
  • Find your company’s consent form and have participants sign it.


Some things to remember while testing:

  • At the start of testing, introduce the purpose of the test and follow your script.
  • When conducting the tests, stay out of your participant’s way as much as possible.
  • If someone can’t complete a task that must be finished before they can move on to the next one, mark it as a ‘fail’ and then help them complete it so they can continue.


You’ll find more help about conducting the test in the ‘Creating the Discussion Guide’ section below.

What Tasks Should You Test?

  • Frequently used tasks.
  • Anything that is done in an unusual manner. For instance, using a new icon or introducing a ‘compare’ feature. 
  • Anything your client thinks is particularly important.
  • Anything critical. When we redesigned a call center system, the call center agents didn’t need to use the ‘bomb threat’ button often, if ever. But they needed to know where it was.