The discussion guide is the script that maps out the tasks participants will attempt and the questions you will ask during the test. It usually includes an introduction, directions for the participant, and a rough outline of what you will say. To build the guide, write out what you’ll say during the test, including the introduction and how you’ll present each task. Put participants at ease, and if you are recording the session, get their permission and tell them why. Once you’ve written the task instructions, pilot them on a few people to make sure they are clear and participants will understand what you are asking.
Things to say: “We’re testing the app, not you.” I always tell participants that I’m not the designer, just the tester, so they can’t hurt my feelings.
“As we go along, I’m going to ask you to think out loud, to tell me what’s going through your mind. This will help us understand how you think about issues.”
“If you have questions, just ask. I may not be able to answer them right away, since we’re interested in how people do when they don’t have someone sitting next to them to help, but I will try to answer any questions you still have when we’re done. And if you need to take a break at any point, just let me know.”
“With your permission, we’re going to record what happens on the screen and what you say. The recording will be used only to help us figure out how to improve the site, and it won’t be seen by anyone except the people working on the project. It also helps me, because I don’t have to take as many notes.”
When you describe each task, take care not to lead the user. Examples of leading questions are:
Would you rather use the old version or this improved version of the website?
Do you find this feature frustrating to use?
After the tasks are complete, have the participant reflect on the session and share their overall impression of what they have seen. Use this time to ask what they liked and disliked most, and what they thought of the test and the design. Give the participant a chance to raise anything else by asking: “Is there anything that we haven’t talked about that you think would be useful for us to know?”
Keep the following guidelines in mind:
Create short, realistic scenarios. If a scenario is long and complex, consider breaking it into two or three shorter scenarios.
Scenarios should tell a story that motivates your users. Why would they want to use your product in the first place?
Effective tasks contain detailed information, so it’s best to wrap the task in a brief story. For example, you could write: “You would like to book an inexpensive flight from Seattle to New York City. You can end up at any airport in New York. Find the cheapest flight leaving next Tuesday.”
Identify specific activities that represent typical tasks your participants would perform. These might be frequently used tasks, difficult tasks, critical tasks, and any tasks your client thinks are important.
Don’t lead the participant in the direction you want. Don’t introduce bias. Concentrate on observing how the participant completes their tasks.
Act like a therapist. Ask what they’re thinking, and try to figure out whether the design matches their expectations. Which parts are frustrating? Which are satisfying?
When participants ask questions, probe their expectations rather than answering directly: What do you think it does? What do you think you should do next? Where do you think that leads?
Don’t interfere if you can help it. Let people struggle, and possibly fail; watch what they do, regardless of what they say. As with interviews, don’t fear silence; let things unfold without assisting. One exception: if participants must complete a task to move on to the next one, help them after giving them enough time to finish on their own.
Some Factors to Evaluate
Learnability. How easy is it for users to accomplish basic tasks the first time they encounter a design?
Here’s a trick you can use if you are testing several tasks, say six. After the first three tasks, ask users to rate how easy the product is to use. Then ask them to rate the last three tasks once they have finished. If the two sets of ratings are similar, the product was probably easy to learn. If users rate the second set of tasks as easier, it likely took them a while to learn the design, and first-time ease of use may be a problem. (A minimal sketch of this comparison appears after this list.)
Efficiency. Once users have learned the design, how quickly can they perform tasks?
Memorability. When users return to the design after a period of not using it, how easily can they reestablish proficiency?
Error Management. How many errors do users make, how severe are they, and how easily can they recover from them?
Satisfaction. How pleasant is it to use the design?
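To make the learnability comparison above concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the 1-7 ease-of-use scale, the hypothetical participant ratings, and the one-point threshold for calling a difference “noticeable” are placeholders you would replace with your own study’s scale and data.

```python
# Minimal sketch of the mid-test vs. end-of-test rating comparison
# described under "Learnability". All data and thresholds below are
# hypothetical placeholders, not values from any real study.

from statistics import mean

# One pair per participant: (ease rating after the first 3 tasks,
# ease rating after the last 3 tasks), on an assumed 1-7 scale
# where 1 = very hard and 7 = very easy.
ratings = [
    (4, 6),
    (5, 6),
    (3, 5),
    (5, 5),
    (4, 6),
]

first_half = mean(pair[0] for pair in ratings)
second_half = mean(pair[1] for pair in ratings)

print(f"Mean ease after first three tasks: {first_half:.1f}")
print(f"Mean ease after last three tasks:  {second_half:.1f}")

# Interpretation, following the guideline above: similar means suggest
# the design was easy to learn from the start; a clearly higher second
# mean suggests users needed the early tasks to find their footing.
THRESHOLD = 1.0  # arbitrary cutoff chosen for this sketch

if second_half - first_half >= THRESHOLD:
    print("Later tasks were rated noticeably easier: the design "
          "probably took some learning.")
else:
    print("Ratings are similar: the design was probably easy to "
          "pick up on first use.")
```

With only a handful of participants, treat these numbers as a conversation starter rather than a statistic; the point of the comparison is to notice a learning effect, not to prove one.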