Jack D. Davis
Game / UX Designer | Team Leader | MBA
Usability Testing
Usability testing is the practice of evaluating a product or service by testing it with representative users. In the video game industry, this is known as playtesting, and it follows a very similar process.
Following are artifacts from a usability test I planned and facilitated while at The MathWorks. This session was conducted in person, but there are equally effective ways to run such a test 100% remotely using screen-sharing and collaboration software.
Test Objectives
First, I worked with the team to clearly define the test objectives. Why are we doing this? What does the team want to learn or evaluate? Are there specific concerns we are trying to allay or pains we are trying to uncover?
It's important to get this step right, because it informs all your actions that follow: the participants you recruit, the tasks you ask them to do, and how you analyze, interpret, and report the findings.
I also defined testable predictions (what we expected to happen) and the data we planned to collect.
Define Tasks
Next, I worked with the team to define the tasks we would ask participants to perform in the software under test. The tasks were crafted specifically to shed light on our test objectives.
This artifact is a high-level view of the tasks and is team-facing. Before the actual test, it is put into a script to be read to the participant, and is given a realistic context, such as, "imagine you're working with a colleague on a design problem and they've asked you to handle these tasks..."
Logistics
More details are defined in the test plan.
How will the team use the results? Are we validating a design? Looking for new usability problems? These questions stem from the test objectives.
Who should we recruit? What kind of participants are we looking for? How will we find them?
Software Under Test
This is a screenshot of the software under test: Simulink. Specifically, we were testing a new view in the tool for monitoring in-progress computer simulations and performing associated tasks.
Facilitate the Test
This is a generic image I found on Google, but it is quite similar to the usability testing labs at The MathWorks. Inside the room are the test facilitator (my role) and the participant. Behind the glass are observers from the project team. In my test, the observers were developers, documentation writers, quality engineers, and other stakeholders.
A key value of running a usability test is having members of the team observe, live! It is often more impactful than the analysis and report-out I provide after the test. Nothing can compare to watching a person experience your product first-hand, and seeing in real-time the issues they encounter.
I ask the observers to take notes as they watch, on sticky notes using a sharpie: one observation per sticky. We use these stickies in the next step...
Affinitize
After the session is over and the participant has gone, I gather with the observers for affinitization, a process of grouping similar issues and looking for themes.
We begin by putting all the stickies on the whiteboard and then, as a group, we start putting similar observations together, eliminating duplicates, and making categories.
When that is done, the team takes a few minutes to review all the themes, and then each member votes on what they think are the three most important findings from the study (these are the blue dots).
This isn't meant to be a final analysis or even a decision about what actions will be taken. It's more a method to engage the team and get them thinking critically about the test and come to some initial conclusions on the most important observations.
Report-Out
Here are excerpts from my report-out. I go back to the four test objectives, and cover each individually. "Here's what you wanted to learn, and here is what we found out."
This slide lists things that were "observed" and then the number of participants for which the observation was made. For example, looking at the first row, it says 3 out of 5 participants were not able to solve the design problem and failed the task. That's a big deal! This indicated some major issues with the UI design.
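As a side note, tallies like "3 of 5 participants" come from counting the distinct participants associated with each observation from the sticky notes. Here is a minimal sketch in Python of that kind of tally; the issue names and counts below are made-up placeholders, not the actual study data:

    from collections import defaultdict

    # Hypothetical sticky-note log: (participant_id, observed issue).
    # Values are illustrative placeholders, not the real study results.
    observations = [
        (1, "could not solve the design problem"),
        (2, "could not solve the design problem"),
        (4, "could not solve the design problem"),
        (1, "difficulty saving/loading a session"),
        (3, "difficulty saving/loading a session"),
        (5, "difficulty saving/loading a session"),
    ]
    TOTAL_PARTICIPANTS = 5

    # Count distinct participants per issue so duplicate stickies
    # from the same participant don't inflate the tally.
    seen = defaultdict(set)
    for participant, issue in observations:
        seen[issue].add(participant)

    for issue, participants in seen.items():
        print(f"{len(participants)} of {TOTAL_PARTICIPANTS} participants: {issue}")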
Report-Out
Same as the last slide, but I have overlaid some actions the team is planning to take to address the observations.
Report-Out
Observations related to test objective 3 (task 2).
Report-Out
Observations related to test objective 2 (task 3).
Interestingly, 3/5 participants experienced some difficulty with saving and loading a session, but all 5 were able to do it successfully. Should the team spend time improving this aspect of the UI, or focus on more critical efforts? At least now there is some observational data to help make such decisions.
Report-Out
Observations related to test objective 1 (task 4).
Report-Out
Same as the prior slide, but I have overlaid the team's conclusion about the observations: the new view was easily discoverable, and no improvements are planned related to this test objective.