Reading UX Testing Reports
Quick Tip!
Understanding a UX report can sometimes be tricky – don’t be afraid to reach out to a UX Team member or staff for help understanding the data, feedback, and recommendations and how they translate into tasks for the project team. An example testing report can be found here.
At the Digital Corps, when the UX Team completes a round of testing (whether it be quality assurance testing or usability testing), they need to share those results with the rest of the project team and staff. Testing reports are the best way to relay information effectively and thoroughly. These reports will share what was found from each round of testing and will include recommendations to improve the project.
Quality Assurance Testing
The first section of the report includes spreadsheets with quality assurance testing results. Typically, the UX Team member will share the results of QA testing as soon as it is finished, but they are often included in a testing report as well. The spreadsheet will outline the interactions tested, who tested them, whether there were any issues, and additional feedback. Any issues found during QA testing are usually resolved before usability testing begins.
Usability Testing
This section of the report outlines testing done with participants. Usability testing involves a UX Team member leading participants through scenario-based tasks with the product and observing these interactions. Observations, verbal feedback, and other metrics are then used to write the report.
Task Success Ratings
The first section of the Usability Testing report outlines the Task Success Ratings. One of the following ratings is applied to the outcome of each task attempted by the participant:
- 1 = a “successful” task
- 0.5 = a “partially successful” task
- 0 = a “failed” task
For each task, the report shows the average success rate across all participants. A well-made product should have a 100% overall task success rate. Any task with less than a 100% success rating should be revisited to ensure the product functions correctly and is easily understood by users. Lower percentages will typically be accompanied by an explanation of the issues or questions participants encountered.
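For illustration, the average success rate for a single task is just the mean of the participants’ ratings. A minimal sketch with made-up ratings:

```python
# Hypothetical ratings for one task: 1 = success, 0.5 = partial, 0 = failure
ratings = [1, 1, 0.5, 1, 0]

success_rate = sum(ratings) / len(ratings)
print(f"Task success rate: {success_rate:.0%}")  # -> 70%
```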
Lostness Index
Lostness is a metric used to determine how lost a participant gets when testing a product. It takes the minimum number of pages/screens that a participant should visit to complete a task, and compares it to the number actually visited. The final score will range from 0 to 1, with an ideal score of 0. Higher scores indicate a participant is more lost using the product. These results will be displayed in a bar graph, typically with a lostness average for each task. The team should evaluate any task that has a lostness index closer to 1 to ensure that the product is easy to navigate and understand.
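The report presents only the resulting scores, but for reference, lostness is commonly computed with Smith’s (1996) formula, which combines the total number of page visits (S), the number of unique pages visited (N), and the minimum number of pages required for the task (R). A minimal sketch, assuming that standard formula and hypothetical numbers:

```python
from math import sqrt

def lostness(total_visits: int, unique_pages: int, minimum_pages: int) -> float:
    """Smith's lostness measure: 0 means perfect navigation, near 1 means very lost."""
    s, n, r = total_visits, unique_pages, minimum_pages
    return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# Hypothetical participant: the task needs 3 pages, but they made 6 visits to 5 unique pages
print(round(lostness(total_visits=6, unique_pages=5, minimum_pages=3), 2))  # -> 0.43
```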
System Usability Scale
The System Usability Scale (SUS) is a questionnaire given to participants after all tasks are completed. It contains ten statements that allow participants to give feedback on the usability of the product. The statements are as follows:
- I think that I would like to use this system frequently.
- I found the system unnecessarily complex.
- I thought the system was easy to use.
- I think that I would need the support of a technical person to be able to use this system.
- I found the various functions in this system were well integrated.
- I thought there was too much inconsistency in this system.
- I would imagine that most people would learn to use this system very quickly.
- I found the system very cumbersome to use.
- I felt very confident using the system.
- I needed to learn a lot of things before I could get going with this system.
Each statement is answered on a 1-5 Likert scale, with 1 being “strongly disagree” and 5 being “strongly agree.” The UX Team member will use the scores from participants to calculate a final System Usability Scale score. The final score is out of 100, but it is not a percentage. The following scale is used to determine the meaning of the score:
- Above 80.3 is an A: users love your website and will recommend it.
- Around 68 is a C: you’re doing OK, but improvements could be made.
- 51 or under is an F: make usability a priority now and fix it fast.
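For reference, the 0-100 score follows the standard SUS scoring rules: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5; per-participant scores are then averaged. A minimal sketch with hypothetical responses from one participant:

```python
def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 SUS responses into the standard 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:   # items 1, 3, 5, 7, 9 are positively worded
            total += r - 1
        else:            # items 2, 4, 6, 8, 10 are negatively worded
            total += 5 - r
    return total * 2.5

# Hypothetical responses to the ten statements above
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0, an "A"
```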
Task by Task Ratings
This section of the report outlines each task and the results from each aspect of testing. Each task rating includes:
- A description of the task.
- Time on task: the average amount of time participants took to complete the task.
- Efficiency rate: the ratio of time spent on successfully completed tasks to the total time spent by all participants (see the sketch below).
- Lostness Index: the average lostness index for the task (see above for details on the lostness index).
- Post-task Difficulty Rating: the average score from all participants, rated on a 1-7 scale with 1 being “very easy” and 7 being “very difficult.”
Tasks with out-of-the-ordinary scores in any of these categories should be evaluated to see where participants had trouble and why.
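As an illustration of the efficiency rate, here is a minimal sketch with hypothetical timing data, assuming the common “overall relative efficiency” calculation (time spent on successful attempts divided by total time spent by all participants):

```python
# Hypothetical (succeeded, seconds) pairs for one task across five participants
attempts = [(True, 42), (True, 55), (False, 90), (True, 48), (False, 120)]

successful_time = sum(seconds for succeeded, seconds in attempts if succeeded)
total_time = sum(seconds for _, seconds in attempts)

print(f"Efficiency rate: {successful_time / total_time:.0%}")  # -> 41%
```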
Problem Statements
Problem statements are one of the most important parts of the UX report. Based on the data collected and observations made during testing, the UX Team member will identify problem areas and recommend how to address them. This information is presented in a table with the following fields:
- Description: An explanation of what the problem is, as encountered by the testing participant.
- Assessment: An explanation as to why they believe this is an issue, backed up by data presented within the report.
- Severity: A ranking of how severe an issue is.
  - 1 = superficial: does not need to be fixed unless time is available
  - 2 = minor: low priority
  - 3 = major: important and should have high priority
  - 4 = catastrophic: must be fixed immediately before release
- Design Defect: An explanation as to why this issue is a design defect. This is a technical description of exactly where the issue took place, what happened, etc.
- Screenshot: A screenshot or screen recording of the issue (if applicable).
- Effect on User: A description of how the participant was impacted by this problem. This could be how they felt because of the issue, or how it impacted their experience overall.
- Recommendations for Improvement: How the UX Team member recommends the issue be fixed.
- Participants who encountered this problem: The number of novice and expert participants who experienced the issue.
- Representative comments from participants: Quotes from testing participants that further show or explain their thoughts on the issue.
- Link to audio/video example: If testing was recorded, a link to the recording with a timestamp of when the issue occurred will be included.
The testing report will end with an overview of testing, which can be useful to share with clients or other personnel who do not need the granular results, but rather a high-level summary. Any outstanding issues or concerns will be noted here.