Usability testing is focused on measuring how well users can complete specific, standardised tasks, as well as what problems they encounter in doing so (Cooper et al., 2014). We carried out two formative user testing sessions on our prototype as part of the iterative process.
You can view the user tests that I carried out here.
What we needed to test
Because our product was very form- and information-heavy, we needed to ensure that the labelling of headings, forms and buttons made sense to the user.

Each step of the ticket purchase process had multiple areas containing different types of information. It was important that the information was grouped in a meaningful way and that each group was placed in the part of the page where the user would expect to find it.
First-time use and discoverability
Because a number of new features were being introduced in our prototype, we needed to ensure that users were given enough information to understand how each one worked, or, better still, to find out whether we could remove the instructions altogether.
Meeting our usability goals
The usability goals we set out were that the prototype should be efficient to use, easy to learn and easy to remember how to use. If the prototype failed in any one of these areas, then its usability as a whole would have failed.
How we did user testing
We each carried out user testing on two participants, testing a total of 12 subjects. We used screen recording software to do this. For my testing I used a Mac app called Silverback, which allowed me to record the screen while also capturing video and audio of the test participant.
Collaboratively analysing study findings
Although some conclusions are going to be obvious, a formal analysis is necessary to get to underlying causes and to extract the most value from the interviews (Kuniavsky, Goodman, & Moed, 2012).
We utilised Nielsen’s severity ratings system to prioritise fixing the various issues that came out of the testing. Severity ratings can be used to allocate the most resources to fixing the most serious problems, and can also provide a rough estimate of the need for additional usability efforts (Nielsen, 1995). To apply them, we each copied our test findings into a single document for each test. We then rated each issue from 0 to 4, with 4 being the most severe usability issue and 0 being a non-issue. We added up the ratings we each gave for each issue and took the mean; if the mean severity was 2 or higher, we would action it. We also factored in how frequently each issue occurred.
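The triage step above is simple enough to sketch in a few lines. This is a minimal illustration with hypothetical issue names and ratings, not our actual findings; it assumes six evaluators each scored every issue on Nielsen's 0–4 scale:

```python
# Sketch of the severity-rating triage: average each issue's ratings
# and action anything with a mean of 2 or higher.
# Issue names and scores below are hypothetical examples.
from statistics import mean

ACTION_THRESHOLD = 2  # mean severity at or above this gets fixed

# issue -> one 0-4 rating per evaluator
ratings = {
    "Unclear 'Continue' button label": [3, 4, 3, 3, 2, 3],
    "Seat map legend hard to find":    [2, 2, 1, 3, 2, 2],
    "Footer link colour too subtle":   [0, 1, 0, 0, 1, 0],
}

for issue, scores in ratings.items():
    avg = mean(scores)
    decision = "action" if avg >= ACTION_THRESHOLD else "defer"
    print(f"{issue}: mean {avg:.1f} -> {decision}")
```

In practice we also weighed frequency, so a low-severity issue that every participant hit could still be promoted above the threshold.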
You can view the full test findings for both user tests below:
Limitations of our usability tests
Although we gathered very important feedback in our usability tests, performance metrics were not established early on, so we could not draw any definitive conclusions about the effectiveness of the tasks. That said, I believe our second test was much more rigorous, as we kept to a script (view here) so that the findings that came out of our testing could be analysed accurately.
In the next post
In the next post I will outline the changes that were made as a result of the user testing findings, and give examples of how we improved the user experience at every step of the user journey in our final iteration of the prototype.
- Cooper, A., Reimann, R., Cronin, D., & Noessel, C. (2014). Chapter 2: Understanding the problem. In About face: The essentials of interaction design (4th ed.). Indianapolis, Ind.: Wiley.
- Nielsen, J. (1995). Severity ratings for usability problems. Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/how-to-rate-the-severity-of-usability-problems/
- Kuniavsky, M., Goodman, E., & Moed, A. (2012). Chapter 11: User testing. In Observing the user experience: A practitioner's guide to user research (2nd ed.). Amsterdam: Elsevier.