This story is entirely based on true events; it is not a ‘How to Do an Unmoderated Remote Usability Test’ article.
A usability test is a method used to improve a certain user flow, or to make it easier for users to find the information or interaction they are looking for while using a website or application.
Every website gives a lot of different information to its users. Some of it may be more important than the rest, some may be short, and some may be long. Sometimes the user may also need to take action as a result of the information provided.
Recently, we were discussing with our Product Manager how we could improve the way we show information and use modals on our platform.
On our platform, to display detailed information, we use modals or accordions.
Even though there was a certain pattern for using modals and accordions, we needed a better way to ensure consistency and improve the experience of users searching for information.
Examples of usage types below:
1. Accordions — If we need a quick way to show some information (or the information we want to give is more important than the rest), we use accordions:
2. Information Modals — On the other hand, if we are giving more detailed information (or the information we want to give is less used than the rest), we use modals:
3. Action Modals — We use modals for actions, too:
As you can see in the examples above, sometimes you need to click on the arrow to find the information you are looking for, and sometimes you need to open the 3-Dot Menu. For actions, such as cancelling a money transfer or approving a transaction, you also need to open the 3-Dot Menu.
Different types of interaction for the same kind of task, and mixing information and actions in the same menu, both created inconsistency and reduced the usability of the product. That is why we needed a common solution across all pages of our product: one that could cover most of these cases, ensure consistency, and improve the user experience.
After a benchmark and some exploration, we prepared 3 different solutions covering all of the possible usages I explained above (most important information, less used information, actions).
After building the prototypes in Figma, I imported them into Useberry and created a usability test.
Why did we choose to do an unmoderated usability test?
For each solution, we gave the participants 5 different tasks, making 15 tasks in total. Even though the tasks take very little time to complete (one participant finished a task in only 0.1 seconds), it would have taken a lot of time to run them moderated.
How did we understand if the participant got confused?
Useberry has a screen recording feature in Beta, and it works really well. We were able to watch the screen recordings of the tests. Of course, it is not the same as capturing the participant’s facial expressions or hearing their questions when they get confused, but you can still get insights from their clicks and task completion times.
How did we ask the questions on our mind?
When you use the Multiple Tasks test, you can also add multiple-choice and open-ended questions, and even Opinion Scale and Likert Scale questions.
Each method and tool has both pros and cons. To summarize briefly:
Pros
The Figma–Useberry integration is super simple. Once you have the prototypes in Figma, you can import them into Useberry with a single click.
Being anonymous reduces the stress on the participants.
You can test more than one variant in less time than with moderated testing.
You can specify the page or paths a participant must reach for a task to be considered completed.
You can easily get the results in a systematic and detailed way, so you don’t have to analyze everything by yourself.
You can watch the screen recordings of the test whenever you need.
You can easily filter the test results according to your needs, and also create segments to reuse later.
The Multiple Tasks template is very useful and comprehensive; it allows you to add various blocks (card sorting, preference test, single task, etc.) in the same test.
Cons
It is difficult to measure the thoughts and feelings of the participants while they perform the tasks, as you cannot simultaneously observe the participant’s facial expressions and what they are doing.
You gather less insight because the participant cannot ask questions when they are confused.
After starting the test, participants can leave it and switch to other work in between. Some participants completed a test in 2 seconds, while others took 29 minutes. 🙃 In such cases, the screen recordings may also be interrupted.
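As a side note, outliers like that 29-minute session are one reason to summarize completion times with a median rather than a mean. A minimal sketch, using made-up numbers (not our actual test data):

```python
import statistics

# Hypothetical completion times for one task, in seconds.
# The last value stands in for a participant who stepped away mid-test.
times = [2, 3, 4, 5, 6, 1740]  # 1740 s = 29 minutes

mean_time = statistics.mean(times)      # pulled far upward by the outlier
median_time = statistics.median(times)  # close to the typical participant

print(f"mean:   {mean_time:.1f} s")   # 293.3 s
print(f"median: {median_time:.1f} s") # 4.5 s
```

Reporting the median (or trimming obvious outliers before averaging) keeps one interrupted session from distorting the picture of how long a task really takes.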
The mental and environmental conditions of the participants while performing the test are also very important, but you do not have the chance to observe this.
Useberry threw an error in a couple of tests, so they looked like drop-offs: the participants could not continue even though they had started the test.
Unfortunately, being able to specify a page or path for a task to be considered completed is not always sufficient. The user may have arrived at that page accidentally or without realizing it, and you may not notice.
Since the participants are anonymous, we could not compare participants who use our product in different roles.
The test results do not reflect reality exactly, for the reasons I have listed above.
In conclusion, comparing the pros and cons, I would definitely prefer this method again if I need to test and analyze multiple variants of a short task in a short time.
Thank you for reading.