
Usability Testing in 4 Simplified Steps


Are you intimidated by usability testing? Don't know where to start? Feel like it's too time-consuming or expensive?

Usability testing doesn't need to be a full-fledged psych experiment with a formal lab, a big team, and lots of time and money. In the real world, it can (and often should) be much lighter and faster than that.

Here are four manageable steps anyone can follow to run a free remote usability test. I'll include some tips to make sure you get actionable data.

1) Start with a clear goal


Figure out the question you're trying to answer with your test. Is there an important flow we need to make sure works well? Is there a new design we want to test out?

Based on that, pick 2 or 3 specific tasks to give the test participant. You can't just say, "Hey, look at this website and tell me what you think." We'll learn much more if we watch them try to accomplish something.

Here are a few examples of tasks we could give the participant:

Too vague: Explore this new feature and tell me what you think.

Too specific: Go to the search bar, type in blue sandals, select size 9, choose the first one, click "Add to Cart" and continue to the checkout.

Better: Buy a pair of shoes for under $40.
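If it helps to stay organized, you can jot the whole plan down before you start. Here's a minimal sketch of a test plan as a small Python structure (the field names and second task are made up for illustration, not any standard format):

  # A minimal usability test plan: one clear goal, a few open-ended tasks.
  # Field names here are illustrative placeholders, not a standard format.
  test_plan = {
      "goal": "Can a new visitor find and buy a product without help?",
      "tasks": [
          "Buy a pair of shoes for under $40.",
          "Find out how long shipping takes to your address.",
      ],
      "participants": 5,
      "minutes_per_session": 15,
  }

  for number, task in enumerate(test_plan["tasks"], start=1):
      print(f"Task {number}: {task}")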

2) Recruit participants


Recruiting test participants may seem daunting, but it doesn't need to be.

For starters, we only need 3 to 5 people. Getting more than that usually isn't worth it: the same issues keep showing up, so the extra sessions add little new data.
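If you want the math behind that, Nielsen and Landauer modeled it: the share of usability problems found by n testers is roughly 1 - (1 - L)^n, where L is the chance that a single tester hits a given problem (about 31% in their studies). Plugging in L = 0.31, three testers find about 1 - 0.69^3 ≈ 67% of the problems and five find about 84%, so each session after that adds less and less.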

Focus on finding representative people. This means people who look like our users and would have a reason to do the tasks we're testing. Stay away from people inside your own company and people you know in real life; they'll be too biased.

How do you find these people? The first place to look is your user base. It's an instant pool of potential participants who care about your product. Look at your customer support and social media channels, or ask people on the app itself. You could also think about where your potential users hang out online (forums, etc.) and see if they'll talk to you.

How do you get people to agree to do the test? A current user will likely be willing to do it for free if the test is quick (15 minutes or less). If you feel like you need to offer them a gift, consider things you could give them from your own company, like a free month. Or send each person a gift card—Amazon is good because all you need is an email address.

Once you've found participants, explain what the test is about and how long it will take. Set up time slots using Calendly and send them the release form if you're using one.

3) Perform the test


Preparation

  • Create a prototype and send it to the participant. If you're using a mockup, you can make it clickable with a tool like Marvel or InVision.
  • Write a script. This ensures we're giving the right information and reduces the chance of inconsistencies between tests (which are usually the test facilitator's fault).
  • Get recording software ready, and test video and audio before the session. You'll want to record the test so you can focus on what's happening and avoid having to furiously take notes. Something like Zoom or Google Meet is good enough.

Test day

  1. Welcome the participant and help them relax. You want to take some of the pressure off. Explain that you're not testing them; you're testing the site. If they make mistakes, it's the software's fault, not theirs; we're here to learn from their experience.
  2. Explain how the test will work. Ask them to try to think out loud as they perform each task. Explain that to ensure conditions are as real as possible, you won't be able to offer them any advice.
  3. Explain the real life scenario that would lead to them performing this task so they can get in the right mindset. Let them read the task out loud and begin.
  4. Remain neutral and silent as the participant takes the test. This is not about teaching them how to use the interface. You're there to listen and watch. Sometimes they may be critical or run into problems, but resist the urge to explain things or prompt them. If they ask you how to do something, reply with “What do you think?” or “I am interested in what you would do.”
  5. After each test, take a step back with the participant and ask, "How'd that go?" If you have specific questions, you can retrace their steps and ask them open-ended questions like, "Why did you decide to do that there?" or "What was going through your mind at this point?"
  6. Thank them sincerely. If you offered an incentive, explain how they'll receive it.

4) Analyze the results


Review the recording. Did the participant complete the task successfully and efficiently? If not, what stopped them? What were their key behaviors and comments?

Cross-reference and look for patterns across the different participants. Rank the issues, identify solutions, and determine the best course of action moving forward.
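You don't need anything fancy for this; a spreadsheet works, and so does a tiny script. Here's a minimal sketch in Python, assuming you've written one note per issue per session while reviewing the recordings (the severity scale, field names, and example issues are made up for illustration):

  from collections import Counter

  # One note per issue per session, taken while reviewing the recordings.
  # severity: 1 = cosmetic, 2 = slowed them down, 3 = blocked the task.
  observations = [
      {"participant": "P1", "issue": "missed the search bar", "severity": 3},
      {"participant": "P2", "issue": "missed the search bar", "severity": 3},
      {"participant": "P2", "issue": "unclear size labels", "severity": 2},
      {"participant": "P3", "issue": "missed the search bar", "severity": 2},
      {"participant": "P4", "issue": "unclear size labels", "severity": 1},
  ]

  # How often each issue came up, and the worst severity it reached.
  counts = Counter(note["issue"] for note in observations)
  worst = {}
  for note in observations:
      worst[note["issue"]] = max(worst.get(note["issue"], 0), note["severity"])

  # Rank: issues that are both frequent and severe float to the top.
  ranked = sorted(counts, key=lambda issue: (counts[issue], worst[issue]), reverse=True)
  for issue in ranked:
      print(f"{counts[issue]} of 5 sessions: {issue} (worst severity: {worst[issue]})")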

Simple is better than nothing

If you remember one thing about usability tests, make it this: doing any kind of usability testing is always better than doing none, even if it doesn't follow the steps above exactly. Don't put too much pressure on yourself.

You'll keep getting better at this. Once you get into a rhythm of testing, you'll learn shortcuts and boil things down to a routine that works for you. It'll become a natural part of your process.

Try it out! Follow the steps in this article and perform your own usability test. See what benefits you gain from it. Feel free to ask me any questions you have, and let me know how it goes on Twitter or Instagram.

If you think others would benefit from this article, share it or give the Twitter thread some love.

Originally published at learnuxd.io.

Comments

    I like that post. Cool overview. I pretty much agree with the approach. I would be interested in how you think one should go from task to data. At the end you want to have something comparable over time so that you can make an informed decision. I came up with an approach for myself, but do not want to influence your answer too much. Let's figure this out together. <3


      Hey Tim! I think my answer is mostly covered in step 4: You want to review the recording and watch for problem points. Sometimes it'll be obvious (like they couldn't find the next step). Sometimes it'll be more subtle (like it took a minute for them to find the link, or they got a confused look on their face and became a little stressed). If you pay attention, you should be able to pick out where the issues are relatively easily.

      Chances are pretty high that the same issues will pop up over the 3 to 5 tests. You'll want to take the things you've learned, prioritize them, and figure out exactly what you're going to do to fix each one. Then it's good to test your solutions again with different people 😄.


        I see what you mean. Based on my own experience, this was never sufficient for me. I was never happy with "you will find something". Of course I want to pay attention to any subtleties that may bug the user, but with that approach I usually ended up with a lot of opinions, feelings, and misleading feature requests. That isn't digestible. Nobody can prioritize product decisions based on that kind of information. What I want is data. I want numbers. Because everyone understands that 5 is bigger than 3.

        So what I like to do is develop a set of tasks and associated statements which people rate between 1 and 10. For instance, there's some task the test user should go through, and the associated statement is "I am thrilled to use the app every day". When you get a 1, you know the user won't use it. When you get a 10, you should have a new customer. You collect the data and can then make informed decisions based on graphs, or heatmaps if you want to get all fancy. This data-driven approach leans a lot toward psychometrics, and you can get all creative with it. So I think going from a feeling to a number is super valuable, even if someone's 3 is another one's 5. You still get trend indicators and don't end up being fooled by opinion. What do you think about that approach?


          Ah I see what you're saying. Yeah, the more data the better in my book, but I think there are two types of usability testing: qualitative and quantitative. Here's a good article about that from NN/g. In this particular post I'm talking more about qualitative because that's easier and more straightforward to get into in my opinion. But they both have their place.
