Ask nearly anyone about the process of developing software, and somewhere in the answer they’ll (hopefully) mention the users. User research, user testing, user feedback – the end user is at the heart of everything that we build. However, for many companies, making conversations with real users actually happen is a real challenge – especially if you don't have a UX specialist on your team! If this is all sounding familiar to you, then I have a recommendation: take it into your own hands. In this session, we'll talk through setting up a basic user testing program and growing it, so you – the developer – can feel empowered to start usability testing for your own product!
Usability Testing Without a UX Specialist
AI Generated Video Summary
Usability testing is effective for uncovering user pain points and desire paths, as well as revealing loopholes, shortcuts, and hacks. Finding diverse users for testing can be challenging, but reaching out to sales and support teams and offering incentives can help. The logistics of usability testing include having multiple people to run tests, disclosing recording methods, and considering in-person or remote testing. During the tests, it's important to encourage participants to think out loud, ask open-ended questions, and gather feedback for improvement. Collecting and summarizing usability test results involves analyzing raw data, gathering hard data, and avoiding biases.
1. Introduction to Usability Testing
I'm Katherine Grayson Nanz, a Developer Advocate at Progress Software. In a previous job, I was one of two designers on a team. The app was built by developers without much design input, resulting in a poor user experience. We weren't talking to our users and were making educated guesses about their needs. Despite limited resources, we combined experience with research to implement usability testing successfully. Making conversations with real users happen can be challenging due to organizational and resource constraints.
Hi there. I'm Katherine Grayson Nanz, a Developer Advocate at Progress Software. As a developer with a design background, I've often been in jobs and situations where I get to do a little bit of everything. To the point where just saying I wear a lot of hats can feel like a bit of an understatement. But honestly, I really enjoy that type of work.
In one such previous job, I was one of two designers on the team. We had one full time designer, and then myself, splitting my time between design and development. The company was a startup. And they had a great app idea that they had built, and proven, and gained a group of loyal customers with. However, the app was built entirely by developers without much design input at all. And the poor user experience was starting to become an impediment to growing the customer base.
As new tasks were getting assigned and discussed, we quickly discovered an issue. We weren't talking to our users at all. There was a hyper focus on adding new features and growing the functionality of the application. But no actual data suggesting that users wanted these features. Meanwhile, users that we had were struggling to use the existing features due to the complex and unintuitive user interface. In meetings, our discussions often included phrases like we think, assuming that, and hopefully. We were making educated guesses about our users, but we didn't actually know for sure.
We needed to close the loop, but we didn't have any UX specialists and we were working with a startup budget. I had a little experience from helping run usability tests at a past job, but those were larger established programs where I wasn't in a leadership position. And yet, as it turns out, expertise is relative. And relative to the rest of my team at the time, I was the one who knew the most about what a usability testing program looked like. And if this was something that we felt strongly about as a team, and it was, we decided that was just going to have to be enough. We would combine experience with research to figure it out as we went and find a way to make it work. And guess what, we did.
The idea of usability testing is one that most folks will support and agree with. However, for many teams and companies, making conversations with real users actually happen can be very challenging. And when this happens, we often think it must be due to a lack of understanding about the importance of user testing and that the problem we need to solve to start user testing is getting buy-in from other people. While this can occasionally be the case, I've often found that there already is a strong understanding of how valuable the feedback would be. The struggle, as it was in my previous company, was more with organization and resources.
2. Usability Testing Basics
Usability testing is most effective when focused on testing a specific flow, task, or feature. Choose a task or chain of related tasks that will guide the user through the part of the application you want to test. The flow should have a clear starting and ending point. You want a task that a user can complete in one sitting, which generally means about 20 to 30 minutes. It can be challenging to find users, but asking for 30 minutes of their time is easier than asking for two hours. There are two main types of tests: observing the user unguided or walking the user through a semi-guided flow.
It's much more common for me to hear things like, yes, we value usability testing, but we just don't have any budget for it. People want the chance to sit down with users, but they're not sure how to make it happen. And because it's not an absolute requirement in order to ship, it gets continually bumped in favor of more urgent tasks. This usually happens in situations where there hasn't really been a strong history of design process and where there are very few or no UX professionals on the team who can really take ownership of the task.
So, if this is all sounding familiar to you, then I have a recommendation. Do what we did and do it yourself. And yes, in an ideal world, this would be the responsibility of a UX designer or researcher. But for many teams like the one I was on and maybe like the one that you're on, for various reasons, that situation just can't be a reality right now. So, in this talk, I don't want to discuss the ideal big budget perfect world scenario. Because there are lots of resources that already exist for that. Instead, I want to talk about how we can still make usability testing possible for startups, small companies, or personal side projects that might not have lots of money or people for this work. Think of it more like DIY usability testing.
No matter what your situation is, I'm here to assure you: running basic usability tests is something you are completely capable of, and it comes with the bonus of making you a better, more empathetic developer as well.

The first major hurdle we'll need to address before we can consider anything else is just figuring out what we want to test. And no, the whole app is not an option. Usability testing is going to be most effective when it's focused on testing a specific flow, task, or feature. Take a moment to consider what questions you want answered about your app. Have you noticed more support tickets coming in? Is there maybe a place where you're not getting the interactions you anticipated? Have you been receiving negative feedback? Maybe you're getting ready to launch something new. All of these are great jumping-off points. You want to choose a specific task, or a chain of related tasks, that will guide the user through the part of the application that you want to test. The flow should have a clear starting and ending point. The ending points don't necessarily need to be exactly the same for every user, but you should still clearly know when a user has satisfied the task requirement. Some examples of flows that you could test are onboarding, searching and saving, exporting an asset, updating the user profile, et cetera. You really want a task that a user can complete in one sitting, which generally means about 20 to 30 minutes. You can potentially test multiple flows, but you'll still want to keep the entire testing session to no more than about an hour. The longer the time commitment, the bigger the ask you're making of your users. It can be challenging to find users to begin with, but it's a lot easier to ask someone for 30 minutes of their time than for two hours. There are two main types of tests that you can run.
In one option, you'll observe as the user moves through an established flow, completely unguided. The second option is to walk the user through a brand new flow, semi-guided.
3. User Pain Points and Desire Paths
In usability testing, you can observe user pain points, misunderstandings, and divergences between developer assumptions and user behavior. The established flow situation involves letting the user complete a task without interference, revealing desire paths where users create their own flows. Users' creativity can uncover loopholes, shortcuts, and hacks. This valuable information provides insights into user goals and how they use the application.
In both situations, you'll have a chance to see user pain points, misunderstandings, and places where your assumptions as a developer diverge from the reality of user behavior. In the established flow situation, you might say something like, walk me through your process for printing John Doe's time tracking report. Then you sit back, observe, and let the user narrate and show the way they complete the task without any interference. This is super interesting, because it can show us desire paths. Desire paths are user flows that a user creates for themselves when an official way of accomplishing their task hasn't been provided or isn't convenient. Just because you didn't intend your app to be used in a certain way when you built it doesn't mean that a user isn't finding a way to make it do that thing anyway. Users can be so creative.
4. Usability Testing Insights
Usability tests are fantastic for revealing the loopholes, shortcuts, and hacks your users have come up with, which is incredibly valuable information. The semi-guided new-flow test, on the other hand, involves more back and forth; it's ideal for testing new features or for working with folks who have never seen your software before, because it lets you see your application through fresh eyes. Once you have an idea of what you want to test, it's time to find some test subjects. Most of the time, you want a mix of established users and people who have never seen your app before, but ultimately that depends on what you're testing.
And these tests are fantastic for revealing various loopholes, shortcuts, and hacks that your users have come up with. This is incredibly valuable information. Beyond just new feature ideas, this can also tell us a ton about the high level goals of our users. Think of your app like a tool in the user's tool belt. If you've given them a wrench and they're using it to hammer things, you just got some great new insight into the kinds of problems that they're solving.
In the new flow situation, on the other hand, there's a little bit more back and forth. That might sound something like: show me how you would look up flights departing from the Asheville airport on June 1st. They'll do it, and you'll go, awesome. Now, can you walk me through your thought process for how you chose a flight? They tell you, and you get to go, great. Now, what would you do to book that flight? In this situation, you're stepping the user specifically through each part of the task you want them to complete and asking questions along the way. This approach is ideal for testing new features or for working with folks who have never seen or used your software before. It allows you to see your application through fresh eyes, which is so valuable. After all, as developers, we know how it's all supposed to work. We have the entire backstory around how a feature was brainstormed, scoped, developed, and tested. We know all the little compromises and adjustments that we made along the way. And that means we can become a little nose-blind to it, like someone who's been in a room with a candle burning all day versus someone who just stepped in from outside. It's almost impossible for us to get ourselves back to that true beginner state of mind. But thankfully, we don't really have to. We can just talk to some true beginners and have them tell us about their experiences.
Once you have an idea of what you want to test, it's time to find some test subjects. Most of the time, you want a mix of established users and people who have never seen your app before. But ultimately that depends on what you're testing. Something like an advanced search feature that's targeted at super users, you could probably skip new user testing for that. Whereas, an onboarding flow, skip your established users. A redesigned UI theme, you're going to want a mix of both. In the ideal scenario, of course, you have budget to throw at this. And you get to hire someone to get that perfect mix of users all lined up for you. From all ages, genders, experience levels, abilities. But in this realistic scenario, maybe you have a very limited budget or even none at all. You can still do this, but it's going to take a little bit more work.
5. Finding Users for Usability Testing
Reaching out to your sales and support folks is a great place to start for finding established users to test with. To cast a wider net with established users, try posting open calls on social media or adding banners or modals to your website. New users are a little bit more challenging because they absolutely will require external motivation to participate in something that otherwise they really have no investment in. If you have a small budget, trying something like $20 gift cards or a free catered lunch can do a lot to entice participants. Ultimately, you want to try and achieve as diverse a group as possible.
Reaching out to your sales and support folks is a great place to start for finding established users to test with. Both of those teams can often quickly come up with a handful of potential candidates based on their personal interactions with your users. These tests can often be organized quickly because you already have a pool to draw from. You have the demographic information, you have an established point of contact, and you know that these users have a vested interest in your brand and the improvement of your app.
This is a fantastic place to start, but we don't want to stop there. To cast a wider net with established users, try posting open calls on social media or adding banners or modals to your website. Often it can help to kind of sweeten the pot here by offering something in exchange. This doesn't have to be money, although it certainly can be. But something like free trials, discounted rates, company swag and more, can all be really powerful motivators without needing to cut a check. Besides, you've been meaning to clear out that closet full of T-shirts and stickers anyway, right? This is the perfect opportunity.
New users are a little bit more challenging because they absolutely will require external motivation to participate in something that otherwise they really have no investment in. Those free options can still work here, so definitely give things like discounted rates or T-shirts and stickers a shot. Friends and family members can really be good options here, too. You just have to make sure to create a level of separation so that you're not the one who ends up running the test for your own mom. As you can imagine, that creates some biased results. Great for an ego boost, not so much for honest feedback.
If you have a small budget, trying something like $20 gift cards or a free catered lunch can do a lot to entice participants. Even with just $100, you could still get a handful of folks willing to do a quick usability test for you. If you have a modest budget, investing in a third party service like a panel agency or a market research recruiter can be extremely valuable. Those folks will help you connect with specific subsets of user types, which means you can get better, more accurate results that more fully reflect your user base. If that's not an option for you, you can still try and connect with these user groups on your own by reaching out to community centers and organizations that cater to those groups. A gift of volunteer hours or public promotion of their cause can really help here as well without needing to spend. Promoted posts and ads on social media are also pretty reasonable in terms of cost and will allow you to target very specific demographics that you might otherwise struggle to reach.
Ultimately, you want to try and achieve as diverse a group as possible. Consider age, race, gender, disability, orientation, identity, and experience level as you gather users, but also remember that perfect can be the enemy of good. Any user testing you're doing is better than none, and when you're working with limited resources, the ideal might not be possible yet. The long-term goal here is to establish a valuable usability testing program that shows results, so that more money, time, and resources can be allocated to this program down the road when they become available. Think of this as step one. When your testing group isn't diverse, though, you do have to take the results with a grain of salt, and remember that they're not reflective of the community. The data is still useful, the conversations are still informative, and the process is absolutely still worth doing, but the results should not be held up as absolute objective truth.
6. Logistics of Usability Testing
While reaching out to gauge interest and get potential candidates for testing, it's time to start nailing down the logistics of the test itself. Ideally, you'll want at least two people to run the tests; they don't need to have ever done this before. Having multiple people run tests means the workload of proctoring can be divided. In addition to the person giving the test, you might also want a second silent observer to be present during testing. No matter what you choose, any recording methods need to be disclosed to the user, and their acknowledgement and agreement must be gained before the test can begin. You'll also need to consider how the test is being administered: in person or remotely. An in-person test has the benefit of allowing close observation of the user; on the flip side, it means you need a physical space to test in. A remote test has the benefit of flexibility.
While you're reaching out to gauge interest and get potential candidates for testing, it's time to start nailing down the logistics of the test itself. This is the part where folks can get a little hung up because there are lots of details you might not have considered if you've never done this before. So let's break it down.
Ideally, you'll want at least two people to run the tests. They don't need to have ever done this before. You just need folks who can be polite and who are comfortable speaking with users. It's best to have multiple people to run tests, so that the workload of proctoring can be divided. Left to just one person, it's a lot of work. It can slow down the scheduling of tests, and it can eat into their ability to finish their normal workload. In addition to the person giving the test, you might also want a second silent observer to be present during testing. This person will not participate beyond simply watching and taking notes, but this allows the proctor to focus entirely on the user. If you're able to take a video and a screen recording, you might be able to skip this, but it's still really nice to have a second person there to kind of break down the session with afterwards and compare impressions. If you choose to record the session, that could be as simple and cheap as a microphone and a webcam in the testing room, which is equipment you probably already have. Or you might opt for something a little bit more complex like heat map or cursor tracking software installed on the testing machine. Mostly this comes down to budget. Is the extra data that you can get with additional recording methods worth the cost? You can hold useful usability testing sessions with no recording equipment at all so don't let this become a blocker. It's just an extra tool that you can use to make your life a little easier and to gather additional data from each session. No matter what you choose, any recording methods need to be disclosed to the user and their acknowledgement and agreement must be gained before the test can begin.
You'll also need to consider how the test is being administered: in person or remotely. Both of these have pros and cons, so there's not necessarily a right answer. An in-person test has the benefit of allowing really close observation of the user. This allows you to pick up on things like body language or facial expressions that might get lost on camera. It can also feel much more conversational and relaxed to test in person, and an at-ease test subject tends to be more talkative and will give you more information. Testing in person allows you to provide the computer for testing, which gives you a little bit more control and helps eliminate the variables of personal user device configurations. On the flip side, it does mean that you need a physical space to test in, so if your team works remotely or your office isn't nearby or easily accessible, that could be a challenge and potentially require a little money. Consider reserving a meeting room at your local library, which tends to be either free or donation based, or at a co-working space, which is relatively affordable. In-person testing means that your users might not have access to their usual assistive devices, though, so keep in mind that this can limit your accessibility testing. A remote test has the benefit of flexibility. This allows you to reach more users at times that are more convenient for them, which can help you find more people to test with. It also gives you the benefit of testing in the environment where the user is most likely to actually be using your software, and on the device that they'll be using in real-world situations.
7. Challenges and Logistics in Remote User Testing
Remote work can come with communication and technology challenges. It can be difficult to get permission from users to record, especially if they're using work-supplied devices. Consider the logistics of running tests during work hours and the potential bias it may introduce. Choose a configuration that aligns with the goals of your test.
However as I'm sure we've all learned over the last few years, remote work can come with communication and technology challenges. It can sometimes be more difficult to get permission from users to record, especially if they're using work-supplied devices or calling in from their corporate office. If your feature has not launched to the general public yet then you'll need to figure out the technical aspect of getting the user access on their personal device.
Regardless of which approach you choose, there are some logistics that are universal. If you're planning to run tests during standard work hours, remember it's a big ask to make users take time off from their job and possibly commute to you in the middle of their workday. If you're running tests outside of standard work hours, consider the need for child care. All of these choices could unintentionally bias your results if you don't consider them. If you run tests during the workday at a location that's not easily accessible via public transit, you've automatically restricted your testing audience to people who have cars and people who can easily take time off. If you bring users into your office and run tests on a powerful desktop computer with a large monitor and a strong Wi-Fi connection, you might not be getting the most accurate data for an app that will be primarily used on tablets with a weak data connection. Consider the goals of your test and choose a configuration that makes the most sense for your needs.
8. Running Usability Tests
After all the preparation, it's time to run the tests. Start by introducing yourself and the purpose of the test. Emphasize that it's not a personal evaluation and encourage honest feedback. Ask participants to think out loud and provide examples. Open the floor for questions and obtain consent. Begin the test with basic identifying questions and clearly explain the task. Use open-ended questions to help participants when they get stuck, but avoid giving direct answers. Encourage them to work through problems as much as possible.
After all that, it's finally time to actually run the tests. This part is easier than you think. I like to write out an outline or a script so I have something to reference and make sure I don't leave anything out while I'm running the test.
You'll start by introducing yourself and thanking the user for their time. If you're offering any kind of compensation or gift, explain when that will be distributed. So like, you'll get the t-shirt and sticker on your way out or we'll email you the code for your free two-month subscription.
Then, you're going to talk them through the goal of testing. So, today we're going to test a new feature in the app or today you're going to look at a piece of software that you've never seen before. It's crucial to assure the user this is not testing their personal ability in any way. Remind them there are no wrong answers and the only thing being tested here is the software itself. Emphasize that you want their honest feedback even or especially when it's negative. Reassure them that they won't hurt anyone's feelings and they're not going to make anyone upset. Let them know that you won't be able to help or guide them while they're completing the tasks, that even if they ask you really can't tell them the right answer. This might seem mean, but it's actually really important for you to see how they solve problems without external guidance.
Then, you'll ask them to think out loud as much as possible while completing the test. Give an example like, okay, right now I'm looking for the search bar and yep, there it is. Okay, now I'm going to enter the name of my teammate. It feels silly and awkward at first and there's no getting around that. It's just how it is. Assure them that it will feel more natural as they go and that it's really important and helpful. Finally, open the floor to any questions from them. Take as long as needed here to make sure they feel completely comfortable before moving on. This is also the time to ask their permission for anything you need, like recordings, and get verbal or written consent.
Once that's all good and you hit the record button and officially start the test, begin with some basic identifying questions that will give you context on this data later. That usually includes things like name, age, how long they've been using the software, how long they've been in the industry, etc. Consider what aspects might impact the data you're collecting, and as long as it's not too invasive, ask away! When it comes to the task itself, tell the user clearly and simply what you want them to do. When they get stuck or hung up, which is almost inevitable, ask open-ended questions to help move things along. Try not to guide the user towards a specific answer or give hints; just offer something to get them thinking, like, what are you looking for on the page right now? If they ask for your help, tell them to imagine that they're at home or on their own, and ask them what they would do in that situation. If it really gets to the point where they just cannot move forward, they tell you they would call support or file a ticket, and the test comes to a complete standstill, you can give them a little prompt in the right direction. But you really don't want them turning to you for the answer the moment they feel a little adrift. Encourage them to work through the problem as much as they can.
9. Conducting Usability Tests
A little bit of silence is OK, but if it goes on for too long, remember to prompt the user to think out loud. If a user expresses surprise or disbelief or frustration, one of the most useful things that you can use as a response is asking, what did you expect to happen here? Once the task has been completed, ask any follow-up questions and widen your focus to their general impressions and emotions. Ask the magic wand question to gather ideas for improvement. Check if the user has any questions and thank them for their time. Review and analyze the raw data by grouping it by question or task and looking for patterns.
A little bit of silence is OK, but if it goes on for too long, remember to prompt the user to think out loud. Something as simple as, what are you working on? will usually do the trick here. If a user expresses surprise, disbelief, or frustration, one of the most useful responses is asking, what did you expect to happen here? This will tell you about their assumptions and the ways that they've been conditioned to use software, which, again, can be invaluable for seeing where your assumptions don't match up with the user's.
Once the task has been completed, go ahead and ask any follow-up questions that you might have. This is a great opportunity to widen your focus to some of the more high-level stuff: their general impressions, their feelings and emotions while using your software. It's also a chance to ask the user to share any thoughts or feelings that you might not have asked about specifically. Usually I say something like, beyond what we discussed, did you have any observations or thoughts that you want to share? Of course, feel free to follow up and drill down as far as needed here to get specific answers. I also like to ask the magic wand question here, too: if you had a magic wand and you could change any part of the software that you just experienced, what would it be? This tends to open the floor to things like, wow, it sure would be nice if I could... Sometimes those are wild cards, things you would never do. But sometimes they're really cool ideas that you might not have thought of.

Check to see if the user has any questions. If you're testing a new feature, they'll almost always ask, when will this be available? So make sure you have a good answer for that prepared in advance, and don't make any promises that you can't keep. It's also totally OK to say, we're not sure yet, but we'll make sure to keep you posted. Once everything's done, thank the user again, genuinely, for their time and end the session.

Of course, once you finish conducting the tests, you're not really done. You've got the raw data now, but for it to be useful to your team, you need to review it and form some conclusions. It can feel hard to know what to do with all of this data, since so much of it is observational and not necessarily easy to just throw into a spreadsheet. Here's my approach. First, I like to group by question or task. You asked all of these users the same set of basic questions, or you asked them to perform the same tasks.
So, that's a good starting point for dividing out the data into parsable chunks. Start compiling the notes for each user as organized by question or task. Then, look for patterns. This is where we can really start to put the puzzle together.
10. Collecting and Summarizing Usability Test Results
Seeing users follow the same flow without speaking to each other indicates how your user interface is being interpreted. Collect hard data such as time to complete tasks, heatmap data, and clicks. Summarize noteworthy findings into a one-page document for quick review. Don't let one user's struggle or success overshadow others. Involve as many people as possible to avoid biases. Running usability tests gives you a unique perspective as a developer and helps check internal biases.
Look for places where multiple users take the same action or struggle at the same point. Seeing a lot of different people follow the same flow without speaking to each other is a really strong indication of how your user interface is being interpreted. Similarly, if a bunch of people fumble or make mistakes at the same point, even if they're different mistakes, that's a red flag.
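Once notes are grouped by task, tallying where different users got stuck can make those patterns jump out. Here's a minimal sketch of that kind of tally; the note structure, participants, tasks, and stuck-points are all hypothetical, invented purely for illustration:

```python
from collections import Counter

# Hypothetical session notes: one entry per user per task,
# tagged with the step where the user got stuck (or None).
notes = [
    {"user": "P1", "task": "export report", "stuck_at": "finding the export button"},
    {"user": "P2", "task": "export report", "stuck_at": "finding the export button"},
    {"user": "P3", "task": "export report", "stuck_at": None},
    {"user": "P1", "task": "update profile", "stuck_at": "saving changes"},
]

# Group by (task, stuck-point) and count how many users hit each one.
stuck_points = Counter(
    (n["task"], n["stuck_at"]) for n in notes if n["stuck_at"]
)

for (task, step), count in stuck_points.most_common():
    print(f"{task}: {count} user(s) stuck at '{step}'")
```

Even a simple count like this lets you say "two out of three users got lost at the same step," which is far more persuasive in a summary document than a pile of anecdotes.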
Then, collect the hard data. Just because you don't have much hard data doesn't mean you don't have any. Look at time to complete tasks, heatmap data if you have it, clicks, etc. And see what the numbers have to say in correlation with the experiences that you witnessed.
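As a rough sketch of what "seeing what the numbers have to say" might look like, here's a hypothetical aggregation of task completion times (the tasks and timings are invented for illustration). The median is useful here because one badly stuck user skews the mean but not the median:

```python
from statistics import mean, median

# Hypothetical hard data: seconds each participant took to finish a task.
completion_times = {
    "export report": [95, 140, 310, 120, 105],
    "update profile": [60, 75, 70, 250],
}

for task, times in completion_times.items():
    print(
        f"{task}: median {median(times)}s, mean {mean(times):.0f}s, "
        f"slowest {max(times)}s across {len(times)} users"
    )
```

A gap between median and mean, like the one in this made-up data, is itself a signal: it suggests most users cruised through while one or two hit a wall worth investigating in the notes.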
Finally, collect any noteworthy quotes, findings, and data into a one-page document. Other folks will want to see the results of the test, but let's be realistic: they're not going to read every page of notes about every user. Take the most impactful stuff, summarize it, and put it into a document for quick review and access. My main tip here is to not let one user's struggle or success overshadow the others. Don't rush to change things because one person didn't understand them, and don't refuse to change things because one person did them well. It's easy to look for the users and tests that confirm our own biases, but that's not actually helpful when it comes to building better software. True objectivity can be hard, which is why it's ideal to involve as many folks as possible in the testing process, so that multiple interpretations of the data and experiences can all be heard.
It's not too scary, right? There are lots of parts, but they can all be broken down into smaller, achievable steps, and it's totally something that you can do. I really hope you feel empowered and, dare I say, excited to get out and talk to some users. You will be amazed at how watching a user navigate your app gives you a unique perspective as a developer, one that you will keep with you as you tackle future work. It teaches us to resist the urge to make assumptions, and it helps check our own internal biases. Running usability tests isn't just great for your software, it's also great for you. And I really hope that this session helps you feel empowered to give it a shot.