Ethan and I met in the 8th Entrepreneur First Singapore cohort and mused about how getting early customer feedback before software development starts wasn’t common practice among digital product teams in Asia. The benefits of UX research and usability interviews were obvious, from validating the product need to catching design flaws early. After all, if it costs $1 to fix a design flaw at the prototype stage, it costs $100 once the product is launched.
The rework cost of skipping UX research is massive, and yet we found that 80% of launched products hadn’t been through early-stage testing. We started by talking to 100 customers to find out what makes UX research so difficult.
As one of our early interviewees put it, “garbage in, garbage out.” Quality UX research depends heavily on the quality of the testers and interviewees, but finding good people who match your target demographic is hard. A good tester is critical with feedback, articulate in communicating that feedback, and yet still representative of the ‘average customer.’ Many of the people we spoke to noted that while professional testers exist, the feedback they give isn’t representative.
The challenge of finding a good tester is even harder for B2B companies, since identifying participants who represent their customers requires domain expertise. Companies serving telcos want testers who work with telcos, companies making healthcare products want to talk to doctors, and recruiting for that kind of specific expertise takes longer.
While large companies hire agencies to recruit, start-ups and mid-market companies are priced out of the agency model. Instead, these companies fall back on internal testers or their customer-facing teams, who don’t represent the end customer. As a result, finding good testers can take as long as two weeks for UX research teams, and this presents a big impediment to getting started.
Finding testers is just the start; coordinating schedules to match testers’ availability with internal stakeholders is an ongoing struggle as well. And even once research sessions are confirmed, two core problems remain:
1. No-shows: 20-30% of the testers don’t turn up for their agreed slot.
2. The testers aren’t who they say they are.
Together, these two factors mean research teams often have to book additional sessions after the fact, or build in a buffer of extra interviews to hedge against them.
It’s common in user studies for someone on the research team to transcribe and take notes on the entire session, which is simply time consuming: a study with 10 interviews means 10 sessions to transcribe and annotate. Moreover, the typical way to capture ‘aha’ moments is scribbling in a personal notebook, then hopefully remembering to re-transcribe those key takeaways digitally later. While the focus should be on generating insights, a lot of time gets spent on documentation and note-taking.
Note-taking is a major time sink, but ultimately the real weight of a research piece lies in the insights it generates to drive future product iterations and core business decisions. This often means drawing up reports that include key segments from the video interviews. Storing those interviews and cutting the right segments can be tedious, and not everyone on the team has the skill set to turn raw footage into actionable research artifacts.
Most researchers we spoke to indicated a strong preference for moderated in-person testing. As one of our interviewees put it, "You really need to see the person’s expression and where they click to really get to the bottom of things and identify the intent behind their actions." With COVID making in-person interviews impossible, however, many research teams have shifted to virtual interviews, which present their own unique set of challenges. As a research lead at an Indian e-commerce start-up lamented:
“Screen sharing is such a hassle with Zoom or Google Meet, and a lot of time is spent in getting the participant to get the tech setup right. Also we typically share the Figma file with them, but there is a risk of the prototype leaking out to the public before the product is launched.”
On the subject of Figma, over 40% of the Design leads we spoke to shared that they had moved to or were in the process of moving to Figma as their primary design tool. Again, the core reason cited was how Figma unlocked virtual collaboration for design teams and how that was a necessity with most people working remotely.
A lot of interviewees also shared that after interviews wrap up, the research team comes together to synthesize the findings and pick up common themes. The logistical challenges of doing research have always been there, but with COVID and everyone working remotely, this collaboration and synthesis has become incredibly difficult, and it is definitely one problem we wanted to solve.
The other big issue with user research in most companies we spoke to was that it tends to be one-off in nature. You have a hypothesis, you go through all the legwork to do the research, but often the findings you worked so hard to obtain are never referenced in newer studies. There is no central research repository, so duplicate work happens multiple times within the same org. As a UX researcher at a Singapore decacorn mused, “the pace of product development is so frantic and there is so much research happening, that often studies are repeated and findings are similar.” He further added, “when you do a piece of research, there are core insights and other related insights, which may not be particularly useful for that one project, but could be gold for other products, and just having the information together in a discoverable way would be extremely helpful.”
“You can’t expect my researchers to navigate 4-5 tools for one piece of research.”
After talking to over 100 customers and identifying the issues with recruitment, research ops, and synthesis, we assumed there was no good tool to help researchers. We were surprised to find that, in some senses, there are too many. This blog post by User Interviews does a great job mapping the dizzying research-tools landscape, but the more pertinent question is: with so many choices, where do you even start?
As the Design lead of a major e-commerce company in Singapore shared, “you can’t expect my researchers to navigate 4-5 tools for one piece of research.” Therein lies the challenge: while there are tools for each of the UX research hurdles mentioned above, each solves only a small part of the puzzle. Cobbling together 4-5 different services is too time consuming and presents its own set of challenges for adoption across the enterprise.
We are creating Betafi as the one-stop platform for UX research. We were over the moon with how Figma unlocked deep collaboration for design teams, breaking down internal silos across stakeholders, and we intend to extend this platform-level ambition to the end-to-end user research process.
We firmly believe that in the long term, product research cannot be the sole responsibility of dedicated research staff. Product managers and individual contributor designers should be able to spin up quick studies in a self-serve way to answer their burning questions, while collaborating effortlessly with teammates to drive real business outcomes.
Our ambition is for Betafi to become the platform that democratizes user research and stands out as a delightful, core pillar of the product development cycle for creative organizations worldwide.
We’d love to share what we’re building with your team.