Evaluating the Effectiveness of a TV Voice Assistant Tutorial Screen for Sight-Impaired Users

Our TV product teams were in the final stages of design for a “tutorial” splash screen to help users learn about features of the built-in TV voice assistant. The tutorial screen was triggered the first time a user pressed the microphone button on the TV remote, and showed core TV-related voice commands; users could scroll through this screen to browse commands. We had already tested this design with sighted users and had positive results.

Our UX team felt that sight-impaired users would be a core user group for voice-related features on the TV. We felt we could add significant value to their TV-watching experience if we could teach them to embrace voice commands as well. We met with our stakeholders and proposed testing this tutorial screen design with sight-impaired users to gather usability feedback.

Disclaimer: To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this project.

My role

I led research for this project, collaborating with a lead designer (based in San Diego) and a software engineer (based in Tokyo).

I scoped the project with my stakeholders over video calls, creating and reviewing the research plan with them. I recruited participants, designed the scenario and tasks for the study, conducted moderated usability study sessions, analyzed the data to document usability issues, and wrote a research report for the design and engineering teams.

Goals

The main goal of this project was to see whether sight-impaired users could successfully understand, navigate, and return to this tutorial screen in the future. Accordingly, the research questions for this project were aimed at identifying the main usability issues with the tutorial page.


Study metrics: Our measures focused on task success (browsing through the voice command list and successfully closing the tutorial screen) and usability issue severity. I assessed issue severity using a decision tree based on impact, frequency, and issue persistence.
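To make the severity decision tree concrete, here is a minimal sketch of how such a rubric can be encoded. The labels, thresholds, and inputs below are illustrative assumptions for this write-up, not the actual rubric used in the study.

```python
def severity(blocks_task: bool, frequency: int, persists: bool) -> str:
    """Classify a usability issue by impact, frequency, and persistence.

    blocks_task: did the issue prevent task completion for any participant?
    frequency:   number of participants who encountered the issue
    persists:    did the issue recur after participants first worked around it?

    The thresholds below are hypothetical, chosen only to show the shape
    of a decision-tree rubric.
    """
    if blocks_task:
        # Task-blocking issues are the highest impact
        return "critical" if frequency >= 2 else "high"
    if persists:
        # Recurring friction, even if non-blocking, still ranks high
        return "high" if frequency >= 2 else "medium"
    # One-off, non-blocking issues
    return "medium" if frequency >= 2 else "low"

# Example: a non-blocking issue hit once by a single participant
print(severity(blocks_task=False, frequency=1, persists=False))  # low
```

Encoding the rubric this way keeps severity ratings consistent across sessions, since every issue passes through the same ordered checks.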

Participant sample and recruiting

I recruited 7 participants (4M, 3F) to participate in 60-minute usability test sessions. Participants were legally blind (little or no light perception) and were between 30 and 65 years old. All participants were existing Sony TV users, and were familiar with the TV remote. 

I recruited existing users to avoid learning biases (e.g., learning to use the TV remote) during the study session. I planned for 6–8 users to surface any critical usability issues we could address before another round of testing.

I sourced participants through an internal database of vision-impaired users who had opted in as volunteers for accessibility research with my team. I reached out to these users by email and/or phone (based on their preferred contact method) to schedule study sessions, and followed up with information on session time, duration, incentive, office location, and my contact details, along with a note that I would meet them in the lobby when they arrived. I also sent reminder emails the day before each session.

Research method

I conducted a moderated usability study in our in-house UX lab to evaluate the tutorial screen prototype. After some initial background questions, I did a quick refresher on some of the TV remote control buttons by guiding participants’ hands on the remote and calling out common buttons and functions. I then gave participants the following prompt to start them off:

“Imagine that you’ve set up the TV and you’ve been using it. You remember hearing about the Mic feature on the TV. You decide to explore this. What would you do?”

I asked participants to think out loud and describe their experience in detail as they interacted with the screen and listened to the screen reader. I wanted to observe natural interaction with the TV and probe areas where users expressed confusion or difficulty.

Synthesis

The designer I was working with was involved throughout the study and observed all the sessions. She and I held mini-debriefs at the end of each session to discuss main findings and interesting participant behaviors or comments, and to identify any follow-up questions to include in the remaining sessions. We relayed daily updates to our stakeholders in Tokyo and briefed them on findings from the day’s sessions.

At the end of all sessions, I held a larger debrief with the designer and highlighted the main themes and usability issues from the sessions. I printed the design flow and pinned it on a whiteboard; during the debrief, we noted comments, issues, and observations from the study sessions on post-it notes and attached them to the related pages in the design flow.

Impact

I presented my findings as a PowerPoint report (supplemented with video clips) to the lead designer and the software engineering team. Based on findings from the study, the team revised their design and addressed some of the main usability issues.

Reflections

This was my first accessibility research project, and I learned a lot about recruiting, etiquette, and running usability test sessions with blind users.

We were not too far along in the design process, so we could still make design changes to address usability issues without much engineering or design resource impact. My takeaway was to propose accessibility testing at earlier stages. More importantly, I saw firsthand the impact we can have by designing our products to be accessible. Empowering sight-impaired users to use voice commands, for example, can significantly enhance their TV experience by reducing the steps to desired outcomes (e.g., finding a TV show or launching an app).