Homni AI is a stroke assessment tool that leverages image detection AI to assist medical professionals in diagnosing strokes.
While the team had already developed a high-fidelity prototype and conducted some usability testing, users—primarily expert neurologists—reported that they didn’t find the tool helpful. However, the team didn’t fully understand why.
My role in this project spanned both design and UX research. I conducted user research, usability testing, and iterative design improvements that led to significant workflow optimizations.
Timeline
3 months
Role
UX Researcher
Team
1 Designer, 1 PM, 4 Developers
When I joined the team, my role was initially scoped as a designer focused on usability testing and UI improvements. However, once I began conducting usability testing with stroke experts, I realized there was a much deeper problem beyond just UI fixes.
During my initial usability tests, a common complaint emerged: experts found the tool too slow to be useful in practice.
While usability testing had already been conducted before I joined, it was limited to surface-level UI improvements. The team had been making incremental design changes without fully understanding why users weren’t adopting the tool in the first place. Since the team prioritized quick usability fixes, no foundational research had been conducted to investigate the root problem.
To go beyond surface-level usability pain points, I set out to understand the problem space by conducting 5 in-depth interviews with our current target users — stroke experts — to explore how they actually diagnose strokes and where the tool fit into that workflow.
Key Discovery: There was a critical mismatch between the tool’s design and the needs of stroke experts. While the tool took ~20 minutes to complete a diagnosis, stroke experts typically diagnose patients in ~3 minutes.
At this point, I asked myself:
🤔Is the tool a bad fit for experts, or can we improve the workflow to make it faster?