November 2024 / Technology

Voice AI for Clinical Conversations: Lessons from 10,000 Triage Calls

What we've learned about building AI systems that can conduct safe, effective clinical conversations over the telephone.

James Morrison

CTO, Medelic

Building voice AI for healthcare is different from building a general-purpose assistant. The stakes are higher, the conversations are more nuanced, and the margin for error is smaller. Here's what we've learned from analysing thousands of real clinical triage conversations.

Why Voice Matters

Despite the proliferation of digital health tools, the telephone remains the dominant channel for accessing primary care. There are good reasons for this: it's universal, accessible, and doesn't require digital literacy or smartphone ownership.

For many patients - particularly the elderly, those with disabilities, or those in crisis - a phone call is simply the most natural way to seek help. Any AI system that aims to improve access must meet patients where they are.

The Technical Challenges

Voice AI in healthcare faces unique technical challenges:

  • Accent diversity - The UK has enormous dialectal variation. Our system needs to understand speakers from Glasgow to Cornwall, plus non-native English speakers.
  • Medical terminology - Patients rarely use clinical terms. They say "I've got a funny turn", not "I experienced syncope". The AI must understand colloquial health language (see the sketch after this list).
  • Emotional context - Patients calling about health concerns are often anxious or distressed. The system must recognise and respond appropriately to emotional cues.
  • Background noise - Calls come from busy households, streets, and workplaces. Robust speech recognition in challenging acoustic environments is essential.
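
The colloquial-language point is worth making concrete. Below is a minimal Python sketch of the idea: normalising everyday phrases to the clinical concepts they suggest, before any downstream triage logic runs. The phrase table and function name are hypothetical, and a production system would use learned models rather than a lookup table; this only illustrates the shape of the mapping.

```python
# Hypothetical sketch: mapping colloquial health language to clinical
# concepts. A real system would use learned models; this lookup table
# only illustrates the shape of the problem.

COLLOQUIAL_TO_CLINICAL = {
    "funny turn": "possible syncope / presyncope",
    "pins and needles": "paraesthesia",
    "can't catch my breath": "dyspnoea",
    "tummy ache": "abdominal pain",
}

def suggested_concepts(utterance: str) -> list[str]:
    """Return clinical concepts hinted at by colloquial phrases."""
    text = utterance.lower()
    return [
        concept
        for phrase, concept in COLLOQUIAL_TO_CLINICAL.items()
        if phrase in text
    ]

print(suggested_concepts("I've had a funny turn and pins and needles"))
# -> ['possible syncope / presyncope', 'paraesthesia']
```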

Safety by Design

The most important lesson from our work is that safety must be built in from the start, not bolted on afterwards. This means:

  • Conservative escalation - When in doubt, the system should escalate to a human. False positives are preferable to missed red flags (see the sketch below).
  • Explicit uncertainty - The AI should acknowledge when it doesn't understand something and ask for clarification rather than guessing.
  • Clear boundaries - The system should never provide medical advice or diagnosis. Its role is to gather information, not to make clinical decisions.
"The goal isn't to create an AI doctor. It's to create an AI that can reliably gather the information a doctor needs to make good decisions."

What's Next

Voice AI in healthcare is still in its early days. We're continuously learning from every conversation, improving our models, and expanding our capabilities. The technology will keep getting better, but the fundamentals of building safe, effective clinical AI won't change.