Hospitals nationwide increasingly use an AI transcription tool powered by OpenAI’s Whisper model to record and summarise patient meetings. While the tool shows promise in easing doctors’ documentation workload, researchers have raised concerns about its accuracy. Evidence suggests it sometimes “hallucinates” – a term for AI systems producing information that sounds plausible but is incorrect. In these cases, Whisper has been shown to insert entirely fabricated phrases into transcripts, a behaviour that is particularly troubling in medical settings.
Widespread use of Whisper in healthcare
According to ABC News, the transcription tool is developed by Nabla, a healthcare technology company that estimates its software has processed approximately 7 million medical conversations and is used by more than 30,000 clinicians across 40 health systems. While many doctors and healthcare providers report that the tool improves efficiency, Nabla acknowledges the model’s potential for inaccuracies and says it is working to address the hallucination issue.
Whisper’s hallucinations range from inserting random, unrelated statements into a transcript to inventing medical conditions that do not exist. Nabla has confirmed it is aware of these limitations and reassures clients that it is improving the model to ensure greater accuracy in clinical settings.
Study reveals concerning hallucinations in transcriptions
A recent study by researchers from Cornell University, the University of Washington, and other institutions examined Whisper’s performance under various conditions, including during pauses in speech and when transcribing people affected by language disorders such as aphasia. The researchers found that the model occasionally inserted words or whole sentences with no basis in the audio. Examples of these hallucinations include fabricated conditions and irrelevant remarks such as “Thank you for watching!” – a phrase likely drawn from Whisper’s exposure to millions of hours of YouTube videos during its training.
Harms perpetuating violence involve misrepresentation of a speaker’s words that could become part of a formal record (e.g. in a courtroom trial); we present 3 subcategories of examples: physical violence, sexual innuendo, and demographic stereotyping (4/14)
— Allison Koenecke (@allisonkoe) June 3, 2024
The study found that Whisper hallucinated in about 1% of transcriptions – a seemingly small share, but one with serious implications in healthcare. Although the researchers primarily used samples from TalkBank’s AphasiaBank, they argue that the tool’s tendency to generate content during silent pauses could affect a wide range of clinical situations, especially those involving patients with communication difficulties.
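To illustrate how silence-driven hallucinations might be surfaced in practice, here is a minimal sketch – not taken from the study – using the open-source whisper Python package, which reports a per-segment no_speech_prob score. The audio file name and the 0.6 threshold are illustrative assumptions, not recommendations from the researchers or from OpenAI.

```python
# Minimal sketch (not from the study): flag Whisper segments that were likely
# generated over near-silent audio, using the open-source "whisper" package's
# per-segment no_speech_prob score. File name and threshold are illustrative.
import whisper

model = whisper.load_model("base")            # small model, for demonstration only
result = model.transcribe("visit_audio.wav")  # hypothetical recording

NO_SPEECH_THRESHOLD = 0.6                     # assumed cutoff; tune per deployment

for segment in result["segments"]:
    suspicious = segment["no_speech_prob"] > NO_SPEECH_THRESHOLD
    flag = " [REVIEW: possible hallucination over silence]" if suspicious else ""
    print(f"{segment['start']:6.1f}s-{segment['end']:6.1f}s{flag} {segment['text']}")
```

A check like this does not prevent hallucinations; it only highlights segments that a clinician should verify against the original recording.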
OpenAI’s response and ongoing research
OpenAI is aware of these issues and has responded to the researchers’ findings by promising ongoing improvements. OpenAI spokesperson Taya Christianson emphasised that the company is actively refining Whisper to reduce hallucinations. OpenAI has also set strict usage guidelines for its API, advising against using Whisper in high-stakes decision-making contexts without additional checks, and the model card for Whisper warns developers against applying it in sensitive areas where accuracy is critical.
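As a rough illustration of the kind of “additional checks” those guidelines call for, the sketch below wraps a hosted Whisper transcription call so that the output is explicitly marked as unreviewed until a clinician signs off. The function name, review flag, and file name are assumptions for this example, not part of OpenAI’s API.

```python
# Minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment. The review flag and file name are
# illustrative additions; they are not part of OpenAI's API.
from openai import OpenAI

client = OpenAI()

def transcribe_for_review(audio_path: str) -> dict:
    """Transcribe audio with the hosted Whisper model, marked as unreviewed."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    # Do not auto-commit to the record: a clinician must confirm the text first.
    return {"text": transcript.text, "reviewed": False}

draft = transcribe_for_review("patient_visit.m4a")  # hypothetical recording
print(draft["text"])
```

Attaching the unreviewed flag to the transcript itself, rather than relying on convention, makes it harder for downstream code to store unverified text by accident.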
Despite Whisper’s potential as a transcription tool, its limitations may leave healthcare providers hesitant to rely on it entirely for medical documentation. For now, hospitals and clinicians may need to review transcriptions thoroughly, especially in sensitive situations where accuracy is paramount.