Software engineers, developers, and academic researchers have serious concerns about transcriptions from OpenAI’s Whisper, according to a report in the Associated Press.
While there’s been no shortage of discussion around generative AI’s tendency to hallucinate (basically, to make stuff up), it’s a bit surprising that this is an issue in transcription, where you’d expect the transcript to closely follow the audio being transcribed.
Instead, researchers told the AP that Whisper has introduced everything from racial commentary to imagined medical treatments into transcripts. And that could be particularly disastrous as Whisper is adopted in hospitals and other medical contexts.
A University of Michigan researcher studying public meetings found hallucinations in eight out of every 10 audio transcriptions. A machine learning engineer studied more than 100 hours of Whisper transcriptions and found hallucinations in more than half of them. And a developer reported finding hallucinations in nearly all of the 26,000 transcriptions he created with Whisper.
An OpenAI spokesperson said the company is “continually working to improve the accuracy of our models, including reducing hallucinations” and noted that its usage policies prohibit the use of Whisper “in certain high-stakes decision-making contexts.”
“We thank researchers for sharing their findings,” they said.