PulseAugur

Ontario AI medical scribes fail audit with widespread errors

An audit of AI-powered medical scribes in Ontario revealed significant inaccuracies, with most approved systems failing tests. These AI tools incorrectly transcribed patient conversations, with 60% misidentifying prescribed medications. The audit also found that nearly half of the systems generated fabricated information or missed crucial patient details, particularly concerning mental health.

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Highlights critical safety and accuracy issues in AI tools used in healthcare, potentially delaying adoption.

RANK_REASON Audit report detailing performance failures of AI medical scribes.


COVERAGE [2]

  1. Mastodon — sigmoid.social TIER_1 · [email protected] ·

    Most Ontario-approved medical AI scribes erred in tests: auditor general. "Supply Ontario had the bots transcribe 2 conversations betw health-care workers & patients. Most of the vendors … had inaccuracies in their results, including 'incorrect information, AI hallucinations, inc…

  2. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    🚨 Auditor General finds that most Ontario-approved medical AI scribes erred in tests. 60% recorded a different drug than what was prescribed. Almost half "fabricated information," commonly known as hallucinations. #onpoli #AI #AIhallucinations https://www.thetrillium.ca/news/…