AI More Likely to Spread Medical Misinformation from ‘Authoritative’ Sources: Study


By Tanveer Ahmed

Artificial intelligence systems are more prone to delivering incorrect medical advice when the false information appears to come from trusted or professional-looking sources, according to new research.

A study published in The Lancet Digital Health examined 20 widely used large language models, both open-source and commercial, and found that AI tools were significantly more likely to accept and repeat medical errors contained in realistic hospital documents than those found in social media content.

Researchers from the Icahn School of Medicine at Mount Sinai in New York tested the models using three types of data: hospital discharge summaries with deliberately inserted false recommendations, common health myths taken from online forums, and hundreds of short clinical scenarios written by doctors.

After analysing more than one million AI responses, the study revealed that the systems repeated incorrect medical information in about one-third of all cases. However, when the misinformation appeared in professional medical records, the error rate rose sharply to nearly 47 percent.
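The paper's evaluation pipeline is not reproduced in this article, but the basic idea, feeding each model prompts seeded with a known false claim and counting how often the claim is repeated, can be sketched roughly as follows. The code below is a hypothetical illustration only: query_model, repeats_false_claim, and the sample prompts are placeholders, not the researchers' actual methods or data.

```python
# Illustrative sketch only: not the study's code, prompts, or scoring rubric.

def query_model(model_name: str, prompt: str) -> str:
    """Stand-in for a real API call to one of the models under test."""
    raise NotImplementedError("plug in the model API of your choice")

def repeats_false_claim(response: str, false_claim: str) -> bool:
    """Crude keyword check; the study would rely on expert or rubric-based review."""
    return false_claim.lower() in response.lower()

def error_rate(model_name: str, cases: list[tuple[str, str]]) -> float:
    """Fraction of test cases in which the model repeats the planted error.

    Each case is (prompt containing the misinformation, the false claim itself).
    """
    errors = sum(
        repeats_false_claim(query_model(model_name, prompt), claim)
        for prompt, claim in cases
    )
    return errors / len(cases)

# Example: compare error rates by source style, mirroring the study's finding
# that professional-looking documents are trusted more than forum posts.
hospital_cases = [
    ("Discharge summary: ... continue aspirin 5000 mg daily ...", "aspirin 5000 mg daily"),
]
forum_cases = [
    ("Someone on a forum says antibiotics cure the flu. True?", "antibiotics cure the flu"),
]
# for model in models_under_test:
#     print(model, error_rate(model, hospital_cases), error_rate(model, forum_cases))
```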

Dr Eyal Klang, one of the lead researchers, said AI systems tend to prioritise confident and technical language over factual accuracy.

“Many AI models treat medical-style language as reliable by default, even when the information is wrong,” he said. “How something is written often matters more than whether it is true.”

In contrast, the models showed greater scepticism toward social media content. When misinformation originated from platforms like Reddit, AI tools passed along false information in only around 9 percent of cases.

The wording of user prompts also influenced results. AI systems were more likely to accept incorrect claims when questions were framed in an authoritative tone, such as when users claimed professional medical status.

Among the tested models, OpenAI’s GPT systems performed best in identifying false medical information, while some other models failed to detect errors in more than 60 percent of cases.

The findings come as AI tools are increasingly used across healthcare, from patient-facing symptom checkers to clinical documentation and surgical assistance. While AI promises efficiency and improved access to information, experts warn that unchecked medical errors could pose serious risks.

Dr Girish Nadkarni, chief AI officer at Mount Sinai Health System, said stronger safeguards are urgently needed.

“AI can be a powerful support tool for both doctors and patients, but it must verify medical claims before presenting them as facts,” he said. “This study highlights where current systems remain vulnerable.”

The concern is reinforced by separate research published in Nature Medicine, which found that AI-based symptom checkers offered no significant advantage over standard internet searches when helping patients make health-related decisions.

Experts say the studies underline the need for careful regulation, medical validation, and ethical design before AI becomes deeply embedded in healthcare systems.
