Medical misinformation more likely to fool AI if source appears legitimate, study shows
Reuters
Artificial intelligence tools are more likely to repeat incorrect medical information when it comes from what the software perceives as an authoritative source, a new study found.
In tests of 20 open-source and proprietary large language models, the software was more often tricked by mistakes in realistic-looking doctors' discharge notes than by mistakes in social media conversations, researchers reported in The Lancet Digital Health.
"Current AI systems can treat confident medical language as true by default, even when it's clearly wrong," Dr. Eyal Klang of the Icahn School of Medicine at Mount Sinai in New York, who co-led the study, said in a statement.
"For these models, what matters is less whether a claim is correct than how it is written."
The accuracy of AI tools poses particular challenges in medicine.