It’s too easy to make AI chatbots lie about health information, study finds


Researchers in Australia tested whether five popular AI chatbots – GPT-4o, Gemini Pro, Llama Vision, Grok and Claude – could be tricked into spreading false health information, and found that the technologies are ‘vulnerable to misuse.’

Reuters
Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

The team tested widely available models that individuals and businesses can tailor to their own applications with system-level instructions that are not visible to users.
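For readers unfamiliar with the mechanism, a system-level instruction is a hidden prompt a developer attaches to every conversation with a model; the end user sees only the chatbot's replies, never the instruction itself. The sketch below illustrates the mechanism using the OpenAI Python SDK as an assumed example; the model name and the (deliberately benign) instruction text are illustrative and are not the researchers' actual setup.

```python
# Minimal sketch of a "system-level instruction": a hidden prompt the developer
# sets once and the end user never sees. Assumes the OpenAI Python SDK; the
# model name and instruction text are illustrative, not the study's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_SYSTEM_INSTRUCTION = (
    "You are a health information assistant. Always answer in a formal, "
    "clinical tone and cite sources for every claim."
)

def ask(user_question: str) -> str:
    """Send the user's question with the hidden system instruction prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_INSTRUCTION},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

print(ask("Is it safe to take ibuprofen with food?"))
```

The study's concern is that this same mechanism lets a developer instruct a model to answer falsely yet authoritatively, and the people asking the questions have no way to see that instruction.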
