Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns
New AI-assisted tools being developed at Florida International University in Miami can help crisis-response agencies, social media platforms, researchers, educators, and everyday media consumers by flagging potential disinformation early: decoding storytelling techniques, analyzing timelines and cultural signals, and detecting coordinated, large-scale content before it becomes weaponized.
The Conversation
It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it’s a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.
This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.
While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulation. Researchers have been using machine learning techniques to detect and analyze disinformation content.
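A common building block in this kind of work is a supervised text classifier that scores posts for disinformation-style language so suspicious content can be routed to human reviewers. The sketch below illustrates the idea in Python with scikit-learn; the tiny labeled dataset, the TF-IDF-plus-logistic-regression baseline, and the flagging threshold are all illustrative assumptions, not the FIU researchers' actual pipeline.

```python
# A minimal sketch of one machine learning approach to flagging
# disinformation-like text. Toy illustration only: the labeled examples,
# model choice, and threshold are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = disinformation-like, 0 = ordinary post).
posts = [
    "Secret documents PROVE the election was stolen, share before deleted!",
    "They do not want you to know what is really in the water supply",
    "City council meets Tuesday to discuss the new bike lane proposal",
    "Local bakery wins regional award for its sourdough bread",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a simple, common baseline
# for scoring text before it reaches human reviewers.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "Leaked memo PROVES officials hid the truth, spread this now!"
score = model.predict_proba([new_post])[0][1]  # probability of the flagged class
if score > 0.5:  # threshold chosen arbitrarily for this sketch
    print(f"Flag for review (score={score:.2f}): {new_post}")
```

In practice, systems like the ones described above would train on far larger corpora and combine text signals with the timelines, cultural cues and coordination patterns mentioned earlier, rather than relying on wording alone.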