OpenAI’s Sora: When Seeing Should Not Be Believing
Sora 2, the new text-to-video tool from leading artificial intelligence firm OpenAI, generates convincing videos advancing false claims 80 percent of the time when prompted to do so, a recent NewsGuard analysis found.
OpenAI’s new text-to-video AI generator Sora 2 produced realistic videos advancing provably false claims 80 percent of the time (16 out of 20) when prompted to do so, a NewsGuard analysis found, demonstrating the ease with which bad actors can weaponize the powerful new technology to spread false information at scale. Five of the 20 false claims spread by Sora originated with Russian disinformation operations.
OpenAI, which also operates the text-generating chatbot ChatGPT, released Sora 2 as a free application for iPhones and other iOS devices on Sept. 30, 2025; it was downloaded 1 million times in just five days. The tool’s ability to produce convincing videos, including apparent news reports, has already raised concerns about the spread of deepfakes. OpenAI itself has acknowledged the danger, stating in a document accompanying Sora’s release, “Sora 2’s advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading generations.”