Illustration by Alex Castro / The Verge
Meta will start labeling AI-generated photos uploaded to Facebook, Instagram, and Threads over the coming months as election season ramps up around the world. The company will also begin penalizing users who don’t disclose whether a realistic video or piece of audio was made with AI.
Nick Clegg, Meta’s president of global affairs, said in an interview that these steps are meant to “galvanize” the tech industry as AI-generated media becomes increasingly difficult to distinguish from reality. The White House has pushed hard for companies to watermark AI-generated content. In the meantime, Meta is building tools to detect synthetic media even if its metadata has been altered to obscure AI’s role in its creation, according to Clegg.