Meta Detects AI-Generated Deceptive Content Aimed at U.S. and Canadian Audiences

In its latest security report, Meta disclosed the detection of deceptive "likely AI-generated" content on Facebook and Instagram.
The content, including comments praising Israel's actions in Gaza, appeared under posts from news outlets and U.S. lawmakers. The accounts, which posed as members of various demographic groups, including Jewish students and African Americans, were linked to STOIC, a Tel Aviv-based firm.
The report marks the first time Meta has identified sophisticated generative AI being used in influence operations since the technology emerged in late 2022. Researchers worry that AI could fuel more potent disinformation campaigns and distort democratic processes.
During a press briefing, Meta's security executives expressed confidence in their continued ability to counter AI-assisted influence efforts, saying they have not yet encountered AI-generated images of politicians convincing enough to pass for real.
“AI tools may speed up content creation and increase its volume, yet they haven’t hindered our detection capabilities,” stated Mike Dvilyanski, Meta’s head of threat investigations.
The report detailed six covert influence operations that Meta disrupted in the first quarter. While the STOIC network used AI, an Iran-based network targeting the Israel-Hamas conflict did not employ such technology.
Tech giants like Meta are continually adapting to the challenges posed by new AI technologies, particularly around elections. Although digital labeling systems are in place for AI-generated content, their effectiveness, especially on text, remains questionable.
Meta is preparing to defend against potential AI-driven misinformation in upcoming elections in the European Union and the United States later this year.