Meta, owner of the largest social network in use today, has explained how it plans to combat disinformation related to the upcoming EU parliamentary elections and, in particular, how it intends to handle AI-generated content.
Risks related to disinformation
Earlier this year, the World Economic Forum (WEF) placed misinformation and disinformation at the top of its list of the most pressing risks facing the world in the coming years.
This assessment is influenced by the fact that large language models (LLMs) are increasingly being used to fuel disinformation through AI-generated images, videos, and text.
Another factor is that more than 50 countries around the world will hold national elections in 2024, and AI-powered disinformation is expected to be widely used to sway public opinion and disrupt electoral processes.
There is no single or easy solution to this problem, especially since some solutions can be inappropriately weaponized. “As authorities seek to crack down on the spread of false information, there are risks not only of inaction but also of repression and rights violations,” the WEF said in its Global Risks Report 2024.
False information on social media
In September 2023, after sharing the results of a report on online disinformation and manipulation by major online platforms, European Commission Vice-President Věra Jourová said: “Future national and EU elections will be an important test that platform signatories of the [Code of Practice on Disinformation] must not fail.”
“Platforms need to take their responsibilities seriously, especially in light of the Digital Services Act, which requires them to reduce the risks they pose to elections,” she added.
While many platforms have long published reports on their efforts to curb influence operations, disinformation, and misleading content, it is becoming clear that they need to step up these efforts.
Meta’s plans to curb misinformation and handle AI-generated content and ads
Meta, the owner of Facebook, Instagram, WhatsApp and the newly launched Threads, is preparing for the EU parliamentary elections in the following ways:
- Establishing a dedicated election operations center (staffed with intelligence, investigative, legal, and other experts to identify and mitigate potential threats in real time)
- Expanding its network of fact-checking organizations across the EU (covering content in over 26 languages)
- Identifying, labeling, removing, or downgrading AI-generated content that is intended to deceive
“We remove the most serious types of misinformation from Facebook, Instagram, and Threads, including content that could lead to imminent violence or physical harm, or content aimed at suppressing votes,” said Marco Pancini, Meta’s head of EU affairs.
Content that does not violate these specific policies but is debunked by fact-checkers will be marked with a warning label and have its distribution reduced. “95% of people won’t click through to see a post if it has a fact-checked label on it,” Pancini said, adding that the company also won’t allow ads containing debunked content.
If news from state media is part of a deceptive campaign or coordinated influence operation, it will be labeled as such and the post will be demoted.
Finally, AI-generated content is also reviewed by fact-checkers and can be labeled, removed, or down-ranked as appropriate. (Users who post AI-generated video or audio are also required to label it as such.)
“We are already labeling photorealistic images created using Meta AI, and we are building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram, and Threads,” Pancini explained.
“As AI-generated content emerges on the internet, we have been working with other companies in our industry on common standards and guidelines. It will require a huge effort across governments and civil society.”
Meta previously shared similar plans for the 2024 US elections, including blocking new political ads in the final weeks of the US election campaign.
Anti-disinformation plans for other social networks
TikTok also recently revealed plans to counter disinformation aimed at interfering in the 2024 European elections.
“Next month, we will be launching in-app election centers in the local language for each of the 27 EU member states to help people easily distinguish fact from fiction. By working with local electoral commissions and civil society organizations, these election centers will be a place where the community can find trustworthy and reliable information,” said Kevin Morgan, the company’s head of safety and integrity for EMEA.
Like Meta, TikTok partners with a number of fact-checking organizations covering content in a variety of European languages to counter misinformation. The company also plans to invest in media literacy campaigns on its platform and to detect and disrupt deceptive actors operating during elections.
Last September, Jourová pointed out that Elon Musk-owned X (formerly Twitter) was “the platform with the most misinformation and disinformation.”
Although X/Twitter is no longer a signatory to the Code of Practice on Disinformation, it is still required to comply with the EU’s Digital Services Act and has agreed to do so.
“With Operation Texonto and other well-crafted disinformation campaigns, and the widespread availability of AI models capable of creating highly convincing audio, video, and image content, we have already had a bitter taste of what is to come,” said Tony Anscombe, Chief Security Evangelist at ESET, commenting on Meta’s announcement.
“We welcome this initiative from social media platforms of all kinds, especially Facebook, and look forward to hearing more about how it will work in practice soon.”