Meta’s quasi-independent oversight board announced plans Tuesday to evaluate the company’s approach to explicit AI-generated images of female public figures on Facebook and Instagram.
Over the past year, the two social media platforms owned by Meta, which boast roughly 3 billion and 2 billion users respectively, have been flooded with AI-generated content as the number of publicly available AI image generators has grown, allowing users to create sexually explicit and provocative images. So-called "nudify" apps let anyone generate realistic, sexualized images of a person without that person's knowledge or consent.
Oversight Board co-chair Helle Thorning-Schmidt said in a written statement that the investigation centers on two specific content decisions Meta made regarding explicit AI images of two unnamed public figures, and will examine whether the company's policies and enforcement practices are "effective" in addressing this growing problem.
"Deepfake porn is a growing source of gender-based harassment online, and is increasingly being used to target, silence, and intimidate women both online and offline," Thorning-Schmidt said in the statement. "Studies have shown that it overwhelmingly targets women."
The Oversight Board, once dubbed Facebook's "Supreme Court," is funded by Meta but operates mostly independently, reviewing content decisions to confirm whether the company has acted in line with its "policies, values, and commitment to human rights," according to the organization's website. The board reserves the power to overturn the company's content-moderation decisions and to issue non-binding "advisory opinions" on broader ethical dilemmas.
A representative for the board said in an email that the organization has chosen to withhold the identities of the two public figures whose cases sparked the investigation, one a woman from India and the other a woman from the United States, "to prevent further harm or the risk of gender-based harassment."
In the first case, an AI-generated nude image posted on Instagram was reported to Meta, but the report was automatically closed "because it was not reviewed within 48 hours," and the image was allowed to remain on the platform. It is unclear why Meta never reviewed the report, but the image was eventually removed after the Oversight Board notified the company that it would review this particular decision.
In the second case, an explicit image was posted to a Facebook group dedicated to AI-generated content, even though the same image had previously been removed for violating Facebook's policies against "bullying and harassment," specifically its rule against "derogatory sexual photoshops and drawings." Facebook removed the image a second time, and the user who posted it appealed the decision to Meta, arguing that it should remain on the platform, ultimately escalating the case to the Oversight Board.
Over the next two weeks, the Oversight Board will accept public comments on these incidents and on the broader landscape of explicit AI images on Meta's platforms. Once the comment period closes, the board will rule on the two cases, deciding whether the posts in question, and perhaps similar ones, "should be allowed on Instagram or Facebook."
"We know that Meta can moderate content more quickly and effectively in some markets and languages than in others. We want to consider whether we are protecting all women around the world in an equitable way," Thorning-Schmidt said.
The board did not say whether its ruling and subsequent recommendations would also apply to cases of explicit AI images targeting private individuals, who, unlike many public figures, lack the means or public profile to combat this type of abuse.
Research shows that technology-facilitated sexual violence, including the publication of explicit images without consent, can have a lasting and devastating impact on many aspects of victims' lives, including their health, career prospects, personal relationships, and financial well-being. Experts and advocates who support victims of image-based sexual abuse, such as disinformation researcher Nina Jankowicz, also point out that for many women, especially those living under conservative or paternalistic regimes, the consequences of this kind of reputational damage can even be fatal.
For example, Nighat Dad, an Oversight Board member and a human rights lawyer in Pakistan, recently wrote in a Rolling Stone article that in some parts of the world, blackmail based on AI images and other forms of technology-facilitated abuse have already led to honor killings and suicides.
The board's announcement comes less than a week after Meta began testing a new direct messaging feature aimed at protecting individual users, particularly teenagers, from "sextortion scams" and other forms of image-based abuse.
At the very least, both moves suggest that Meta and the Oversight Board are thinking carefully about the company's long-term strategy for dealing with image-based sexual abuse, and that they hope to avoid the kind of criticism that befell X in January, when explicit AI images of Taylor Swift spread so quickly that the platform was forced to temporarily block all searches for her name.