Meta’s Oversight Board announced that it will review Meta’s handling of two cases in which explicit AI-generated images of female public figures were spread on Facebook and Instagram, respectively. One of the cases involved a sexually explicit deepfake of an Indian public figure.
The Oversight Board, established by Meta, functions independently to evaluate Meta’s content review process. The board bases its assessments on Meta’s response to a select number of “highly emblematic cases,” which allows it to consider whether decisions were made in accordance with Meta’s stated values and policies.
In a blog post, the board said it selects cases based on whether they “have the potential to impact many users around the world,” are “critical to public debate,” or raise “important questions” about Meta’s policies.
Out of respect for the rights of those depicted in the content, and to reduce the risk of further aggravating the harassment, the board said it would not name or share personal information about the women shown, and it asked public commenters to avoid speculating about their identities.
What are the cases?
The first case concerns an Instagram account that posted sexually explicit AI-generated images of women, created to resemble Indian celebrities using deepfake technology. A user reported the content to Meta as pornography, but the report was not reviewed within 48 hours and was automatically closed by Meta’s system. The user’s subsequent appeal to Meta was also rejected, and the content was not removed. This prompted the user to appeal to the board. After the board selected the case, Meta determined that its decision to leave the content up had been a mistake and removed the post for violating its community standards on bullying and harassment.
The second case involved a sexually explicit AI-generated image posted on Facebook, made to resemble an American celebrity. The matter was escalated to Meta’s policy or subject-matter experts, who removed the image for violating Meta’s bullying and harassment policy, specifically its rule against “derogatory sexual photoshops and drawings.” The image was also added to Meta’s Media Matching Service Bank, an automated enforcement system that finds and removes images human reviewers have already identified as violating Meta’s policies.
Why were these cases chosen?
The board said these cases were specifically chosen “to assess whether Meta’s policies and its enforcement practices are effective in addressing explicit AI-generated images.”
In the first case, the board said it wants to examine whether Meta’s reliance on an automated system that closes reports within 48 hours if they are not reviewed is effective in combating the spread of sexually explicit AI-generated images.
In the second case, TechCrunch asked the board why it chose a case in which Meta successfully removed the image; the board answered that it chose cases “emblematic of broader issues across Meta’s platform.” Chair Helle Thorning-Schmidt said in her statement to TechCrunch: “By taking cases from the United States and India, we want to examine whether Meta is protecting all women around the world in an equitable manner.”
The blog post said the cases aim to analyze the “nature and severity of harm” that AI-generated explicit content causes women, particularly public figures. The board also aims to assess Meta’s enforcement of the “derogatory sexual photoshops and drawings” provision of its bullying and harassment policy, and its use of the Media Matching Service Bank.
The board has solicited public comments by April 30 from those who can contribute “valuable perspectives.” It has the authority to make policy recommendations to Meta; while these recommendations are non-binding, Meta is obligated to respond to them within 60 days.