- Meta’s Oversight Board is investigating two cases involving AI-generated images of female public figures.
- In one incident, a pornographic deepfake of an American woman was posted to Facebook.
- The board will consider Meta’s policy on how to handle explicit AI images.
Meta’s oversight board announced Tuesday that it will investigate the company’s policies regarding explicit AI deepfakes of women.
The tech giant’s board said it was investigating two specific cases involving Instagram and Facebook.
One of the incidents involved an AI-generated image of a nude woman, resembling an American public figure, being groped by a man. The woman’s name appeared in the caption, and the image was posted to the AI Works Facebook group.
The Oversight Board did not reveal which female public figure was depicted in the AI deepfake.
The board said the same AI-generated nude image had already been posted by another user before it was shared in the Facebook group, and that Meta removed it for violating its bullying and harassment policy against “derogatory sexual photoshops or drawings.”
The user who posted the photo appealed its removal, but Meta’s automated system denied the appeal. The user then took the case to the board.
In January, pornographic deepfakes of Taylor Swift circulating on X caused an uproar online. Many of the images showed the pop star engaging in sex acts at a football stadium.
One post, uploaded by a verified user on X (formerly Twitter), garnered over 45 million views before being removed by moderators approximately 17 hours later.
Another investigation involves AI-generated images of nude women resembling Indian public figures. The content was posted on an Instagram account that only shares AI-generated images of Indian women.
In this case, Meta failed to remove the content after it was reported twice. The user appealed to the board, and Meta subsequently determined that its decision to leave the content up had been a mistake. The post was then removed for violating community standards against bullying and harassment.
The board said it selected these cases to see if Meta was effectively addressing explicit images generated by AI.
Politicians, celebrities, and business leaders are speaking out about deepfakes and the risks they pose.
White House press secretary Karine Jean-Pierre previously said that lax enforcement against deepfakes disproportionately affects women and girls, who are “by far the target.”
Jean-Pierre also said that while the law should play a role in addressing the issue, social media platforms should also ban harmful AI content on their own.
Microsoft CEO Satya Nadella called the nude deepfakes of Taylor Swift “alarming and terrible” and urged stronger protections against such images.
The problem extends beyond pornography: deepfakes could also be used to influence elections. During the primary season, a robocall featuring an AI-generated imitation of President Joe Biden’s voice came to light.
The Oversight Board is seeking public input on the two cases. It has asked for comments suggesting strategies for addressing the issue, as well as feedback on its severity.
The board will spend the next few weeks deliberating before publishing its decisions, the statement said. Its recommendations are not binding, but Meta must respond to them within 60 days.