Meta Platforms’ independent Oversight Board is currently reviewing the company’s response to two AI-generated sexually explicit images of female celebrities shared on Facebook and Instagram. The images, whose subjects have not been named to avoid further harm, will serve as a case study for the board to evaluate Meta’s policies and enforcement practices regarding pornographic deepfakes.
Advances in AI technology are raising concerns about sexually explicit fakes targeting women and girls, which are becoming increasingly difficult to distinguish from genuine content. The issue received significant attention when X, the Elon Musk-owned platform, temporarily blocked searches for Taylor Swift following a proliferation of fake explicit images of the pop star. In India, several actresses, actors, and even athletes have fallen victim to deepfakes.
Key takeaways from the Oversight Board’s current review include:
Image nature: One image, shared on Instagram, showed a naked woman resembling an Indian celebrity. Another, posted in a Facebook group, showed a naked woman resembling an American celebrity in a sexually suggestive pose.
Meta’s actions: The image of the American woman was removed for violating Meta’s harassment policy, but the image of the Indian woman remained online until the board intervened.
Future initiatives: Meta has committed to abide by the Oversight Board’s decisions in these cases. The ongoing deepfake crisis has prompted calls for legislation criminalizing the creation of harmful deepfakes, and for technology companies to proactively prevent such misuse of their platforms.
With inputs from agencies