Meta’s Oversight Board Actively Reviewing AI-Nudity Reports

Date: April 17, 2024

Meta’s Oversight Board has been actively investigating how the company handled user flags and reports against AI-generated nudity.

The risks of AI are deepening with every advancement, and the rise of deepfake technology is making them more acute. Recent controversial posts featuring AI-generated deepfakes of two well-known celebrities have fueled skepticism about AI’s net benefit to society.

Both posts went viral on Instagram and Facebook. The fact that they circulated long enough to reach millions of users raises serious questions about Meta’s oversight policies and its readiness to act. To address this, Meta’s Oversight Board is investigating both cases to identify the root causes of the enforcement failures. The board is also inviting public comments on Meta’s handling of the deepfake posts.

Deepfakes are becoming a much larger problem in India than anyone anticipated. With generative AI tools widely accessible, numerous fake images and videos depicting women in the nude have surfaced on Instagram and Facebook.

The key points getting reviewed by Meta’s oversight board include:

- Nature of images - An image depicting a nude woman resembling an Indian public figure was posted on an Instagram page alongside deepfake images of other actresses.
- Meta’s response - While a deepfake image of an American celebrity was removed from the platform within a day, the Indian actress’s image remained online even after multiple flags and reports from users.
- Future vision - Meta has agreed to comply with the Oversight Board’s decision following a thorough investigation. Its action plans will depend on the board’s ruling, which will take public comments into account.

The Oversight Board has described a rather chaotic process for investigating and acting on reported posts on Facebook and Instagram. “In this case, a user reported the content to Meta for pornography. This report was automatically closed because it was not reviewed within 48 hours. The same user then appealed Meta’s decision to leave up the content but this was also automatically closed and so the content remained up. The user then appealed to the Board. As a result of the Board selecting this case, Meta determined that its decision to leave the content up was in error and removed the post for violating the Bullying and Harassment Community Standard,” said the Oversight Board.

The recent controversies have raised alarming concerns about AI safety. Prominent tech figures, including Elon Musk, have demanded a pause on AI development until adequate measures to monitor, control, and criminalize illegal activities are in place.

By Guest Author
