Date: April 17, 2024
Meta’s Oversight Board is investigating how the company handled user flags and reports against AI-generated nudity.
The risks of AI deepen with every advancement, and the rise of deepfake technology has made them starkly visible. Recent controversial posts involving AI-generated deepfakes of two well-known celebrities have fueled skepticism about AI’s positive impact on society at large.
Both posts went viral on Instagram and Facebook. That they circulated long enough to reach millions of users raises serious questions about Meta’s oversight policies and its readiness to act. Meta’s Oversight Board is now investigating both cases to identify the root causes of the failures, wherever they can be established. The board is also inviting public comment on Meta’s handling of the deepfake content.
Deepfakes are becoming a much larger problem in India than anyone anticipated. With generative AI lowering the barrier to misuse, fake images and videos depicting women in the nude have surfaced on Instagram and Facebook.
> Nature of images - An image depicting a nude woman resembling a public figure in India was posted on an Instagram page alongside deepfake images of other actresses.
> Meta’s response - While a deepfake image of an American celebrity was removed from the platform within a day, the Indian actress’s image remained online even after multiple user flags and reports.
> Future vision - Meta has agreed to comply with the Oversight Board’s decision following a thorough investigation. Its action plan will depend on the board’s ruling, informed by public comments.
The Oversight Board described a chaotic process for reviewing and acting on reported posts on Facebook and Instagram. “In this case, a user reported the content to Meta for pornography. This report was automatically closed because it was not reviewed within 48 hours. The same user then appealed Meta’s decision to leave up the content but this was also automatically closed and so the content remained up. The user then appealed to the Board. As a result of the Board selecting this case, Meta determined that its decision to leave the content up was in error and removed the post for violating the Bullying and Harassment Community Standard,” said the Oversight Board.
The recent controversies have heightened concerns about AI safety. Prominent technology figures, including Elon Musk, have called for a pause on advanced AI development until adequate measures to monitor, control, and penalize illegal activity are in place.