Date: April 17, 2024
Meta’s oversight board has been investigating how the company acted on user-raised flags and reports against AI-generated nudity.
The risks of AI deepen with every advancement, and the rise of deepfake technology has made them even harder to ignore. Recent controversial posts featuring AI-generated deepfakes of two well-known celebrities have renewed skepticism about AI’s positive impact on society at large.
Both posts went viral on Instagram and Facebook, and the fact that they had enough time to reach millions of users raises serious questions about Meta’s oversight policies and readiness to act. To address this, Meta’s oversight board is investigating both cases to identify the root causes of the enforcement failures. The board is also inviting public comments on Meta’s handling of the deepfake content.
Deepfakes are becoming a much larger problem in India than anyone anticipated. With generative AI lowering the barrier to misuse, fake images and videos depicting women in the nude have surfaced on Instagram and Facebook.
> Nature of images - An image depicting a nude woman resembling a public figure in India was posted on an Instagram account alongside deepfake images of other actresses.
> Meta’s response - While an American celebrity’s deepfake image was removed from the platform within a day, the Indian actress’s image remained online even after multiple flags and reports from users.
> Future Vision - Meta has agreed to comply with the oversight board’s decision after a thorough investigation. The action plan will depend on the board’s ruling, taking public comments into account.
The oversight board has described a chaotic process for reviewing and acting on reported posts on Facebook and Instagram. “In this case, a user reported the content to Meta for pornography. This report was automatically closed because it was not reviewed within 48 hours. The same user then appealed Meta’s decision to leave up the content but this was also automatically closed and so the content remained up. The user then appealed to the Board. As a result of the Board selecting this case, Meta determined that its decision to leave the content up was in error and removed the post for violating the Bullying and Harassment Community Standard,” said the oversight board.
The recent controversies have raised alarming concerns about AI safety. Prominent tech leaders, including Elon Musk, have called for a pause on AI development until adequate measures to monitor, control, and criminalize illegal uses are in place.