Date: February 07, 2024
YouTube quietly updated a policy in June that lets users request the removal of AI-generated content that simulates their voice or face.
The world is still absorbing the generative AI advances that big tech is shipping at a rapid pace. Alongside skepticism about whether the technology serves our best interests, concrete concerns have multiplied over the past year. Generative AI can produce lifelike imitations of real people, fueling a surge in non-consensual deepfake videos that can do serious damage to an individual's reputation, standing, and self-confidence.
Global regulators have woken up and are keeping a close eye on developments across the AI landscape. New digital laws and compliance requirements have begun to safeguard personal identity to some extent. In line with those efforts, YouTube has quietly updated its content policies to help curb deepfake AI videos on its platform.
The new policy allows users to request the removal of AI-generated videos and images that use their voice, face, or other recognizable features. The move marks a significant step by the tech giant toward ensuring consensual content, whether human-produced or AI-generated, and builds on the responsible AI agenda YouTube first introduced in November, which spans similar policies and features across its services.
According to the recently updated help documentation, the platform requires removal requests to come from the affected person themselves, with a few exceptions, such as when that person is a minor, lacks access to a computer, or is deceased.
Submitting a request does not guarantee removal, but it triggers a fast-tracked review. The owner of the offending channel has 48 hours to respond to the complaint with either a justification or corrective action. After that window, YouTube removes the content either on its own judgment or because the creator failed to act.
Synthetic media, including generative AI content, must be labeled in advance by the uploader to disclose the nature and intent of the video. YouTube is also testing crowdsourced notes that add context, such as whether a video is satire or misleading. The effort gives other tech giants, such as Meta, a reference point for distributing AI-generated content under ethical standards.
By Arpit Dubey
Arpit is a dreamer, wanderer, and tech nerd who loves to jot down tech musings and updates. With a knack for crafting compelling narratives, he covers everything from predictive analytics and game development to artificial intelligence (AI), cloud computing, IoT, SaaS, healthcare, and more, producing content that is as strategic as it is engaging. With a Logician's mind, he is always chasing sunrises and tech advancements while secretly preparing for the robot uprising.