Meta To Train Its AI On Any Image Or Video Taken By Users
Date: October 03, 2024
Meta recently revealed that it may use images and videos captured by users of its Ray-Ban Meta smart glasses to train its AI models.
One of the major concerns surrounding AI is the use of end-consumer data to train AI models. Most big tech players have carefully worded policies on whether user-generated images and videos may be used to train their models. Meta had previously communicated that it would train its AI without using user-generated content.
However, in recent updates to its policies, Meta has changed its privacy and content usage statement regarding AI model training. Meta’s Communications Manager, Emil Vazquez, told a tech media house via email, “In locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy.”
This means that Meta’s latest flagship AI product, the Ray-Ban Meta smart glasses, will use the images and videos users submit with a prompt for analysis, whatever those images and videos may show. Meta clarified that users who do not submit their content to Meta AI can be assured that it will not be used for training. However, once you ask Meta AI to analyze something you see through the glasses, Meta automatically gains permission to use that visual, audio, and textual content to train its AI models.
In effect, the company is using consumer data to build a massive stockpile of AI training material. What users want to know is how Meta’s training policies handle personal and sensitive images or videos, such as pictures of people, their homes, and other private details.
Consider a user who asks the Ray-Ban Meta smart glasses to identify what kind of glass a tabletop is made of. If a credit card is lying on the table, will Meta AI read it and send it along for training? Or will Meta send only the parts of the image relevant to the analysis prompt? Further clarity is expected from Meta’s representatives to help users gain a clear understanding of the privacy policies governing its AI products.
By Arpit Dubey
Arpit is a dreamer, wanderer, and tech nerd who loves to jot down tech musings and updates. With a knack for crafting compelling narratives, Arpit has a sharp specialization in everything: from Predictive Analytics to Game Development, along with artificial intelligence (AI), Cloud Computing, IoT, and let’s not forget SaaS, healthcare, and more. Arpit crafts content that’s as strategic as it is compelling. With a Logician's mind, he is always chasing sunrises and tech advancements while secretly preparing for the robot uprising.
