
Anthropic Requires Claude Users to Opt Out of AI Training by Sep 28

Claude maker shifts from a privacy-first stance to an opt-out model, extending data retention from 30 days to five years for those who participate.

Anthropic is implementing a significant shift in its data privacy approach, requiring millions of Claude users to decide by September 28, 2025, whether they want their conversations used to train future AI models – a stark departure from the company's previous policy of not using consumer chat data for model training at all.

The New Reality for Claude Users

The changes affect users across the Claude Free, Pro, and Max tiers, including those using Claude Code, fundamentally altering how the AI assistant handles user data. The company now wants to train its AI systems on user conversations and coding sessions, and it says it is extending data retention to five years for those who do not opt out.

This represents a dramatic shift from Anthropic's earlier stance. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer" or unless their input was flagged as violating its policies.

Business customers using Claude Gov, Claude for Work, Claude for Education, or API access remain unaffected by these changes, maintaining the privacy protections that enterprise users expect.

The Opt-Out Design Controversy

Privacy advocates are raising concerns about how Anthropic is implementing these changes. New users will choose their preference during signup, but existing users see a pop-up headed "Updates to Consumer Terms and Policies" in large text with a prominent black "Accept" button; the much smaller toggle for training permissions sits below it in fine print and is switched to "On" by default.

This design has drawn criticism from observers who worry that users will quickly click "Accept" without noticing they are agreeing to data sharing, a concern The Verge highlighted in its coverage.

Anthropic's Justification

In its official announcement, Anthropic frames the changes around model improvement and safety. "By participating, you'll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You'll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," the company stated.

The company also explained the technical rationale behind the extended retention period: "AI development cycles span years—models released today began development 18 to 24 months ago. Keeping data consistent across the training process helps make the models more consistent, too: models trained on similar data will respond, reason, and produce outputs in similar ways, making the changes between model upgrades much smoother for users."

By Arpit Dubey
