Anthropic’s new privacy update screen asks Claude users to decide whether to share their conversations for AI training, with data retention extended to five years for those who opt in. Image Source: Anthropic

Anthropic Privacy Update: Claude Users Face 5-Year Data Retention Choice

Key Takeaways: Anthropic Privacy Policy Updates and Data Retention Changes

  • Anthropic is updating its Consumer Terms and Privacy Policy for Claude Free, Pro, and Max users.

  • Users must decide by September 28, 2025 whether to allow their conversations and coding sessions to be used for AI training.

  • If users opt in, data will be stored for up to five years to support model training and safety improvements.

  • Deleted conversations will not be used for future Claude training, even if data sharing is enabled.

  • The changes do not apply to enterprise services like Claude for Work, Claude Gov, Claude for Education, or API access.

  • TechCrunch notes concerns over design choices in Anthropic’s rollout, which may nudge users toward sharing data.


Anthropic’s New Policy Rollout: Data Sharing and Retention Choices for Claude Users

Anthropic announced changes to its Consumer Terms and Privacy Policy, giving Claude users control over whether their data is used to improve the company’s AI models. The updates affect all Claude Free, Pro, and Max plans, including use of Claude Code, but do not extend to enterprise (Claude for Work, Claude Gov, Claude for Education) or API-based services.

Starting this week, notifications will appear for existing users, who must accept the updated terms by September 28, 2025. New users will choose their preference during signup. Anthropic said participation will help “improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” while also enhancing Claude’s capabilities in “coding, analysis, and reasoning.”

For users who opt in, data from new or resumed chats and coding sessions will be retained for five years to support training and safety improvements. Importantly, if a user deletes a conversation with Claude, it will not be used for future model training. Those who decline will remain under Anthropic’s current 30-day retention policy. Feedback submitted about Claude’s responses will also fall under the new five-year retention rule if training is enabled.

Anthropic emphasized that users remain “always in control” of their settings and can change them at any time through Privacy Settings. The company also stressed that it does not sell user data and applies tools to filter or obscure sensitive information.

TechCrunch: Competitive Pressures and User Consent in AI Data Policies

While Anthropic framed the update as a move toward user choice, TechCrunch highlighted that it reflects broader industry pressures. Training large language models requires vast amounts of conversational data, and access to millions of Claude interactions could strengthen Anthropic’s position against competitors like OpenAI and Google. By contrast, xAI’s Grok draws on public X posts for training, underscoring how companies are tapping very different sources to fuel their models.

The shift also underscores growing scrutiny of data retention policies across the AI sector. OpenAI, for example, is currently fighting a court order that requires it to retain all consumer ChatGPT conversations indefinitely due to ongoing litigation. In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.”

TechCrunch also raised concerns about how Anthropic is presenting the policy changes. Existing users will encounter a pop-up with a prominent “Accept” button, while the toggle to permit training appears in much smaller text and is switched on by default. The Verge, cited by TechCrunch, noted this design could lead users to consent without fully realizing it.

Privacy experts warn that such design choices contribute to a lack of meaningful user consent. The Federal Trade Commission (FTC) under the Biden administration has previously cautioned AI companies against making policy shifts in ways that obscure key details, though it remains unclear how actively the agency is enforcing those warnings today.

Q&A: Anthropic Privacy Policy, Data Retention, and Claude User Options

Q: What is Anthropic changing in its privacy policy?
A: Anthropic is requiring Claude Free, Pro, and Max users to decide whether their chats and coding sessions can be used for AI training.

Q: Who is affected by the new terms?
A: The updates apply to consumer users of Claude Free, Pro, and Max, including Claude Code. They do not affect enterprise services like Claude for Work, Claude Gov, Claude for Education, or API access.

Q: What happens if I opt in to data sharing?
A: Your conversations and coding sessions will be retained for five years and may be used to improve Claude’s safety and reasoning capabilities.

Q: What if I delete a conversation?
A: Deleted chats will not be used for model training, even if you opted in to data sharing.

Q: What if I don’t opt in?
A: Your data will remain under the existing 30-day retention policy, and it will not be used for training future Claude models.

What This Means: Privacy as the Fault Line for Trust in AI Companies

Anthropic’s policy update reflects both the growing demand for training data in the AI industry and the mounting scrutiny around privacy and user consent. While the company is giving users direct control, the presentation of choices raises questions about how meaningful that control will be in practice.

The move also highlights broader competitive dynamics in AI. Companies like Anthropic, OpenAI, and Google rely on real-world conversations to refine their models, while others, such as xAI’s Grok, draw on public X posts as training data. Each approach exposes different risks and sensitivities, but all share the same pressure: finding enough quality data to stay competitive.

For consumers, the decision is simple but significant: share data to help improve Claude — or keep conversations private under stricter retention limits. For the industry, it highlights the ongoing tension between rapid innovation and robust privacy safeguards. As AI adoption accelerates, one thing is clear: privacy will remain the fault line where trust in AI companies is won or lost.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.
