A user interacts with Anthropic’s Claude AI, now featuring selective past chat recall and a 1M-token context window for handling longer, more complex prompts. Image Source: ChatGPT-5

Claude Gains Selective Memory and 1M-Token Context Window

Key Takeaways:

  • Anthropic has added a selective memory feature to Claude, letting it reference past chats only when prompted.

  • The memory is workspace- and project-specific, with an option to disable it entirely for privacy.

  • Claude Sonnet 4 now supports a 1 million token context window, or roughly 750,000 words / 75,000 lines of code.

  • The new context size is five times Claude’s previous 200,000-token limit and more than double GPT-5’s.

  • Max, Team, and Enterprise subscribers get the memory feature now; the context upgrade is available via API and cloud partners.


Two Major Claude Updates Target Developers and Enterprises

As competition in the AI space accelerates, Anthropic is adding new capabilities to its Claude platform designed to make it more powerful and adaptable for enterprise use. The company is rolling out selective chat memory and a dramatically expanded context window, updates aimed at improving both day-to-day usability and the technical performance demanded by developers.

Selective Memory for Past Chats

Claude can now reference previous conversations when explicitly asked, making it easier to pick up paused projects or revisit research without re-explaining prior details. The feature works only within the current workspace and project, ensuring that unrelated conversations remain inaccessible.

Anthropic’s implementation is narrower in scope than competing memory systems. For example, OpenAI’s ChatGPT stores all past conversations by default and uses them to personalize responses to any new prompt, while Google Gemini can automatically recall past chats and has experimented with drawing on Google Search history to customize answers.

By contrast, Claude’s memory acts like an on-demand search of past conversations rather than a persistent profile. This means the chatbot will never reference previous exchanges unless prompted, and users can disable the capability entirely via a settings toggle. This design makes the system more privacy-minded by default, appealing to organizations that require tighter control over stored conversational data.

The rollout is beginning with Max, Team, and Enterprise subscribers, with plans to expand to other tiers in the near future.

1 Million Token Context Window

For API customers, Claude Sonnet 4 now supports a 1 million token context window, enabling it to process the equivalent of 750,000 words (more than the entire “Lord of the Rings” trilogy) or 75,000 lines of code in a single request. This is a fivefold leap from Claude’s previous 200,000 token capacity and more than double the 400,000 token limit offered by OpenAI’s GPT-5.
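To put those figures in perspective, here is a rough back-of-envelope helper based on the article’s 1,000,000-token ≈ 750,000-word ratio. The conversion factor and function names are illustrative only; real tokenization varies by text and is not part of Anthropic’s API.

```python
# Rough capacity math for Claude Sonnet 4's expanded context window,
# using the article's figures: 1,000,000 tokens ~= 750,000 words.
WINDOW_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75  # illustrative average; actual tokenization varies

def words_to_tokens(word_count: int) -> int:
    """Estimate how many tokens a document of `word_count` words consumes."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_window(word_count: int) -> bool:
    """True if the estimated token count fits within the 1M-token window."""
    return words_to_tokens(word_count) <= WINDOW_TOKENS
```

By this estimate, a 750,000-word corpus just fits the window, while an 800,000-word one would need to be split.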

The feature is also available through Amazon Bedrock and Google Cloud’s Vertex AI, extending access across Anthropic’s cloud partner network. Anthropic has built a strong enterprise customer base—particularly in AI coding platforms like GitHub Copilot, Windsurf, and Anysphere’s Cursor—and the expanded context window is aimed squarely at those users.

According to Brad Abrams, Anthropic’s product lead for Claude, the larger window is especially beneficial for long agentic coding tasks, where the AI must remember and build on earlier steps over minutes or hours of work. While rivals like Google Gemini 2.5 Pro (2 million tokens) and Meta Llama 4 Scout (10 million tokens) offer higher theoretical limits, Abrams said Anthropic focused on increasing the effective context window—how much the model can meaningfully use and understand—rather than just the raw number.

For prompts exceeding 200,000 tokens, API pricing rises to $6 per million input tokens and $22.50 per million output tokens, up from $3 and $15 respectively.
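The tiered pricing works out as follows. This is a minimal sketch assuming, as the article’s wording implies, that the higher rate applies to the whole request once the prompt exceeds 200,000 input tokens; the function name is illustrative, not an Anthropic API.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost under the two pricing tiers described above.

    Assumption: the long-context rate applies to all tokens of a request
    whose prompt exceeds 200K input tokens (per the article's framing).
    """
    if input_tokens > 200_000:
        in_rate, out_rate = 6.00, 22.50   # long-context tier, $ per M tokens
    else:
        in_rate, out_rate = 3.00, 15.00   # standard tier, $ per M tokens
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

Under these assumptions, a 500,000-token prompt producing 20,000 output tokens would cost about $3.45, versus $1.80 if the same volume were billed at the standard rate.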

Q&A: Claude’s New Capabilities

Q: What is Claude’s new selective memory?
A: A feature that lets Claude reference past chats only when asked, limited to the same workspace and project.

Q: Can I disable Claude’s memory feature?
A: Yes. Claude never references past chats unless explicitly prompted, and the capability can be disabled entirely via a settings toggle.

Q: How big is Claude’s new context window?
A: 1 million tokens, or about 750,000 words, enabling far longer and more complex prompts.

Q: How does Claude’s context size compare to GPT-5?
A: Claude’s is more than double GPT-5’s 400,000 tokens, but smaller than Google Gemini’s 2M and Meta’s 10M.

Q: Who gets these updates first?
A: Max, Team, and Enterprise subscribers get the memory feature now; API customers and cloud partners get the expanded context window immediately.

What This Means

Anthropic’s latest updates make Claude more context-aware, developer-friendly, and privacy-conscious, addressing two major needs for enterprise users. The selective memory feature removes the burden of re-explaining details while maintaining user control over data recall, a key differentiator from competitors that retain and recall past conversations automatically by default.

The 1M-token context window gives developers and coding platforms room to tackle entire codebases, multi-document research, or lengthy agentic tasks without breaking prompts into smaller segments. This improvement not only speeds up workflows but also allows for more coherent, accurate outputs in complex projects.

By pairing controlled memory with massive input capacity, Anthropic is signaling a strategy focused on practical performance gains over headline-grabbing token counts. In a competitive AI market, these updates position Claude as a reliable tool for organizations that value both scalability and privacy safeguards in their AI systems.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.
