
OpenAI is considering encryption for ChatGPT, starting with temporary chats, as privacy concerns grow. Image Source: ChatGPT-5
Encryption Could Be Coming to ChatGPT, Says Sam Altman
Key Takeaways:
OpenAI is exploring encryption for ChatGPT, likely beginning with temporary chats.
Temporary chats don’t appear in history and aren’t used to train models, but they are still subject to a federal court order requiring retention.
Full end-to-end encryption is complex for AI systems because providers like OpenAI act as an endpoint, and features like long-term memory require access to user data.
Altman said users are sharing sensitive medical and legal issues, raising calls for protections similar to doctor-patient or lawyer-client privilege.
Law enforcement requests are currently rare (double digits per year), but Altman predicted one major case could accelerate change.
Encryption on the Table for ChatGPT
Sam Altman, CEO of OpenAI, said the company is “very serious” about bringing encryption to ChatGPT, though there is no timeline yet for launch.
Altman suggested that temporary chats would be the most likely starting point. These chats currently don’t appear in user history and aren’t used to train models, but OpenAI may hold copies for up to 30 days for safety.
Even so, temporary and deleted chats remain subject to a federal court order issued in May, requiring OpenAI to retain their contents. That legal backdrop complicates any push for encryption, since data could still be compelled through the courts.
Altman said protecting the sensitive data people share has grown in importance as he has seen how ChatGPT is used in practice:
“People pour their heart out about their most sensitive medical issues or whatever to ChatGPT. It has radicalized me into thinking that AI privilege is a very important thing to pursue.”
Unlike conversations with doctors or lawyers, which are protected by legal confidentiality and privilege, AI chats currently offer no comparable protections.
The Challenge of AI Encryption
Encryption is straightforward in messaging apps, where only endpoints hold the keys. With AI chatbots, however, the provider — in this case, OpenAI — is itself a party to the conversation. That means encrypting data in transit isn’t enough to prevent OpenAI from accessing chat content or sharing it with law enforcement.
This makes true end-to-end encryption difficult to implement. Some companies have introduced partial solutions; Apple’s Private Cloud Compute for Apple Intelligence, for example, allows queries to run on Apple servers without broadly exposing data to the company.
For OpenAI, the challenge is even greater. Features like ChatGPT’s long-term memory require the company to retain access to user data. Fully encrypting everything would either disable those features or require re-engineering them, creating tradeoffs between privacy and functionality.
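To make the structural problem concrete, the sketch below is a minimal, hypothetical illustration in Python (using the open-source cryptography library; it does not describe OpenAI’s actual systems). It shows why encrypting a prompt in transit does not keep the provider out of the conversation: the provider must hold the decryption key so the model can read the prompt at all.

# A minimal sketch, not OpenAI's architecture: the prompt is encrypted in
# transit, but the provider holds the private key, because the model has to
# read the plaintext to answer. All names here are hypothetical.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The provider generates a key pair; clients only ever see the public key.
provider_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
provider_public_key = provider_private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Client side: the encrypted prompt is unreadable to anyone on the network.
prompt = b"I have a sensitive medical question..."
ciphertext = provider_public_key.encrypt(prompt, oaep)

# Provider side: the prompt must be decrypted before a model can run on it,
# so the provider (and anyone who can legally compel it) sees the plaintext.
plaintext_seen_by_provider = provider_private_key.decrypt(ciphertext, oaep)
assert plaintext_seen_by_provider == prompt

# In an end-to-end encrypted messaging app, only the two human endpoints
# hold keys and the server relays ciphertext it cannot read. Here the
# server is itself an endpoint, which is the problem described above.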
Government Access and Growing Pressure
Altman acknowledged that OpenAI has not yet received a high number of law enforcement data requests:
“The numbers are still very small for us, like double digits a year, but growing. It will only take one really big case for people to say, like, all right, we really do have to have a different approach here.”
He added that the stakes are higher because users are now treating ChatGPT like a confidant for medical or legal issues. Those conversations resemble professional consultations that normally carry doctor-patient or lawyer-client privilege, but with AI they lack such protection.
“If you can get better versions of those [medical and legal chats] from an AI, you ought to be able to have the same protections for the same reason,” Altman said.
Altman noted that this issue was not originally on his radar but became a priority once he realized the scale of sensitive data people are sharing.
Q&A: Encryption and ChatGPT
Q: What type of encryption is OpenAI considering for ChatGPT?
A: OpenAI is exploring encryption starting with temporary chats, which are not saved in history or used to train models.
Q: Why are temporary chats the likely first step?
A: They already operate with limited retention — only up to 30 days for safety — though they remain subject to a federal court order requiring retention.
Q: Why is full end-to-end encryption difficult for ChatGPT?
A: Because OpenAI itself is an endpoint in the conversation, and features like long-term memory require access to user data.
Q: How often does OpenAI receive law enforcement data requests?
A: Altman said the number is still very small — double digits per year — but growing.
Q: Why does Altman think encryption is urgent now?
A: He said users share sensitive medical and legal issues with ChatGPT, which convinced him that AI privilege and stronger protections are necessary.
What This Means
The debate over encryption in ChatGPT underscores a growing tension: users are increasingly relying on AI for sensitive, high-stakes conversations, but those chats currently lack the legal protections granted to human professionals.
Altman’s comments reflect a shift inside OpenAI, with privacy protections moving from a background issue to a frontline concern. Altman and OpenAI have begun pushing for new safeguards, including protections against government access, arguing that AI consultations should receive doctor-patient or lawyer-client-style privilege. He predicted that lawmakers, who have been generally receptive to privacy measures, may eventually adopt such protections.
“I don’t know how long it will take,” Altman said. “I think society has got to evolve.”
While full end-to-end encryption remains technically difficult, starting with temporary chats could mark an important first step. The broader question is whether AI companies, courts, and lawmakers can align on a model where privacy, safety, and functionality coexist.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.