
Sam Altman Warns ChatGPT Isn’t Legally Confidential for Therapy Conversations

Sam Altman says ChatGPT lacks the legal protections of a therapist, warning users not to treat the AI like a confidential support tool.

Image: A woman at a laptop viewing an emotionally sensitive ChatGPT conversation, with a crossed-out judge's gavel on the wall behind her signaling the absence of legal confidentiality for AI chats.

Image Source: ChatGPT-4o


Key Takeaways:

  • Sam Altman cautioned that ChatGPT lacks legal confidentiality, even when used as a therapist or life coach.

  • Sensitive chats could be subpoenaed in legal cases, since AI conversations are not protected like doctor–patient or lawyer–client exchanges.

  • OpenAI is appealing a court order that would require it to retain hundreds of millions of user conversations for legal discovery.

  • The lack of privacy protection could slow user adoption, especially for emotional or personal use cases.

  • Altman says new laws are needed to treat AI chats with the same privacy as conversations with licensed professionals.

OpenAI CEO Sam Altman warned that ChatGPT users should not assume their AI conversations are private—especially when using the chatbot as a therapist, life coach, or emotional support tool.

In an appearance on This Past Weekend with Theo Von, Altman said that users—particularly young people—are already turning to ChatGPT for deeply personal conversations, often about relationships, anxiety, or life decisions.

“People talk about the most personal sh** in their lives to ChatGPT,” Altman said. “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’”

But there’s currently no doctor–patient privilege or legal confidentiality for AI conversations. That means ChatGPT users do not have the same legal protections they would if speaking to a human therapist, doctor, or attorney.

“Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

Asked how AI fits into the current legal system, Altman said the absence of a clear regulatory framework is precisely why those confidentiality protections do not yet extend to conversations with AI.

Altman said this gap could have serious consequences in court. If someone shares sensitive information with ChatGPT and later becomes involved in a lawsuit, OpenAI could be compelled to produce those chats.

“I think that’s very screwed up,” Altman said. “We should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.”

The concern comes amid broader legal pressure on OpenAI. The company is currently fighting a court order in its ongoing lawsuit with The New York Times. That order would require OpenAI to store and preserve conversations from hundreds of millions of users, except those using ChatGPT Enterprise, which has different privacy protections.

In a statement on its website, OpenAI said it is appealing the court order, which it described as “an overreach.” The company warned that if courts can override its internal privacy decisions, it could create a precedent for expanded legal discovery and law enforcement access to user conversations. While tech companies are already regularly subpoenaed for user data in criminal cases, the prospect of storing and producing millions of AI chat logs raises new legal and ethical concerns around digital privacy.

A Broader Privacy Problem

The legal uncertainty reflects a growing mismatch between how people use AI tools and how those tools are treated under the law. While platforms like ChatGPT are being used for deeply personal guidance, they aren’t covered by privacy laws like HIPAA or legal privilege statutes.

That distinction could create new vulnerabilities—especially in politically charged contexts. After the Supreme Court overturned Roe v. Wade, some users began switching to encrypted or private health-tracking apps amid fears that digital data could be used to prosecute individuals in states with strict abortion laws.

Altman acknowledged that many users—like podcast host Theo Von—are hesitant to use ChatGPT at all due to privacy concerns.

“I think it makes sense… to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity,” he said.

Frequently Asked Questions

Q: Is ChatGPT a safe place to share personal or emotional information?
A: No. Conversations with ChatGPT do not carry legal confidentiality, even if users treat it like a therapist or coach.

Q: Could my ChatGPT chats be used against me in court?
A: Yes. Without legal protections, your chats could be subpoenaed, especially if stored for legal reasons or under a court order.

Q: What is OpenAI doing to protect user chats?
A: OpenAI is appealing a court order that would force it to retain vast amounts of chat data. Under its standard policy, deleted chats are removed from its systems within 30 days, unless the company is legally required to preserve them.

Q: What kind of privacy laws are needed?
A: Altman argues that AI chats should have protections similar to doctor–patient or attorney–client privilege, though no such laws exist today.

What This Means

ChatGPT’s growing role in emotional and psychological support comes with an important legal gap: no confidentiality protections. That means users who rely on AI for personal advice may be exposing their data without realizing it.

As AI systems become more embedded in everyday life, Sam Altman is calling for urgent legal reforms to bring AI conversations in line with professional privacy standards. Even companies at the center of AI development are now recognizing the need for clear legal safeguards.

Yet that demand also underscores a broader tension: AI firms have pushed back against regulation in other areas, even as they now advocate for new rules around data confidentiality. That contradiction raises a difficult question: Can tech companies credibly call for guardrails only when it suits them?

Until those legal privacy protections are in place, users should be aware that sensitive conversations with AI remain accessible to courts and law enforcement—with no current legal framework to shield them.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.