
Meta AI Users Are Accidentally Sharing Private Chats Publicly

Image: A young woman sits alone on her bed at night, her face lit by the glow of a smartphone showing an AI chat interface.

Image Source: ChatGPT-4o


Meta’s new AI chatbot app is revealing more than it should—thanks to a public feed where users are unknowingly posting private, often deeply personal conversations. From confessions and relationship questions to sensitive legal concerns, the app's "discover" feed has become an unexpected window into users' lives—and a flashpoint for privacy advocates.

Personal Conversations, Publicly Posted

Meta launched its standalone AI chatbot app in April, aiming to provide conversational, personalized answers on any topic—similar to OpenAI’s ChatGPT or Anthropic’s Claude. But unlike those platforms, Meta’s app features a “discover” feed where users can publish their chats for others to view.

The feed quickly filled with deeply personal exchanges. Users have asked Meta AI how to help a friend come out of the closet, how to congratulate a niece on her graduation, or how to approach dating preferences, sometimes in questionable phrasing, such as one user asking “in Asian” whether someone likes older men. Others posted questions about Jesus’ divinity, managing picky toddlers, or rebuilding after a breakup.

There’s also more troubling content: posts about tax evasion, court matters involving real names, and even requests for images of “two 21-year-old women wrestling in a mud bath.” One user openly questioned why so many “super personal” posts were appearing, not realizing their own query was being published too.

A Confusing Interface and Blurred Privacy

According to Meta, chats are private by default and users must tap a share button to post publicly. But the process has led to confusion. The share button does not clearly state where or how posts will appear, or that they may be visible beyond the user’s social circle.

Meta spokesperson Daniel Roberts said users can choose any username on the public feed, but some real identities are visible, and in some cases, the content includes names or sensitive details. Security experts have flagged posts that inadvertently reveal court records, home addresses, or workplace affiliations.

“When you log into Meta AI with your Instagram account, and your Instagram account is public, then so too are your searches about how to meet ‘big booty women,’” one critic noted.

Emotional Support or Data Risk?

Many users turn to chatbots for emotional support or life advice, creating an expectation of privacy that isn’t always met. “We’ve seen a lot of examples of people sending very, very personal information to AI therapist chatbots,” said Calli Schroeder of the Electronic Privacy Information Center. “I think many people assume there’s some baseline level of confidentiality there. There’s not.”

Michal Luria, a researcher at the Center for Democracy and Technology, noted that AI systems can trigger human social instincts, leading people to open up. “We just naturally respond as if we are talking to another person, and this reaction is automatic. It's kind of hard to rewire,” she said.

Meta CEO Mark Zuckerberg has suggested that helping users process difficult conversations is one of Meta AI’s core use cases. “People use stuff that’s valuable for them,” he said in an April podcast. “If you think something someone is doing is bad and they think it’s really valuable, most of the time in my experience, they’re right and you’re wrong.”

A Different Model from ChatGPT or Claude

Meta’s model departs sharply from rivals like Claude and ChatGPT, which do not publish user conversations. While platforms like Midjourney or OpenAI’s Sora allow public sharing of AI-generated images, they don’t expose personal chat history.

Meta’s “discover” feed, by contrast, reads like a live stream of search histories and private journals—filled with questions, audio clips, and AI-generated images. Among the lighter fare are humorous or politically charged creations, such as Donald Trump in a diaper or the Grim Reaper on a motorcycle. But alongside those are vulnerable, serious queries that users may not have realized were public.

Few Guardrails, Growing Scrutiny

There are few legal requirements forcing companies to set clear privacy boundaries for chatbots. In fact, Congress is currently considering legislation that would prevent states from passing new AI regulations for the next decade.

Meta is not alone in facing questions about data handling. OpenAI, too, has faced scrutiny over how its memory feature stores and responds to user data. While the company is working to give users more control, it is also under a court order—stemming from a lawsuit filed by The New York Times—to retain all customer data, including deleted chats, despite its usual policy of erasing them after 30 days.

As these tools grow more powerful and personal, users—and regulators—are beginning to demand clearer boundaries.

What This Means

Meta’s decision to merge a private-feeling chatbot with a public content feed has created a privacy minefield. This isn’t just a design flaw—it highlights broader concerns about how generative AI platforms handle personal data, user trust, and consent.

The issue goes beyond one app. Across the industry, companies are introducing memory features, personalization tools, and chat histories designed to make AI feel more helpful—and more human. But without clear disclosures or stronger defaults, users may not realize when their conversations are being saved, analyzed, or made public. As the line blurs between tech product and confidant, the stakes for user awareness grow much higher.

As AI chatbots grow more personal, the real issue may not be what users are willing to share—but whether they truly understand who’s listening, storing, and learning from their words.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.