
A parent manages ChatGPT’s new parental control settings from a tablet at home, enabling age-appropriate rules and safety features for teens. Image Source: ChatGPT-5
OpenAI adds parental controls and expert guidance to ChatGPT
Key Takeaways: OpenAI parental controls and ChatGPT safety features
OpenAI will launch parental controls for ChatGPT within the next month, giving parents tools to guide teen usage.
Parents will be able to link accounts, set age-appropriate rules, disable features, and receive distress notifications.
Reasoning models like GPT-5-thinking will soon handle sensitive conversations, such as when the system detects signs of acute emotional distress.
An Expert Council on Well-Being and AI and a Global Physician Network are advising OpenAI on mental health and safety design.
The changes are part of a 120-day initiative to strengthen ChatGPT’s role in supporting well-being, especially for teens.
Parental Controls: New Safeguards for Teen Users
OpenAI says its work to make ChatGPT as helpful as possible is ongoing, especially in moments of mental or emotional distress. Guided by expert input, the company continues to refine how its models recognize and respond in sensitive situations.
OpenAI will soon roll out parental controls for ChatGPT, marking a major step in how families can manage AI use at home. Parents will be able to link their accounts with their teen's (minimum age 13) through a simple email invitation.
The controls allow parents to:
Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.
Disable features such as memory and chat history.
Receive notifications if the system detects signs of acute distress, a feature guided by expert input to balance trust between parents and teens.
These parental controls build on existing features such as in-app reminders encouraging breaks during long sessions.
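For readers who want a concrete picture, the sketch below models these settings as a simple data structure. It is purely illustrative: the class, field names, and defaults are assumptions based on the features described above, not OpenAI's actual interface.

```python
from dataclasses import dataclass

# Hypothetical sketch of the parental controls described in this article.
# All names and defaults here are illustrative assumptions, not OpenAI's API.

@dataclass
class TeenAccountControls:
    teen_email: str                         # teen account linked via email invitation
    min_age: int = 13                       # minimum age for a linked teen account
    age_appropriate_rules: bool = True      # model behavior rules, on by default
    memory_enabled: bool = True             # parents may disable memory
    chat_history_enabled: bool = True       # parents may disable chat history
    notify_on_acute_distress: bool = True   # alert parents if acute distress is detected

# Example: a parent links a teen account, keeps the default rules,
# and turns off memory and chat history.
controls = TeenAccountControls(
    teen_email="teen@example.com",
    memory_enabled=False,
    chat_history_enabled=False,
)
print(controls)
```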
Partnering with Experts on Well-Being and AI
To shape these updates, OpenAI is working with two advisory groups:
The Expert Council on Well-Being and AI, which brings together specialists in youth development, mental health, and human-computer interaction to guide future safeguards and to support people's well-being.
The Global Physician Network, a pool of more than 250 physicians across 60 countries who have contributed to efforts such as the HealthBench evaluations. More than 90 physicians across 30 countries, including psychiatrists, pediatricians, and general practitioners, have already contributed to research on how AI should behave in mental health contexts, to safety research, and to model training.
Together, these groups provide expertise to help OpenAI design safety rules, evaluate health-related capabilities, and inform product decisions and research. OpenAI is adding more clinicians and researchers to its network, especially in areas such as eating disorders, substance use, and adolescent health.
Reasoning Models for Sensitive Conversations
OpenAI is also routing sensitive conversations to its reasoning models, which it says produce more careful responses. These models, including GPT-5-thinking and o3, are trained with a method called deliberative alignment, which lets them spend more time reasoning about context and safety guidelines before answering.
A new real-time router will direct certain conversations, such as those involving signs of acute distress, to a reasoning model even if the user initially selected a different one. OpenAI says this approach helps ensure more consistent adherence to safety guidelines and resilience against adversarial prompts.
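To make the routing idea concrete, here is a minimal sketch of how such a router could work. It is a toy under stated assumptions, not OpenAI's implementation: the keyword-based distress check stands in for whatever trained classifier the real system uses, and only the model names come from the article.

```python
# Illustrative sketch of a real-time safety router. The distress check is a
# placeholder assumption; a production system would use a trained classifier.

REASONING_MODEL = "gpt-5-thinking"  # reasoning model named in the article

def detect_acute_distress(message: str) -> bool:
    """Placeholder classifier: flags a few obvious distress phrases."""
    distress_markers = ("hopeless", "can't go on", "hurt myself")
    return any(marker in message.lower() for marker in distress_markers)

def route_model(message: str, selected_model: str) -> str:
    """Send conversations showing signs of acute distress to a reasoning
    model, even if the user initially selected a different one."""
    if detect_acute_distress(message):
        return REASONING_MODEL
    return selected_model

# Example: a distressed message overrides the user's selected model.
print(route_model("I feel hopeless and alone", selected_model="gpt-4o"))
# -> gpt-5-thinking
```

The key design point, per OpenAI, is that this check runs in real time on each conversation, so the override can happen mid-session rather than only at model selection.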
120-Day Roadmap for Safer ChatGPT
These updates are part of OpenAI’s 120-day initiative focused on four areas:
Expanding interventions for people in crisis.
Making it easier to reach emergency services.
Enabling connections to trusted contacts.
Strengthening protections for teens.
The company emphasized that this effort is only the beginning. OpenAI plans to continue evolving its approach throughout the year with the goal of making ChatGPT “as helpful as possible.”
Q&A: ChatGPT Parental Controls and Safety Updates
Q: What new parental controls are coming to ChatGPT?
A: Parents will be able to link accounts, set age-appropriate rules, disable features, and receive distress notifications for teens.
Q: When will parental controls be available?
A: Within the next month, as part of OpenAI’s current 120-day initiative.
Q: What role do reasoning models play in safety?
A: Reasoning models like GPT-5-thinking will handle sensitive conversations, offering more thoughtful and guideline-compliant responses.
Q: Who is advising OpenAI on well-being and mental health?
A: The Expert Council on Well-Being and AI and the Global Physician Network provide expertise on youth, health, and safety.
Q: What is the focus of the 120-day initiative?
A: Strengthening crisis interventions, emergency access, trusted contacts, and protections for teen users.
What This Means: AI safety, families, and teen protections
The addition of parental controls marks a significant shift in how OpenAI positions ChatGPT for families and younger users. By giving parents more direct oversight, the company is addressing one of the most common concerns about AI adoption: how teens interact with generative models.
At the same time, routing sensitive conversations to reasoning models and consulting with experts shows that OpenAI is embedding safeguards deeper into its infrastructure. This dual approach — combining product features with expert oversight — signals that AI safety is no longer an afterthought, but a core design principle.
If successful, these changes could help redefine ChatGPT from a general-purpose chatbot into a tool that parents, educators, and teens view as not only powerful, but also trustworthy.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.