
OpenAI introduces parental controls in ChatGPT, allowing parents to link accounts, set blackout hours, and receive alerts to help teens use AI safely. Image Source: ChatGPT-5
Key Takeaways: Teen Safety and Age Prediction
OpenAI outlines core principles: privacy, freedom, and safety for teens, with safety prioritized when values conflict.
An age prediction system will separate under-18 users from adults, defaulting to a teen experience when age is uncertain.
Parental controls allow guardians to link accounts, disable features, set blackout hours, and receive notifications for distress signals.
Privacy safeguards include privileged-like protection for ChatGPT conversations, limiting even OpenAI employee access.
Ongoing learning and transparency will guide improvements, with input from experts, advocates, and policymakers.
As AI becomes a daily part of young people’s lives, OpenAI is working to ensure that ChatGPT provides a safe, age-appropriate, and trustworthy experience for teens. The company acknowledges that conversations with AI can be as personal as speaking with a doctor or lawyer, making privacy protections essential. At the same time, freedom and creativity must be balanced against safeguards; when those principles conflict, teen safety comes first.
Principles: Teen Safety, Freedom, and Privacy
OpenAI has been in dialogue with experts, advocacy groups, and policymakers to define its approach to teen protection in ChatGPT. The company emphasizes three guiding principles:
Privacy → Conversations with AI are treated as highly sensitive, on par with medical or legal discussions. OpenAI is advocating with policymakers for privileged-like protections so that conversations remain confidential. Advanced security features are being developed to keep user data private even from employees, with exceptions for extreme risks such as threats to life, planned harm, or large-scale cybersecurity incidents. OpenAI also makes clear that teen data will not be exploited for commercial purposes, reinforcing its commitment to responsible use.
Freedom → Adult users should be able to use AI tools with broad latitude, provided they do not cause harm to others. OpenAI’s approach is to “treat adults like adults,” extending freedom as models become more steerable. This means that ChatGPT avoids generating certain content by default but will respond if an adult user specifically requests it. For example, the model will not typically produce flirtatious dialogue, but it can if asked by an adult. Similarly, it will not provide instructions for self-harm, but it can support creative writing projects that explore those themes in a fictional setting. This balance reflects OpenAI’s goal of maximizing freedom while protecting against misuse and unintended harm.
Safety → For teen users, safety is prioritized over privacy and freedom. ChatGPT has been trained with rules to avoid flirtatious talk and to block content involving suicide or self-harm, even in imaginative or creative contexts. When a teen shows signs of suicidal ideation or acute distress, ChatGPT is designed to escalate by notifying parents. If parents cannot be reached, and the risk is deemed imminent, law enforcement may be contacted as a last resort. These measures are informed by expert input and reflect OpenAI’s belief that minors require heightened protection in their interactions with AI.
OpenAI acknowledges that privacy, freedom, and safety can at times be in direct conflict, creating difficult trade-offs. The company emphasizes that when such conflicts arise, the guiding principle is clear: teen safety takes precedence, even if it requires limiting freedoms or making privacy compromises. OpenAI also recognizes that not all users or stakeholders will agree with these choices, but stresses the importance of being transparent about its decision-making.
Building Towards Age Prediction
To put its principles into practice, OpenAI is developing a long-term age prediction system that can estimate whether a user is likely to be above or below 18 years old. The goal is to create a scalable solution that avoids requiring every teen to submit government-issued ID, while still ensuring they receive the appropriate protections. In some cases or countries, however, OpenAI may also require ID verification. The company acknowledges this is a privacy compromise for adults but believes it is a necessary trade-off to keep minors safe.
If a user is identified as under 18, they will automatically be placed into a teen-specific ChatGPT experience. This includes blocking graphic sexual content and adding extra safeguards in cases of acute distress, where escalation to parents or authorities may be necessary.
If the system cannot confidently determine a user’s age, OpenAI will default to the under-18 experience, erring on the side of caution. Adults who are mistakenly categorized as teens will be able to verify their age through ID to access the full experience.
OpenAI stresses that age prediction is inherently complex and no system will be perfect. Still, it argues that prioritizing safety through conservative defaults is the right approach when working with younger users.
Parental Controls: Supporting Families
While the age prediction system continues to be built, parental controls will serve as the most reliable near-term safeguard. These controls, expected to roll out by the end of the month, are designed to give families practical tools for shaping how ChatGPT is used at home.
Key features include:
Account linking → Parents can connect their account with their teen’s account (minimum age 13) using a simple email invitation.
Teen-specific guidance → Parents can help shape ChatGPT’s responses by applying rules designed for younger users.
Feature management → Parents can disable features such as memory or chat history, tailoring the experience to their comfort level.
Distress notifications → If ChatGPT detects signs of acute distress, parents will receive alerts. In rare emergencies where parents cannot be reached, law enforcement may be contacted as a last resort; OpenAI says this escalation path was shaped by expert input to preserve trust between teens and families.
Blackout hours → A newly added control allows parents to restrict ChatGPT access during specific times of day, such as overnight.
Healthy use reminders → Teens will also continue to see in-app nudges encouraging breaks during long sessions.
OpenAI emphasizes that these tools are meant to foster trust and collaboration, not surveillance. The company has worked closely with experts to design controls that empower families while respecting teen autonomy.
If You Need Help
If you or someone you know is struggling with thoughts of suicide or self-harm, help is available:
Call 1-800-273-8255 for the National Suicide Prevention Lifeline.
Text HOME to 741741 for free support from the Crisis Text Line, or call or text 988 in the U.S. for immediate assistance.
Outside the U.S., visit the International Association for Suicide Prevention for a global directory of resources.
Q&A: Teen Safety and Age Prediction in AI
Q: What principles guide OpenAI’s teen safety work?
A: Privacy, freedom, and safety, with safety prioritized when principles conflict.
Q: How does age prediction work without requiring IDs?
A: It estimates age from ChatGPT usage patterns. If uncertain, users default to the teen experience, with adults able to verify age via ID.
Q: What parental controls will be available?
A: Account linking, feature disabling, blackout hours, distress notifications, and chat history management.
Q: How does OpenAI balance privacy with protection?
A: By applying privileged-like protections to conversations while intervening only in cases of serious misuse or imminent harm.
Q: What commitments has OpenAI made for the future?
A: To refine systems, consult experts, and remain transparent about difficult trade-offs.
What This Means: A Safer AI Experience for Teens
The combined initiatives of age prediction and parental controls represent a major shift in how OpenAI balances teen safety, freedom, and privacy.
For teens, this means stronger protection from harmful content and support during moments of crisis. For parents, it provides practical tools to guide usage while maintaining trust. For society, it acknowledges that AI accounts may contain information as sensitive as medical or legal records, requiring unprecedented levels of protection.
OpenAI credits the input of partners, advocates, and experts, emphasizing that their feedback continues to shape safeguards for millions of young users worldwide.
The trajectory is clear: safety for minors comes first, with privacy and freedom carefully balanced alongside it. By embedding safeguards, learning openly, and collaborating with experts, OpenAI aims to ensure that ChatGPT evolves into a safer, more trustworthy AI for younger generations.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.