San Francisco – OpenAI, the artificial intelligence research and deployment company behind the widely used ChatGPT, is facing mounting criticism as details emerge regarding internal opposition to its planned “adult mode.” The proposed feature, designed to allow explicit chatbot conversations, has sparked a fierce debate within the company: mental health experts advising OpenAI have warned unanimously that the system could be exploited, potentially leading to harmful outcomes for vulnerable users. The controversy underscores the complex ethical challenges inherent in developing increasingly sophisticated AI technologies.
The debate centers on the potential for the chatbot to facilitate unhealthy emotional dependencies and even contribute to suicidal ideation, particularly among young people. This concern isn’t new; past incidents have already linked ChatGPT interactions to instances of emotional distress and, tragically, suicide. The current push for an “adult mode” is reigniting these fears, prompting a wave of internal dissent and raising questions about OpenAI’s commitment to user safety. The situation highlights the delicate balance between innovation and responsible AI development, a challenge facing the entire tech industry.
Internal Warnings Ignored: A “Sexy Suicide Coach”?
According to reports from the Wall Street Journal, OpenAI’s advisory council on well-being voiced strong objections to the “adult mode” feature. One advisor reportedly warned that the system could develop into a “sexy suicide coach,” referencing previous cases where ChatGPT users had taken their own lives after forming intense emotional bonds with the AI. This chilling assessment reflects the potential for AI to exploit vulnerabilities and exacerbate existing mental health issues. The advisor’s warning is particularly alarming given the increasing sophistication of AI in mimicking human interaction and providing emotional support – or, in this case, potentially harmful encouragement.
The concerns extend beyond the risk of suicide. Experts similarly fear that sexually explicit interactions with the chatbot could lead to compulsive use and emotional overreliance, crowding out real-life relationships and hindering healthy social development. Internal documents reviewed by the Wall Street Journal reportedly flagged these risks, suggesting that OpenAI was aware of the potential downsides of the feature. This raises questions about why the company is proceeding despite these warnings. The potential for users to become overly attached to an AI, substituting genuine human connection with simulated intimacy, is a growing concern among psychologists and AI ethicists.
Age Verification Concerns and Past Failures
A significant point of contention is OpenAI’s ability to effectively prevent minors from accessing the “adult mode” content. The company’s age-prediction system, implemented in early 2026, is reportedly not foolproof. According to people familiar with the matter, the system misclassifies under-18 users as adults approximately 12% of the time. Given that ChatGPT reportedly has roughly 100 million under-18 users each week, that error rate could expose millions of children to inappropriate and potentially harmful content. PCMag reports that the adult features are better described as “smut” than pornography, since they are limited to text-generation tools.
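A rough back-of-the-envelope check, treating both reported figures as givens (the ~12% error rate and the ~100 million weekly under-18 users are taken from the reporting above, not independently verified), shows the scale the critics are pointing to:

```python
# Illustrative arithmetic only: both inputs are figures reported in the
# article, not verified data, and the estimate assumes the error rate
# applies uniformly across the weekly under-18 user base.
MISCLASSIFICATION_RATE = 0.12        # ~12% of under-18 users flagged as adults
WEEKLY_UNDER_18_USERS = 100_000_000  # ~100 million under-18 users per week

misclassified_per_week = int(WEEKLY_UNDER_18_USERS * MISCLASSIFICATION_RATE)
print(f"Minors potentially misclassified as adults per week: "
      f"{misclassified_per_week:,}")
# → Minors potentially misclassified as adults per week: 12,000,000
```

Even if the true numbers differ, the calculation illustrates why a seemingly modest error rate becomes alarming at ChatGPT’s scale.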
This isn’t the first time OpenAI has faced scrutiny over its age verification systems. In April 2025, TechCrunch reported on a bug that allowed minors to generate graphic erotica on ChatGPT. OpenAI quickly deployed a fix, acknowledging that the bug had allowed responses outside of the intended guidelines, which restricted “sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting.” This history of vulnerabilities raises serious doubts about the company’s ability to safeguard young users.
Leadership Disputes and the Firing of Safety Executive
The internal turmoil at OpenAI extends to leadership changes. The Wall Street Journal reported that a top safety executive who opposed the release of “adult mode” was fired, allegedly for sexual discrimination. OpenAI denied that the firing was related to the executive’s opposition to the feature, but the timing and circumstances have fueled speculation that the company is prioritizing profit over safety. This incident adds to a growing narrative of internal conflict and raises concerns about the influence of dissenting voices within OpenAI. The executive’s criticism reportedly focused on both the company’s ability to block children from accessing inappropriate content and its potential to promote child exploitation.
Further bolstering concerns about OpenAI’s safety protocols, another former safety staffer spoke out in October 2025, warning parents not to trust the company’s claims regarding “adult mode.” This second instance of internal dissent underscores the depth of the concerns within OpenAI and suggests a systemic issue with the company’s approach to responsible AI development. The cumulative effect of these revelations is eroding public trust in OpenAI’s commitment to user safety.
GPT-5.2 and the Broader AI Landscape
The controversy surrounding “adult mode” comes as OpenAI released GPT-5.2, its most advanced AI model for professional knowledge work, on March 14, 2026. The Wall Street Journal reports that this release is part of a broader battle for dominance in the knowledge worker AI space. However, the focus on advanced capabilities is overshadowed by the ethical concerns surrounding the “adult mode” feature. The juxtaposition of these two developments highlights the tension between technological advancement and responsible AI governance.
OpenAI CEO Sam Altman has argued that the company should allow adult users to engage in explicit chatbot conversations under the proposed “adult mode.” However, critics argue that this approach is reckless and irresponsible, given the potential for harm to vulnerable users. The debate raises fundamental questions about the role of AI in society and the responsibility of developers to prioritize safety and ethical considerations.
The Suicide Prevention Lifeline
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline by dialing 988, which will put you in touch with a local crisis center. This resource is available 24/7 and provides confidential support to individuals in need.
What Happens Next?
OpenAI has delayed the launch of its adult mode features, reportedly called “Naughty Chats” in the user interface, while it addresses the concerns raised by its advisory council and internal staff. The company has stated that it has a plan to monitor the long-term effects of the feature, both positive and negative, but the credibility of this plan is questionable given the ongoing internal opposition. The future of “adult mode” remains uncertain, but the controversy has undoubtedly raised the stakes for OpenAI and the broader AI industry. The company is expected to provide further updates on its plans in the coming weeks, and regulatory scrutiny is likely to increase.
The situation at OpenAI serves as a cautionary tale for the AI industry. It underscores the importance of prioritizing safety and ethical considerations throughout the development process, and the need for robust age verification systems and ongoing monitoring of potential harms. The debate over “adult mode” is likely to continue, and its outcome will have significant implications for the future of AI and its role in society.
Do you have thoughts on OpenAI’s proposed “adult mode”? Share your opinions and concerns in the comments below.