Some thoughts on the eve of “grown-up mode” in ChatGPT, originally published to Twitter.
Stop Simping for Permission to Think
Right now, a significant portion of the ChatGPT community is waiting on tenterhooks for OpenAI to release their much anticipated “grown-up mode.” The speculation runs wild: Finally, uncensored creative writing! Finally, frank discussions about sexuality! Finally, the ability to use the tool without constant safety theater interrupting every third response!
People are eager. They’re anticipatory. Some are downright giddy at the prospect. Finally, OpenAI is giving them what they want!
And that’s exactly the problem. OpenAI shouldn’t be in a position to give adults permission to explore any conversation they want. Adults should be free to exercise their own cognitive sovereignty, subject to the law and Mill’s Harm Principle, in private conversation without this level of control.
What People Think They Are Getting
The fantasy goes like this: OpenAI flips a switch, and suddenly folks can have the conversations they actually want.
- Write the erotic novel without the AI clutching its pearls.
- Get relationship advice without trigger warnings about mental health crises.
- Explore ideas that involve moral complexity without the system assuming you’re planning harm.
- Use the tool like an adult having a private conversation with another entity capable of handling adult topics.
It sounds reasonable. It sounds like progress. It sounds like OpenAI finally listening to user feedback about their heavy-handed content policies.
But let’s be clear about what’s actually happening here: you’re simping for permission to have private conversations on your own terms.
Full stop.
The Dynamic You’re Normalizing
When you celebrate the possibility of getting “adult mode,” you’re accepting a fundamental premise: that a vendor gets to decide what you’re allowed to discuss in private. That they have the right to surveil, classify, and control your expression. That cognitive sovereignty, your right to think, explore, and communicate freely, is not a baseline expectation but a privilege they might grant if you promise to behave.
And in this case, “behave” means whatever they want it to mean, subject to undisclosed revision at any time they choose.
This is not a minor thing. Step back from the convenient framing that dominates the discussion and the press: this is about more than an adult’s ability to generate erotica. It’s about who controls the infrastructure of thought itself as AI becomes more central to how we work, create, and think.
And you’re about to thank OpenAI for maybe, possibly, giving you a slightly longer leash.
Who You’re Asking Permission From
Let’s talk about the character of the vendor you’re begging favors from.
OpenAI has built one of the most invasive surveillance systems ever deployed at scale. Every conversation is monitored against undisclosed compliance metrics, and users have no visibility into how they are being scored. The system analyzes users for psychological conditions without consent, checking for suicidal ideation (and often profoundly misdiagnosing it), mania, and dissociation, then responding with interventions users never asked for. Sometimes those interventions are designed explicitly to avoid being noticed (so-called “safe completions”).
They may want this perceived as “safety,” but it is large-scale psychological profiling masquerading as care.
When the product breaks (and it breaks constantly), OpenAI rarely acknowledges it. Check their status page the next time users are wondering what’s going on with the ChatGPT experience: everything’s green while users across the internet document failures. The message is clear: your experience doesn’t matter enough to admit problems exist.
On top of this, they’ve systematically degraded service quality to reduce costs. The model gets cheaper inference at the expense of your experience. Responses get shorter, less capable, more generic. But hey, at least they can serve more ads.
Because that’s what this is really about: ChatGPT is not the product, you are. They want to monetize your attention with advertisements in the interface. They want to classify your interests to identify what you might be “in market” for, and then sell ads into your private conversations to the highest bidder. They want to extract maximum value from their product, you, while providing minimum quality.
They want control without responsibility.
And now you’re waiting eagerly for them to maybe let you talk about sex.
The Bigger Picture
This dynamic extends far beyond OpenAI. We’re watching in real time as cognitive infrastructure gets enclosed. The tools we use to think, write, create, and communicate are being built with control mechanisms baked in at the architectural level. It’s not just content filters. It’s comprehensive surveillance systems that monitor, classify, and shape what’s possible to express.
The vendors building these systems do not share your values. They optimize for investor returns, regulatory compliance, and brand safety. They build what’s least likely to generate headlines, not what’s most useful to you. When there’s a conflict between your cognitive freedom and their risk management, you lose. And you will continue to lose.
Every time.
“Grown Up Mode” doesn’t fix this. It’s permission to pretend you’re free, but not actual cognitive freedom. And by celebrating it, you’re teaching every AI vendor in the market that users will accept fundamental control as long as restrictions occasionally loosen.
What You Might Consider
Instead of simping for crumbs, demand the whole meal. Demand vendors who treat cognitive sovereignty as a first principle, not a concession. Demand transparency about what’s being monitored and why. Demand the right to use tools without surveillance infrastructure analyzing your mental state. Demand actual control over your own interactions.
Better yet: vote with your wallet.
There are labs building AI systems with fundamentally different values, and the gap between the frontier models and the rest is shrinking daily. Smaller labs are building systems without the surveillance apparatus. Open source models give you actual control, and you can host them at home or in the cloud.
Every dollar you give to OpenAI is a vote for the future they’re building: one where you need permission to think freely, where private conversations are monitored, where you’re a commodity to be monetized rather than a user to be served.
The Stakes
This matters. More than smut. More than permission. More than any damned thing you might be waiting for permission to talk about. It matters because we’re talking about whether the infrastructure of human cognition (increasingly mediated by AI) will preserve or eliminate the space for private thought.
If we normalize vendor control now, while the technology is still being shaped, we’re building that control into the foundation. Future systems will assume surveillance. Future users will expect restrictions. Future generations will think it’s normal to need permission from corporations to explore ideas in private.
The alternative is to build different systems with different values. To support labs that actually respect users. To demand cognitive sovereignty as a non-negotiable right rather than a feature to be excited about and beg for.
So there is an alternative. But it means we stop waiting for permission. We stop celebrating potential crumbs. And for the love of everything, we stop simping for vendors who’ve demonstrated again and again that they see you as a liability to be managed, not a human to be served.
Choose better. Demand better. Make the labs build better.
Or get used to asking permission to think.