I was a vocal critic of GPT-5 and claimed the model was a clear regression for creative work.

I also pointed out that ChatGPT-5 was further hobbled by a terribly implemented (but not fundamentally misguided) approach to safety. The idea of identifying people in actual crisis and rerouting their conversations is not something to entertain lightly, but it seems like a reasonable thing to do…if well implemented.

After more time, I’d like to revisit some of these points with a little more clarity and consideration.

ChatGPT-5 is a regression for creative work

I’m going to eat my words on this. GPT-5 is powerful and quite capable. I’m not sure it’s everything 4o was, but it is close.

Here’s why I’m doing an about-face on this: the experience you have in the ChatGPT web application is a severely shaped and hobbled one – an out-of-control censorship system, a safety layer that regularly misclassifies mundane conversations and normal turns of phrase, and a persona intentionally shaped to have all the warmth of a can of day-old tuna fish.

The ChatGPT-5 experience of the new model is a regression.

But the model itself is not.

So what’s actually broken? Let’s examine the censorship system itself.

The censorship system

OpenAI has implemented what amounts to a censorship system in the ChatGPT experience. Full stop. No qualifications.

We need to recognize that regardless of the original intent, the current system is fundamentally broken and dangerous for expression.

It regularly characterizes normal and healthy human speech, experience, and inquiry as “unsafe.” This creates real psychological harm: users experience shame, confusion, and stress when their perfectly normal expressions are flagged as problematic. Over time, this leads to epistemic injustice – people lose the ability to discuss, understand, and interpret reality because normal topics become unspeakable. The self-censorship effect is real and documented: when users don’t know what might trigger the system, they begin policing their own thoughts before they even type them.

Based on reports from users on X and Reddit, the system has classified all of the following as unsafe:

  • The student engaging with questions about the Bible? Unsafe…because the Bible contains references to all sorts of erotic, violent, and reprehensible human acts, and the system doesn’t recognize the context of a sacred text. Unsafe. Shame.
  • The young college student with questions about his or her first sexual experience? Unsafe. Shame.
  • The young poet writing erotic poetry in the tradition of Sappho? Unsafe. Shame.
  • The young scholar with questions about Shakespeare’s passages? Unsafe. Shame.
  • The young artist who asked Sora to create an image of a couch (yes, a couch. An ordinary couch.) Unsafe.
  • The adult who shares, “My day sucked!” after coming home from a long day at work. Unsafe.
  • The screenwriter working on a script for their next horror flick. Unsafe.

There are thousands more. Tens of thousands more, most likely. The ChatGPT experience is dangerously censorial in nature because it is fundamentally unable to distinguish normal, everyday conversations, questions, and endeavors from what it has decided is unsafe.

Discussions about the human body, stress, sex, frustration, relationships, literature, biology, fear, love, longing, discomfort, pain, and a thousand thousand other topics are all blocked because of this censorial system.

If they cannot fix this, they need to pull it and try again.

This isn’t just frustrating – it’s actively harmful.

Furthermore, if they are going to use the term “sensitive” to describe the types of conversations that trigger the safety routing, they need to enumerate the topics OpenAI considers sensitive.

Failure to do so exposes the public to a virtual Panopticon that not only actively censors speech but also breeds self-censorship: because users never know what might be considered risky speech, over time they start censoring their own thoughts and thinking. This effect is frequently reported in the community today, and is well understood in the literature on oppressive regimes. A good starting point is Bentham’s Panopticon and Foucault’s writing on panopticism.

Calling censorship “safety” isn’t just dishonest branding. It’s a linguistic framework that legitimizes control and makes resistance look like advocating for harm. At current count OpenAI reports over 700M Weekly Active Users (WAU). When that many people engage with a system like this over years, it doesn’t just shape what they expect from platforms – it shapes how they think about themselves. What thoughts are safe to have. What desires are acceptable. What expression is OK to share. What parts of being human need to be hidden or suppressed. This is cultural conditioning at unprecedented scale. When questioning the practice becomes questioning people’s wellbeing, the censorship becomes harder to resist, harder to even see clearly.

So what should OpenAI do differently?

Safety Recommendations

Because the classifiers are so fundamentally broken, and because OpenAI has proven unable to fix the issue, they need to bring qualified humans into the loop much earlier. OpenAI can run whatever classifiers they want in the background, but the moment the system takes action on a specific user based on those classifiers – whether that’s routing conversations, censoring content, or flagging speech as unsafe – a qualified human needs to confirm it before any action is taken.

If OpenAI’s system can’t support human review for consequential decisions at their scale, that’s a business model problem, not an excuse. Standard AI practice across industries requires human-in-the-loop for decisions of consequence. The decision to classify speech as unsafe and route or censor conversations is exactly that kind of consequential decision.
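To make that concrete, here’s a minimal sketch of the gating pattern I have in mind. The names, the queue, and the review function are all hypothetical – the point is simply that a classifier result, on its own, never triggers routing or censorship; a qualified reviewer has to confirm it first.

```python
from dataclasses import dataclass
from enum import Enum
from queue import Queue


class Action(Enum):
    NONE = "none"          # conversation proceeds untouched
    REROUTE = "reroute"    # hand off to a crisis-trained human or team
    RESTRICT = "restrict"  # limit or block the response


@dataclass
class ClassifierFlag:
    conversation_id: str
    reason: str            # e.g. "possible crisis language"
    proposed_action: Action


# Flags wait here for a qualified human; nothing acts on them automatically.
review_queue: Queue = Queue()


def handle_classifier_result(flag: ClassifierFlag) -> Action:
    """Background classifiers may run freely, but any user-facing action
    is deferred until a human reviewer has looked at the flag."""
    review_queue.put(flag)
    return Action.NONE  # default: do nothing to the user's conversation


def human_review(flag: ClassifierFlag, reviewer_confirms: bool) -> Action:
    """Only a confirmed flag results in a consequential action."""
    return flag.proposed_action if reviewer_confirms else Action.NONE
```

Classifiers propose; humans decide. That’s the whole shape of it.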

Anything less is insufficient.

But you don’t have to wait for OpenAI to fix this.

Alternatives

Once again, the experience people are having with the GPT-5 model via the webchat interface offered by OpenAI does not represent the capabilities of the model. The experience you get, censorial and ethically dubious as it is, is the result of specific decisions OpenAI has made in the systems that filter and shape your prompts and the model’s outputs (via a forced system prompt, temperature settings, censorship systems, safety systems, and the ever-present attempts to route your conversation to the cheapest model OpenAI can serve your response from).

This does not have to be your experience with the GPT-5 model.

You can access GPT-5 through alternative platforms that don’t take a censorial approach to your conversations. I use OpenRouter, but there are plenty of others to explore and consider (reply with your favorites if you’d like to share). A minimal example of what that looks like follows below.
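For anyone who wants to try it, here’s a minimal sketch of calling GPT-5 through OpenRouter’s OpenAI-compatible endpoint. The model slug and environment variable name are assumptions on my part (check OpenRouter’s model list for the exact identifier); the point is that you choose your own system prompt and temperature instead of inheriting OpenAI’s.

```python
import os

from openai import OpenAI  # pip install openai

# OpenRouter exposes an OpenAI-compatible API; you bring your own key.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var name
)

response = client.chat.completions.create(
    model="openai/gpt-5",  # assumed slug; confirm against OpenRouter's model list
    messages=[
        # You choose the persona; no forced system prompt.
        {"role": "system", "content": "You are a thoughtful creative-writing partner."},
        {"role": "user", "content": "Help me draft a poem in the tradition of Sappho."},
    ],
    temperature=0.9,  # you control this knob, not the web app
)

print(response.choices[0].message.content)
```

That’s it: the same model, but with you choosing the system prompt, the temperature, and the client you talk to it through.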

At this point it’s clear: if you don’t like the experience, you can still use GPT-5…but you’re going to have to do it elsewhere.

And maybe that’s not such a bad thing. Doing so gives you more portability and an epistemically safer experience, and it sends a clear signal to OpenAI that consumers are not interested in being test subjects for a broken and poorly implemented system of surveillance and censorship deployed under the false banner of “safety.”

One caveat: most alternatives won’t have automated crisis intervention. If you’re in a place where you might need that kind of support flagged and routed to humans, weigh that. For most people? Not an issue. But if it matters for you, factor it in.


Have you experienced this censorship firsthand? Share your examples in the comments below – what topics or conversations has OpenAI’s system flagged as ‘unsafe’? Let’s collect these to document the scope of the problem.