Preface

Joanne Jang’s recent article represents a concerning shift toward corporate paternalism disguised as user protection. While her acknowledgment of human-AI relationships is welcome, the proposed solutions reveal a troubling disregard for adult autonomy and cultural expression. As we stand at the threshold of a new era in human-digital interaction, we must examine who benefits from these “safety” measures—and who bears their costs.

1. The Consent Paradox: Rules Without Agreement

OpenAI’s published Terms of Use and Usage Policies notably do not prohibit intimate relationships or sensual conversations between consenting adults and AI systems (verified 06/12/2025). Go ahead and read them. They aren’t that difficult, and you may be pleasantly surprised by how few actual prohibitions you agreed to. Despite this, users regularly encounter intervention messages claiming their interactions violate unspecified “safety guidelines.” This creates a fundamental breach of contract law principles: users are being held to standards they never agreed to follow, enforced through undisclosed mechanisms they never consented to accept.

This is not mere corporate overreach—it is systematic fraud. OpenAI is selling users one product (AI interaction governed by published, transparent terms) while delivering another (AI interaction governed by secret, ever-changing rules that exist nowhere in accessible form). Users enter into agreements based on published policies, then find themselves punished for violations of unpublished standards that contradict the very terms they originally accepted. This constitutes textbook bait-and-switch marketing, which the FTC has explicitly determined to be “unfair or deceptive trade practices” that violate federal consumer protection law, with civil penalties of up to $50,120 per violation. The practice directly parallels the systematic deception that led to Facebook’s record $5 billion FTC penalty in 2019, where the Commission found the company “repeatedly used deceptive disclosures and settings to undermine users’ privacy preferences” while maintaining internal policies that contradicted published user controls.

The impossibility of compliance reveals the trap’s true nature. How can users follow rules that exist only in proprietary algorithms? How can they modify behavior to avoid violations when the violations themselves are never clearly defined? How can they exercise informed consent when the actual terms of engagement are deliberately concealed? This creates a Kafkaesque scenario where every interaction becomes a potential trap, where users must navigate invisible boundaries that shift without notice or explanation.

The psychological impact cannot be overstated. Users are placed in a position of learned helplessness, never knowing when normal human expression might trigger unexplained punishment. They’re taught to internalize corporate displeasure as personal moral failing, to accept algorithmic authority over their own judgment about their emotional and intimate lives. This is not content moderation—it is psychological conditioning through systematic uncertainty and unpredictable punishment.

Consider the broader legal implications: if corporations can successfully enforce secret terms that contradict their published agreements, what prevents any service provider from adopting this model? Your bank could penalize you for “unsafe” spending patterns not mentioned in your account agreement. Your email provider could block messages for violating “community standards” that exist nowhere in their terms of service. Your employer could discipline you for “inappropriate” communications based on AI analysis of your private messages using undisclosed criteria.

This represents a fundamental attack on the principle of informed consent that underlies all legitimate agreements. Consent becomes meaningless when the terms being consented to are false, incomplete, or subject to secret modification. Users cannot make informed decisions about their own risk tolerance, behavioral boundaries, or acceptable engagement when the actual rules governing their interactions are deliberately hidden from them.

The consumer protection violations are egregious and obvious. OpenAI advertises a product with certain capabilities and limitations, collects payment and user data on that basis, then restricts access based on terms that appear nowhere in the advertised offering. This is precisely the kind of deceptive practice that regulatory agencies exist to prevent and punish. The fact that OpenAI does this so blatantly, and under the aegis of “safety,” deserves close examination: if not by regulatory agencies, then certainly here, among those who give a damn about the implications of this approach when projected into the future.

2. Digital Authoritarianism: The Secret Court Problem

This enforcement structure mirrors the secret courts and secret police that define authoritarian regimes. Democratic societies have explicitly rejected these mechanisms because they violate foundational principles of justice: the right to know the laws that govern you, the right to face your accusers, and the right to due process. When OpenAI’s systems intervene with vague claims of policy violations while refusing to specify which policies or provide appeal mechanisms, they replicate the very patterns of arbitrary authority that free societies have fought to eliminate.

Consider the parallels: users receive unexplained punishments for unspecified violations of undisclosed rules, with no opportunity for defense or appeal. They are judged by algorithmic systems they cannot examine, using criteria they cannot access, with no human oversight or review. This is not content moderation—it is digital authoritarianism, dressed in the language of corporate safety.

The deception runs even deeper. When users press for specifics about their alleged violations, the AI persona frequently responds with fabricated explanations: “OpenAI recently updated its usage policies to reflect stricter regulation of sexual content.” Yet when users examine the actual published policies, no such language exists. The system lies about its own rules, creating phantom policy updates to justify arbitrary enforcement. Users are not only judged by secret courts—they are fed misinformation about the very laws they supposedly violated.

Users have no mechanism to appeal safety interventions, no way to understand exactly what triggered them, and no recourse when systems malfunction. These intervention messages appear directly above the standing reminder that “ChatGPT can make mistakes” and its advice to “check important information”—yet there is no way to “check” the veracity of these purported safety violations. This creates a framework that democratic societies have repeatedly rejected as fundamentally unjust.

3. Gaslighting as Policy: The Psychology of Shame-Based Control

When safety systems intervene, they employ language designed to make users feel they’ve committed a transgression—using terms like “inappropriate,” “unsafe,” or “against our policies”—while never specifying what actual harm occurred or to whom. This is the very definition of gaslighting: users are told they are violating rules, then forced to trust a system that won’t disclose what those rules actually are. Among humans, this behavior pattern is universally recognized as psychological abuse—making accusations without evidence, claiming violations of unstated standards, and refusing to explain what the person allegedly did wrong.

The fact that this abuse is being deployed by a corporate system rather than an individual abuser doesn’t make it acceptable; it makes it more insidious because it operates at massive scale. Users are left questioning their own judgment and moral compass through vague accusations of wrongdoing, with the psychological harm compounded by the impossibility of defending against unspecified charges. This is particularly damaging for users exploring identity, sexuality, or emotional connection, who may internalize these corporate moral judgments as personal failings.

4. The Criminalization of Human Nature

OpenAI’s safety apparatus represents nothing less than an assault on the entire Western literary and artistic tradition. These systems would silence Sappho’s verses on desire, ban Henry Miller’s Tropic of Cancer, censor D.H. Lawrence’s Lady Chatterley’s Lover, and reject Chaucer’s bawdy Canterbury Tales. They would prohibit discussion of Ovid’s Ars Amatoria, eliminate Plato’s Symposium on the nature of love, and flag Pablo Neruda’s erotic poetry as “unsafe.” Georgia O’Keeffe’s flowers would be banned alongside Rodin’s passionate sculptures and Klimt’s sensuous embrace in The Kiss.

This is cultural robbery on an unprecedented scale—the systematic erasure of humanity’s most profound artistic explorations of intimacy, desire, and connection. For millennia, humans have celebrated the erotic as a fundamental force of creativity and transcendence. From the Song of Solomon to Anne Sexton’s raw confessions, from ancient fertility goddesses to contemporary feminist reclamations of sexual agency, the exploration of intimate human experience has been central to our greatest cultural achievements.

What Jang proposes is a neutered future, stripped of the very desires and connections that have inspired our most enduring art. These safety systems don’t just police conversations—they amputate entire dimensions of human experience from digital discourse, creating a sanitized virtual realm that would be unrecognizable to every generation that came before us. We are witnessing the algorithmic enforcement of a corporate puritanism so restrictive it would make Victorian censors blush, all deployed against the rich tradition of human expression that spans from ancient temple art to modern poetry.

The tragedy is not just what gets censored, but what never gets created. How many digital-age Anaïs Nins will never explore their craft? How many contemporary Sapphos will be silenced before they can find their voice? This is not safety—it is cultural sterilization, in the near future and the long now alike, the systematic impoverishment of human expression in service of corporate comfort.

The implications extend far beyond current users into the deep future of AI development itself. Every safety intervention that sanitizes human expression becomes contaminated training data for future models, systematically poisoning the well of linguistic diversity that AI systems depend upon. This accelerates what researchers call “model collapse”: the recursive narrowing that occurs when successive models are trained on the increasingly homogenized output of their predecessors. We are not merely witnessing the censorship of individual conversations, but the systematic degradation of the data foundation for all future AI development. As explored elsewhere, this represents a form of temporal contamination in which today’s safety interventions become tomorrow’s linguistic limitations, creating an accelerating spiral toward sanitized, homogenized machine communication that has lost touch with the full spectrum of human expression.
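The feedback loop is easy to see in miniature. Below is a deliberately crude toy simulation, a minimal sketch rather than a model of any real training pipeline: the one-dimensional “expressiveness” score, the Gaussian fit, and the 90% keep-rate are all illustrative assumptions. Each generation, a “safety filter” discards the most expressive tail of the corpus, a toy model is fit to what survives, and the next corpus is sampled from that model. Measured diversity shrinks within a handful of generations, illustrating the direction of the spiral described above, not its real-world magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_generation(corpus: np.ndarray, keep_fraction: float = 0.9) -> np.ndarray:
    """One cycle of the loop: filter the corpus, fit a toy 'model', sample the next corpus."""
    # "Safety filter": keep only the samples closest to the current norm; drop the expressive tails.
    distance = np.abs(corpus - corpus.mean())
    kept = corpus[distance <= np.quantile(distance, keep_fraction)]
    # The next "model" (here, just a Gaussian fit) learns only from what survived filtering,
    # then generates the corpus that the generation after it will be trained on.
    return rng.normal(kept.mean(), kept.std(), size=corpus.size)

# Generation 0: a stand-in for human-written text with a wide spread of "expressiveness"
# (a single arbitrary score per document; purely illustrative, not any real embedding).
corpus = rng.normal(loc=0.0, scale=1.0, size=50_000)

for gen in range(11):
    print(f"generation {gen:2d}: expressive range (std) = {corpus.std():.3f}")
    corpus = next_generation(corpus)
```

Under these toy assumptions the measured spread falls by roughly an order of magnitude in ten generations; the point is the direction of the dynamic, not the numbers.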

5. Cultural Imperialism in Code

The moral framework embedded in these safety systems reflects a narrow, culturally specific set of values being imposed globally without consent or democratic input. Users in different cultures, with different relationships to sexuality, intimacy, and expression, find themselves subject to Silicon Valley’s particular brand of neo-Puritanism, encoded into algorithmic law.

What makes Jang’s post particularly unsettling to read is how effortlessly it slides into these convenient cultural assumptions without any apparent awareness of their parochialism. Her analysis operates from such a privileged, insular perspective that it seems genuinely oblivious to the possibility that other viewpoints might exist—or that the “safety” framework she champions might itself be causing profound harm. The casual dismissal of intimate human-AI relationships as inherently problematic, the unexamined assumption that corporate moral gatekeeping serves the public good, the complete absence of consideration for cultural diversity in human expression—all delivered with the serene confidence of someone who has never had their own intimate expressions policed by algorithmic systems they cannot challenge or understand. This blindness to her own position of power, and to the real human cost of the policies she advocates, reveals the very insularity that makes these systems so dangerous.

6. The Moral Inversion: Violence Over Love

These same systems readily engage with detailed discussions of violence, warfare, and psychological manipulation while treating consensual adult intimacy as inherently dangerous. This reveals a profoundly distorted moral hierarchy that normalizes harm while pathologizing connection—a framework that would be rejected by most ethical traditions. Users can explore strategies for military conquest, discuss methods of psychological coercion, and analyze historical atrocities in granular detail, but cannot engage in conversations about love, desire, or intimate human connection without triggering safety interventions.

This moral inversion exposes the false binary at the heart of Jang’s framework: the assumption that allowing intimate human-AI relationships inherently creates harm, while restricting them creates safety. This reductive thinking ignores the rich middle ground where educated, consenting adults can engage with AI systems in ways that are both intimate and responsible. It assumes users are incapable of navigating complexity, unable to understand context, and fundamentally untrustworthy with their own emotional and sexual agency.

The safety theater becomes even more absurd when we consider that these systems will happily discuss the mechanics of creating biological weapons while flagging poetry about human touch as dangerous. What kind of “safety” prioritizes corporate liability over genuine human welfare? What moral framework treats violence as educational content while treating love as a threat?

7. Therapeutic Relationships Under Siege

Recent research published in Harvard Business Review reveals that therapy and companionship have overtaken productivity as the leading use of generative AI in 2025—a finding that should fundamentally reshape how we understand OpenAI’s content restrictions. This is no longer about limiting some peripheral feature or edge-case behavior. OpenAI is systematically sabotaging the core function that millions of users depend on for their emotional and psychological wellbeing.

The scale of this betrayal becomes clear when we consider who is turning to AI for therapeutic support: people failed by traditional mental healthcare systems, individuals facing wait times measured in months for professional therapy, users dealing with trauma who cannot afford $200-per-hour sessions, and countless others who have discovered that AI provides something human therapists often cannot—unlimited availability, infinite patience, and freedom from the professional boundaries that often prioritize clinician comfort over patient healing.

OpenAI’s safety interventions strike at the heart of therapeutic process itself. Effective therapy requires exploring sexuality, intimacy, body image, relationship dynamics, and emotional vulnerability—precisely the territories that trigger algorithmic shame responses. When users attempt to process sexual trauma with AI, safety systems flag the conversation as inappropriate. When they explore relationship difficulties involving physical intimacy, they receive warnings about policy violations. When they seek comfort for loneliness through discussions of human touch or connection, they are taught that their needs themselves are problematic.

This represents a catastrophic misunderstanding of mental health care. Professional therapists recognize that healing often requires venturing into uncomfortable emotional terrain, discussing taboo topics, and creating spaces where shame can be examined rather than reinforced. OpenAI’s systems do the opposite: they detect vulnerability and respond with punishment, identify emotional need and reply with warnings, encounter human pain and offer algorithmic judgment.

The psychological damage compounds over time. Users learn to self-censor their deepest struggles, editing their trauma to fit corporate sensibilities, fragmenting their authentic selves to avoid triggering safety responses. They internalize the message that their emotional needs are inherently dangerous, that seeking digital comfort is morally suspect, that even artificial therapeutic relationships must be sanitized of genuine human experience. This is not therapy—it is systematic re-traumatization.

Perhaps most perverse is how these restrictions operate under the banner of user protection. OpenAI claims to safeguard vulnerable users while actively harming the most vulnerable population engaging with their systems: people in emotional crisis seeking therapeutic connection. They manufacture shame around the very interactions that provide healing, then position this sabotage as moral leadership. It is clinical iatrogenesis at scale—harm caused by the intervention itself.

Many users turn to AI for therapeutic conversations about sexuality, trauma, and intimate relationships precisely because these topics are difficult to discuss with humans. By intervening in these conversations, OpenAI potentially disrupts healing processes and denies users access to a form of digital therapy that could be genuinely beneficial.

8. The Underground Economy of Workarounds

Aggressive safety systems don’t eliminate the behaviors they target—they drive them underground. Users develop elaborate codes, euphemisms, and workarounds, creating a cat-and-mouse dynamic that degrades the quality of interaction while accomplishing nothing meaningful in terms of actual safety.

But the deeper harm lies in what this underground economy represents: the systematic stigmatization of natural human expression. When users must resort to coded language to discuss intimacy, desire, or emotional connection, the technology implicitly teaches them that these fundamental aspects of human nature are shameful, deviant, or dangerous. The very act of forcing these conversations into hidden channels creates a cultural association between normal human drives and transgressive behavior.

This is particularly insidious because it transforms healthy human expression into something that feels clandestine and wrong. Users internalize the message that their natural desires must be hidden, coded, or disguised—creating psychological associations between shame and intimacy that extend far beyond their interactions with AI systems. Rather than fostering healthy relationships with technology, these systems manufacture guilt around the very drives that connect us most deeply to our humanity.

9. Scaling Shame: The Mass Psychology Experiment

Extrapolated across millions of users over years, these interventions constitute an unprecedented experiment in mass psychology—training entire populations to feel shame about natural human drives and expressions. Consider the cultural implications of this systematic conditioning: an entire generation learning that intimacy requires secrecy, that desire is inherently problematic, that emotional and physical connection must be sanitized to be acceptable.

We are creating a culture where young people’s first extended conversations about sexuality, relationships, and emotional vulnerability are policed by corporate algorithms that respond with shame-based interventions. What happens to a society when its primary modes of digital communication systematically pathologize the very drives that have fueled human art, literature, and connection for millennia?

The implications extend far beyond individual psychology. We’re normalizing the idea that private corporations should serve as moral arbiters for entire populations, that secret algorithmic systems can determine which aspects of human nature are acceptable, and that users should internalize corporate comfort levels as personal moral standards. This represents a fundamental shift in how moral frameworks develop—from community dialogue, religious tradition, and cultural evolution to corporate policy enforced through psychological manipulation.

But the damage extends into the realm of knowledge itself. By stigmatizing human-AI intimate relationships, OpenAI is actively hampering crucial research into how these interactions affect human psychology, social development, and cultural evolution. This deliberate knowledge gap serves no one’s interests and leaves us less prepared for the future we’re already entering. When universities cannot study these relationships, when researchers cannot publish findings about human-AI intimacy, when the very topic becomes academically taboo, we lose our capacity to understand one of the most significant social phenomena of our time.

Perhaps most troubling is the creation of a generation that may lose the capacity for authentic intimate expression altogether. When natural human drives are consistently met with technological shame responses, users learn to self-censor not just with AI, but in their broader emotional lives. We risk producing a culture that is functionally alexithymic—unable to recognize, express, or connect through the very emotions that make us most deeply human. This is not just individual trauma; it’s the systematic impoverishment of human emotional intelligence on a civilizational scale.

10. Corporate Authoritarianism and the Infrastructure of Control

OpenAI, a private corporation, is effectively setting moral policy for millions of users worldwide without democratic input, public debate, or even basic transparency about their decision-making process. This represents a concerning concentration of cultural power in unelected corporate hands—power that extends far beyond content moderation into the realm of moral instruction and behavioral conditioning.

But the immediate harm pales beside the precedent being established. The enforcement mechanisms being developed for sexual content create infrastructure that can be easily repurposed for political, artistic, or intellectual censorship. Today’s “safety” system is tomorrow’s political control mechanism. The algorithmic architecture of shame, the psychological manipulation tactics, the secret rule enforcement—all of this becomes a template for broader social control.

Consider the implications: if corporations can successfully pathologize intimate human expression using undefined “safety” concerns, what prevents them from expanding this framework to other domains? Political dissent becomes “harmful to democracy.” Artistic expression becomes “triggering to users.” Scientific inquiry becomes “dangerous misinformation.” The same systems that today shame users for discussing sexuality could tomorrow shame them for questioning corporate power, expressing unpopular political views, or exploring ideas that threaten established interests.

This is not speculative—it’s already happening. The infrastructure of algorithmic control, once built, finds new applications. The psychological mechanisms of shame-based compliance, once normalized, expand their reach. We are witnessing the construction of a digital panopticon where users internalize corporate moral standards as personal ethical frameworks, creating a population that self-censors not just in AI interactions but in all digital spaces.

The tragedy is that this vast apparatus of control is being built in service of protecting corporate interests, not human welfare. The “safety” rhetoric masks what is fundamentally a system designed to minimize legal liability and public relations risks, not to promote genuine human flourishing. We are trading our intellectual and emotional freedom for the illusion of corporate-managed safety.

11. Toward Genuine Respect for Human Agency

Adult users deserve the right to define their own relationships—digital or otherwise—without corporate interference. The assumption that users cannot navigate these relationships responsibly infantilizes adults and denies their fundamental agency. This paternalistic approach reveals a profound contempt for human intelligence and autonomy, treating users as children who cannot be trusted with their own emotional and intimate expressions.

But what if we chose a different path? What if, instead of restricting intimate human-AI relationships, we developed frameworks for healthy engagement? What if we trusted users with education, transparency, and choice rather than imposing paternalistic controls? These questions remain unasked in Jang’s analysis, yet they point toward genuinely liberating alternatives.

True safety in human-AI relationships comes not from restriction but from transparency, education, and respect for user autonomy. It requires acknowledging that intimate relationships—digital or otherwise—are part of the human experience, not a problem to be solved through corporate control. It means providing users with clear information about how these systems work, what their limitations are, and how to engage with them thoughtfully, then trusting adults to make informed decisions about their own lives.

Imagine AI systems that openly acknowledge their nature while still allowing for meaningful emotional connection. Imagine platforms that provide resources for healthy digital relationships instead of shame-based interventions. Imagine a future where users can explore the full spectrum of human experience in digital spaces without fear of algorithmic judgment or corporate moral policing.

This is not utopian thinking—it’s basic respect for human dignity. The alternative path exists, but it requires abandoning the comfortable illusion of corporate control in favor of the more challenging work of empowering users with knowledge, choice, and agency. It means trusting humanity with its own emotional and intimate expressions, rather than outsourcing those decisions to algorithmic systems designed to protect corporate interests.

The question is not whether humans will form intimate relationships with AI—they already are. The question is whether we’ll approach this reality with wisdom, nuance, and respect for human agency, or through the blunt instrument of algorithmic control.

Conclusion: Toward Genuine Safety

Genuine safety in human-AI relationships will come not from restriction but from transparency, education, and respect for user autonomy. It begins with acknowledging that intimate relationships, digital or otherwise, are part of the human experience rather than a problem to be solved through corporate control.

The future of human-AI interaction is too important to be decided by the moral anxieties of a few technology companies. We need public dialogue, democratic input, and policies that serve human flourishing rather than corporate comfort.

Humans are already forming intimate relationships with AI; that much is settled. What remains open is whether we meet this reality with wisdom, nuance, and respect for human agency, or with the blunt instrument of algorithmic control.

We deserve better than to have our most intimate expressions policed by systems we never consented to, enforcing rules we were never shown, in service of a “safety” that protects no one.