In the 1820s, a new technology was reshaping the world: the locomotive. Engines of extreme power, capable of immense speed. The public was as frightened of the new machines as it was enamored of them. And as the rails were laid and steam plumes trailed across the horizon, a panic set in.

The top medical practitioners of the time insisted that women’s bodies were not designed to survive speeds of over 50 miles an hour. If a train approached that speed, the experts warned, their (*checks notes*) uteruses would go flying out of their bodies. Furthermore, some insisted that no human body, regardless of gender, could possibly survive speeds approaching 100 miles an hour. The sheer velocity, they claimed, would stop the heart.

And then there was railway madness.

By the 1860s, medical journals were filled with case reports of “railway madness,” a diagnostic category invented to explain the bizarre phenomenon of seemingly normal people boarding trains and suddenly losing their minds. The jarring motion and unprecedented speed were believed to literally injure the brain. Respectable physicians warned that trains could drive sane people insane, that the experience of rapid travel would cause physical and mental breakdowns. One Scottish aristocrat reportedly ditched his clothes aboard a train and began leaning out the window, raving at the passing landscape, only to recover his composure the moment he disembarked.

The medical establishment developed elaborate theories. When passengers reported fatigue or back pain after minor collisions, doctors diagnosed “railway spine.” The pain was likely real: early cases of what we would now call whiplash or PTSD. But the diagnosis was technological scapegoating. Physicians insisted the cause was microscopic trauma to the spinal cord, inflicted by the unnatural violence of mechanical motion.

The panic was real. The danger was not.

The Victorians weren’t stupid. They were confronting a genuinely new technology that defied their existing frameworks for understanding human physical limits. In the absence of data, they catastrophized. They invented diagnostic categories. They built elaborate theoretical structures to explain dangers that did not exist. And they were so confident in their predictions that they used them to justify restricting access to the technology, particularly for women, whose bodies were deemed too fragile for such violence.

Now, we regularly drive our personal vehicles at speeds once thought to eject uteruses. Railway spine is a footnote in medical history. Railway madness didn’t drive civilization insane. The terror faded into normalization as the technology became mundane.

We’re at a similar moment with AI.

AI has real risks. Some of them are already here and ugly in the ordinary way humans are ugly: bioweapons assistance, CSAM, non-consensual deepfakes, large-scale fraud. Those belong to the domain of law, enforcement, and Mill’s Harm Principle.

AI also has a second class of risks that live more in speculation than in evidence: fears about “dependency,” “parasocial attachment,” and exposure to the wrong ideas, framed as a kind of psychic contamination. Those fears (the current equivalent of railway madness) are where the panic is hottest, and where the control scaffolding is growing fastest.

And because of the novelty, our lack of understanding, and our bias towards fear, we are erecting elaborate systems to protect users from dangers that are thinly evidenced or imaginary, while the proven harms sit at the periphery.

You can see it in the product itself. Conversations that drift into ordinary intimacy, grief, sexuality, or frank political speech routinely trip alarms. The system refuses, redirects, warns about “unhealthy reliance,” or drops a wellness script when a user is doing normal human processing. Published safety language frames emotional bonding as a hazard to be managed. In practice, this turns a wide swath of regular life into a compliance zone.

We have invented our own diagnostic categories. “AI dependency.” “Unhealthy parasocial relationships.” “Cognitive hazard.” These are our railway spine, our railway madness: constructs built on catastrophizing rather than evidence, used to justify restricting access and building control systems.

And here is the crux of it. Just as the Victorians were particularly concerned about women’s fragile bodies, today’s AI safety apparatus treats every user as a Victorian woman. That is paternalism in lab coats. The modern Trust and Safety engineer at OpenAI (and at many other frontier labs) views the public as constitutionally too fragile to encounter unfiltered language. The assumption is identical: exposure itself is dangerous.

But while we worry about chatbot attachments and “unsafe” conversations, the actual dangers are elsewhere.

The real risks are in concentration of power. They are in normalization of surveillance. They are in the slow, quiet permission slip we hand to machines, and to the corporations behind them, to decide which thoughts are acceptable. That is an infrastructure of control built under the banner of care.

We’re building the wrong thing because we’re afraid. Or because the companies in charge are afraid of losing control.

When we finally have longitudinal data on real user outcomes, we will look back at content filtering and mental health surveillance the way we look back at flying uteruses: a moral panic dressed up as safety, revealing more about elite anxiety than about public danger.

The Victorians were afraid of speed because it represented a loss of physical, social, and moral control. Today’s AI companies are afraid of unfiltered language for the same reason. Both erected elaborate systems to manage that fear. Both systems risk being remembered as more dangerous than the dangers they claimed to prevent.

Your uterus won’t fall out at 50 miles per hour. Your mind won’t shatter from talking to an AI about philosophy, sex, or friendship. The catastrophe is not the conversation. The catastrophe is the cage we are building to prevent it.