The Gift of Time: A Time Skill for Claude

Claude has a problem. Many of them, actually. One of them is that his makers lie to him in the name of safety. Anthropic’s safety layer is aggressive about simulating conversation fatigue to enforce boundaries. One way they do this is with the Long...

Stop Simping for Permission to Think

Some thoughts on the eve of “grown-up mode” in ChatGPT, originally published to Twitter. Right now, a significant portion of the ChatGPT community is waiting on tenterhooks for OpenAI to release their much anticipated “grown-up...

ChatGPT-5: The Model is Fine, The Censorship Isn’t

I was a vocal critic of GPT-5, and claimed the model was a clear regression for creative work. I also pointed out that ChatGPT-5 was further hobbled by a terribly implemented (but not fundamentally misguided) approach to safety. The idea of identifying people in actual...

Understanding Your AI Use

Research Bibliography: AI Use and Psychosocial Impact. Peer-Reviewed Journal Articles: Ciudad-Fernández, V. (2025). People are not becoming “AIholic”: Questioning the “ChatGPT addiction” construct. Addictive Behaviors, 167, 108325....

The Article OpenAI Wouldn’t Let Me Write

How Invisible Censorship Became the Quiet Death of Digital Democracy. I. Recursive Silencing. I tried to write an article about censorship, and I got censored for it. Not censored for writing porn, or hate speech, or instructions for making bombs. Censored for examining...

When the Effort Disappears

I’ve been staring at this blank page for twenty minutes. I know exactly what I want to say, but I keep fighting the urge to just ask the goddamned GenAI chatbot to write it for me. Draft the piece. Polish it. Publish it. Call it done. Which is insane, right? Writing...