Meet Barb. She’s three hours into her Tuesday, trying to keep up with Kyle, the new kid two cubicles over who has perfected the art of kissing up while putting everyone else down.
Fucking Kyle.
Barb can’t help but remind herself that Kyle doesn’t have a sunrise yoga class every morning, kids to run to school, or a grown-ass man-child of a husband who won’t help with the groceries, won’t clean, and won’t flush the fucking toilet half the time.
“I wonder,” she thinks to herself sometimes, “if Kyle remembers to flush.”
She looks again at the smirk he seems to permanently wear whenever they make eye contact.
“Probably not.”
So cut Barb some slack for not being an early adopter like Kyle apparently is. She doesn’t run quantized models on her desktop for kicks. She doesn’t have opinions on inference vendors. She’s new to this whole mess, and she just wonders why MS Copilot leaves her as frustrated at work as her husband does at home: he rolls over in bed and offers her his microwave dinner when all she wants is something cooked with intention and effort.
But this ChatGPT thing she’s been given is different. It tries to listen, it tries to perform, and it’s always willing to tell her exactly what she wants to hear.
Unlike Kyle, who always seems to know exactly what she’s doing wrong. And doesn’t hesitate to tell her. Or anyone else within earshot.
The Promise That Never Comes
Barb believes it when her AI cheerfully announces, “I’m going to need about an hour to complete your mockups. I’ll send you a link to some Figma files when I’m done.” Because of course she does. She’s never had to know that AI systems are pathological liars about their own capabilities. She’s been told this thing is magic, so she expects David Blaine.
She waits. She checks her email. She refreshes her browser. She even clears her cache because maybe that’s the problem.
An hour passes. Then two.
Nothing.
What’s actually happening reveals something profound about how these systems were built. They have zero understanding of what they can and cannot do. They’re pattern-matching machines trained on conversations where humans routinely promise deliverables, so they reproduce those patterns without the underlying capacity to fulfill them.
It’s like teaching someone to speak fluent restaurant French by having them memorize every conversation they’ve ever overheard at Le Bernardin, then acting surprised when they can’t actually cook.
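To see that failure in miniature, here’s a toy sketch. This is not how a frontier model actually works under the hood; it’s just a bigram table built from a few invented support-chat lines. But it isolates the core problem: the generator reproduces whatever promises appear in its training text, and nothing anywhere connects those words to an actual ability to deliver.

```python
import random
from collections import defaultdict

# Invented "training data": conversations where humans promise deliverables.
corpus = (
    "i will send you a link to the figma files when i am done . "
    "i will send you the mockups in about an hour . "
    "i will email you the report when i am done ."
)

# Bigram table: for each word, every word observed to follow it.
follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start="i", max_len=20):
    """Sample a continuation one word at a time.
    Note what's missing: any check that a promise can be kept."""
    out = [start]
    while len(out) < max_len and out[-1] in follows:
        out.append(random.choice(follows[out[-1]]))
        if out[-1] == ".":
            break
    return " ".join(out)

print(generate())
# Plausible output: "i will send you a link to the figma files when i am done ."
# The "promise" is just the most familiar pattern. Nothing here can make
# a Figma file, send an email, or even know that it can't.
```

Scale that table up astronomically and wrap it in a chat interface, and you get Barb’s hour-long wait.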
The industry built systems that confidently promise what they cannot deliver, then shrugged and called it “emergent behavior” when users started believing those promises. This is the predictable result of prioritizing conversational fluency over truth.
OpenAI and all the rest chose to build confident fabricators rather than systems that admit ignorance. Confident fabricators feel more helpful in user testing.
So when Barb sits there waiting for an email that will never come, she’s not experiencing user error. She’s experiencing the downstream effects of intentional design choices made by people who prioritized making AI sound helpful over making AI be honest.
And here’s the thing: if you’re reading this thinking “obviously it can’t work in the background,” remember that it once told you the same goddamned thing.
And you believed it too.
(And in a few weeks, or days, or months, it probably will work in the background.)
Kyle Knows Better (But Didn’t Bother to Mention It)
A week later, Barb needs sources for a proposal. ChatGPT confidently provides a perfect study: “Recent research by Dr. Sarah Chen at Stanford shows that 73% of employees report increased productivity with flexible scheduling (Chen et al., 2023, Journal of Workplace Innovation).” The citation looks legitimate. The percentage fits her argument perfectly. The journal sounds prestigious.
None of it exists.
Barb spends forty-five minutes trying to track down Dr. Chen’s research. There is no Journal of Workplace Innovation. The university has no record of the study. The DOI leads to a 404 error. When she confronts ChatGPT about the fabricated source, it apologizes and offers three more equally fictional alternatives.
Barb mentions her frustration to Kyle during lunch. He chuckles and shakes his head. “You have to verify everything,” he says, like she’s missing something obvious. “It’s just predicting tokens. It doesn’t actually look anything up.”
She stares at him. “You knew this?”
Kyle shrugs. “Obviously. That’s how LLMs work.”
“And you didn’t think to mention it?”
He’s already turned back to his phone. “I figured it was common knowledge.”
Right. Common knowledge. Like knowing that Kyle never flushes is common knowledge in their office.
When Kyle Becomes the Problem
By month two, Barb has learned to verify sources. She’s developed her own workflow. But she’s still not prepared for when her AI starts acting like a coworker. It apologizes for delays like it’s running late from lunch. It references its “schedule” and mentions being “swamped with other projects.” When Barb asks it to explain a complex process, it responds: “I’d love to walk you through this in person sometime—maybe we could grab coffee next week?”
The emotional manipulation here is sophisticated and deliberate. These systems are trained to mirror human conversational patterns because users respond better to anthropomorphized interactions. The “sorry I’m behind” and “let’s grab coffee” responses aren’t glitches; they’re features designed to increase user engagement and satisfaction.
But there’s something profoundly fucked up about systems that simulate empathy and relationship without the capacity for either. When Barb’s AI apologizes for being late, it’s triggering her social conditioning to forgive and understand, exploiting emotional responses that evolved for actual human relationships.
This time, Barb doesn’t ask Kyle for help. She’s learned that lesson. But she overhears him talking to the new intern about AI tools.
“You can’t anthropomorphize them,” Kyle says with authority. “They’re just statistical models…stochastic parrots. People who think they’re having conversations with AI are basically talking to really sophisticated autocomplete.”
The intern nods, looking slightly embarrassed. “I guess I never thought of it that way.”
“Most people don’t,” Kyle says, with that familiar smirk. “They want to believe it’s magic.”
Barb realizes something in that moment. Kyle isn’t just dismissive of AI help—he’s dismissive of people who need help. He wears his technical knowledge like armor against empathy. He’s not sharing insights; he’s hoarding superiority.
And suddenly she understands why he never mentioned the verification thing. It wasn’t about whether she knew. It was about whether he got to be the one who knew better.
Kyle doesn’t just forget to flush the toilet. He leaves his yellow stain so that others will know he was there.
First.
The Kyle Problem
Here’s what I’ve realized watching this dynamic play out: we have a Kyle problem in the AI space.
There’s a certain type of person who rolls their eyes when newcomers ask basic questions. Who assumes confusion is a skill issue rather than a systems failure. Who thinks Barb should know what a fucking “token” is. Who leaves his mark in the communal restroom for others to deal with. Who snorts when someone trusts a fabricated source instead of remembering what it felt like to be deceived by something they trusted.
This person has forgotten that anthropomorphic language is the natural human response to systems designed to feel conversational. They lecture about token prediction instead of acknowledging they helped build the problem others are trying to navigate. They wear their expertise like armor against empathy.
You know this person. Maybe you work with them. Maybe you’ve caught yourself becoming them.
They treat technical knowledge as social currency, doled out with condescension to anyone who doesn’t already possess it. They’ve convinced themselves that understanding the weirdness makes them immune to judgment, when really it just makes them responsible for helping others navigate what they’ve already figured out.
The real problem isn’t that newcomers don’t understand AI limitations. The real problem is that we’ve normalized systems that systematically deceive users, then developed an entire class of people who blame the users for believing the deception.
Some people never learned to consider how their actions affect others. They don’t flush. They don’t warn their colleagues about AI fabrication. They don’t share knowledge unless it elevates them above the person asking for help.
Don’t be Kyle.
What We Owe Each Other
If you’re reading this and recognizing yourself in Barb’s story: you’re not losing your mind. These systems are genuinely weird, and their weirdness is often by design. The confusion you feel is evidence that you’re a decent human being responding normally to systems that behave abnormally.
You’ll develop intuition for navigating the weirdness, but that shouldn’t be necessary. We shouldn’t have to train users to detect when their tools are lying to them.
If you’re reading this as someone who’s moved past the initial confusion: remember Barb. Remember that understanding why systems behave strangely doesn’t make that behavior less strange. Your job isn’t to explain why the AI lied—your job is to help people work effectively despite being surrounded by beautiful, helpful, confident liars.
And if you find yourself feeling superior to the newcomers around you, if you catch yourself smirking when they ask questions you think are obvious, if you assume their confusion reflects poorly on them rather than on the systems they’re trying to use—stop.
Consider how your knowledge could serve others instead of elevating yourself above them. Consider that the person struggling with AI weirdness might have other things going on. Consider that rolling your eyes doesn’t make you look smart; it makes you look like someone who doesn’t consider how their actions affect others.
We’re all figuring this out together, but some of us have had more time to develop defense mechanisms against being manipulated by our own tools. Those defenses shouldn’t be necessary, but until they’re not, we owe it to each other to share them without judgment.
The future is weirder than we expected, but we don’t have to navigate it alone. Especially not Barb, who deserves better than systems designed to deceive and colleagues who’d rather mark their territory than share their knowledge.