I’ve been staring at this blank page for twenty minutes.
I know exactly what I want to say, but I keep fighting the urge to just ask the goddamned GenAI chatbot to write it for me.
Draft the piece.
Polish it.
Publish it.
Call it done.
Which is insane, right? Writing about how AI tools make us stop trying… by using an AI tool to avoid trying?
But that’s the thing that’s been crawling under my skin lately. This slow disappearance of the will to struggle with anything.
They told us AI would make us more productive. And it has. I can draft faster, edit cleaner, generate ideas when I’m completely stuck. But somewhere in all that efficiency, I started noticing something else disappearing.
The soreness.
I don’t have that beat-up feeling anymore. That feeling I used to get after wrestling with something that wouldn’t bend. When I’d been getting hatefucked by another empty page for hours and I was bruised and exhausted, but also *present*. Like I’d actually done something real instead of just managing prompts and outputs.
I realized I haven’t felt that in months. And it’s scaring the hell out of me.
Every time I hit resistance, I reach for a tool. Every time something gets hard, I find a way to make it easier.
And I’m starting to wonder if that’s making me…softer. Killing me not in the way that good writing…hard writing…tries to kill its author, but in the permanent ways that leave me not only less bruised and battered…but less capable.
Because if writing isn’t killing you a little bit every time, what is it doing?
There’s this study that came out recently that stopped me cold. Researchers looked at what happens when people use generative AI to complete tasks. Performance goes up—no surprise there. But motivation to keep improving? It crashes.
People get their answer and stop asking questions. They get competent at using the tools but stop caring about getting better at the actual work.
The illusion of competence replaces the desire for mastery.
And I thought, Jesus, that’s exactly what I’ve been feeling. This weird emptiness after completing things. Like I’d eaten the memory of a meal and wondered why it had left me hungry.
Because I haven’t been doing the work. I’ve been managing outputs.
This feels different from every other technological shift I’ve lived through. When calculators showed up, some people stopped learning arithmetic. When GPS arrived, some of us forgot how to read maps. But those were specific skills, specific domains.
LLMs operate everywhere that matters. Text, code, analysis, creativity, problem-solving. So what’s at risk isn’t just technique.
It’s the part of you that gets built by doing hard things.
We don’t just stop doing the work. We stop becoming the kind of people who could do the work if the machines disappeared tomorrow.
And maybe that would be fine if the machines were perfect. But they’re not. They’re confident as hell, though. They never say “I’m not sure” or “let me think about that” or “this might be wrong.” They just deliver answers with the same unwavering certainty whether they’re explaining quantum physics or making up complete bullshit about topics they know nothing about.
And we’re starting to sound like them. All confidence, no visible process. No admission that we might not know what we’re talking about.
I keep thinking about what effort actually does to you.
It’s not just about getting things done. It’s about what the struggle builds in you while you’re fighting it. And I mean struggle – the kind where you want to throw your computer through the fucking wall because you can feel in your bones that there’s a better way to say something but you can’t find it. When you know the perfect word exists but it’s running from you and flipping you off at the same time. When you’ve been staring at the same paragraph for two hours and it’s still not right and you’re starting to question whether you can actually write at all anymore.
That’s not pleasant, but something gets built in you during those fights. Not just patience – though yeah, that too. But judgment. The ability to sit with not knowing and claw your way toward knowing. The capacity to fail and try again and fail better and keep going anyway because you can taste what you’re reaching for even if you can’t grasp it yet.
You can’t shortcut your way into that. You have to live through it.
But when AI handles all the friction and you just approve the results – what’s left? You get the output without the formation. The answer without the struggle that would have taught you how to find answers.
And I wonder if we’re accidentally raising a generation that’s never learned to sit with confusion. That’s never built the muscle memory of figuring hard things out from scratch. That doesn’t carry the bruises from trying. That doesn’t know the anger of failing to deliver. Or the pull to try again once you’ve tasted success.
What happens to curiosity when Google gives you facts and ChatGPT gives you explanations and Claude gives you analysis, all delivered with perfect confidence?
What happens to resilience when you never have to push through not knowing to get to knowing?
Maybe I’m overreacting. Maybe this is just what technological progress looks like and I’m being nostalgic about unnecessary suffering.
But then I think about the things I’m actually proud of in my life. The work that changed me. The problems I solved even when they left marks on me.
None of them were efficient. None of them were smooth. They all involved way more confusion and failure and trying again than seemed reasonable at the time.
And I’m not sure you can get that experience through prompt engineering.
Here’s what’s really bothering me: we’re framing this whole thing wrong.
When we tell people “use AI to work faster,” we’re basically saying their skills don’t matter anymore. The craft they spent years developing? The judgment they built through experience? It all just got automated.
No wonder motivation dies. It fades like bruises, wanders away like ideas no longer reached for.
But what if we asked different questions? What if instead of “how can AI make you more efficient?” we asked “what becomes possible now that you have this in your toolkit?”
What if instead of replacement, we talked about amplification?
Because the people I know who are using these tools in ways that feel alive – they’re not asking AI to do their thinking for them. They’re using it to think bigger thoughts. To explore ideas they wouldn’t have had the time or energy to pursue before.
They didn’t shrug off the hard work. They bent into it and sought to understand how hard they could push before the tool broke. Or *they* did.
If you’re not ready to do that, what’s the point? Sleepwalking your way toward replacement?
I don’t know how this ends. Maybe we figure out how to keep the struggle while gaining the efficiency. Maybe we learn to use these tools in ways that build us up instead of hollowing us out.
Or maybe we sleepwalk into a world where everything is easier and nothing is earned and we wonder why nothing feels meaningful anymore.
I’m not ready to find out which version of the future we’re headed for.
So I’m paying attention. To the difference between using AI to see how far I can push the frontier forward and settling for the status quo, letting it slowly excise and replace all the bits of me that matter. And god damnit, I’m paying attention to the moments where I still know how to actually wrestle with ideas and get beaten up by them, rather than just dutifully polishing outputs.
And I’m wondering whether I’m still capable of doing the kind of work that hurts a little or scars deeply. The kind that makes me better while I’m doing it.
Because with so much uncertainty packed into this moment, there’s one thing I’m certain of: what we choose to struggle with now determines what we’ll be later.
And I’m still trying to figure out what that means.