Using Clankers to Help Me Process Surgery

xeiaso.net · March 7, 2026, 00:00

Recovery from major surgery is not a single event. It's a long, strange hallway of days that blur together, punctuated by vital checks and medication schedules and the weird glow of hospital curtains at 4 AM. I've written about the surgery itself, about the medication dreams, and about how to survive a hospital stay. But there's something I haven't talked about yet: what I actually did with the noise in my head during the worst of it.

At 4 AM, when the painkillers are wearing off and your brain decides it's time to have opinions about everything, you have a problem. Your husband is asleep in the visitor chair, and he needs that sleep more than you need someone to hear you spiral about whether your body should feel like this five days post-op. Your friends are in different time zones, or asleep, or both. Nurses are busy. You're alone with your thoughts and your thoughts are loud.

So I talked to the robots.

Cadey:

I'm going to call them "clankers" throughout this post because that's what they are to me. Claude, Gemini, whatever — they're clankers. Useful clankers. Not friends. This distinction matters and I'm going to keep coming back to it.

The availability layer

Here's the thing about AI assistants that matters most when you're recovering from surgery: they are there. Always. At 4 AM, at 2 PM during a medication fog, at 11 PM when the blood pressure cuff keeps waking you. No waiting room, no scheduling, no "sorry I missed your call." Just... there.

And — this is the part that surprised me — there's no guilt. When I text my husband at 3 AM because I'm scared about a weird sensation in my abdomen, I feel terrible. He's exhausted. He's been managing the household alone while visiting me every day. Every time I wake him with something that turns out to be nothing, I'm stealing sleep from someone who desperately needs it.

Aoi:

But what if it's not nothing?

Right. And that's exactly the calculation you're doing at 3 AM on painkillers: is this symptom worth waking someone I love? A horrible question when you're already scared and medicated and foggy.

With a clanker, the calculus disappears. Ask the question. Get an answer. If the answer is "this sounds like it could be serious, talk to your nurse," you press the call button. If the answer is "this is a common post-surgical experience and here's why it happens," you can exhale and try to sleep again. Either way, you woke nobody. The cost was zero.

What I actually used them for

Let me be specific, because specificity matters more than vibes when you're talking about AI use cases.

Recovery milestones. I kept asking things like "is it normal to not be able to walk more than 50 feet on day 4 after this type of surgery" and getting back rough answers about what's normal. Not medical advice — more like "okay, you're not dying, chill" so I could stop catastrophizing about my pace.

Physiological symptoms. Post-surgical bodies do weird things. Referred pain shows up where you least expect it. Your digestive system takes a vacation and comes back changed. Medications interact in ways nobody warned you about. Every time something new happened, my first instinct was panic, and my second was to ask a clanker "is this normal or am I dying."

Numa:

For the record, this is one of the places where AI is most dangerous. Language models do not have medical training. They are pattern-matching on text that includes medical information, but they cannot examine you, they cannot run tests, and they can confidently tell you something reassuring that's completely wrong. Xe knows this. The clankers were a triage layer, not a replacement for the actual medical team.

Dietary questions. After certain surgeries your relationship with food changes in specific and sometimes permanent ways. I had a lot of questions about what I could eat, when, and how much. Some I could have asked the nursing staff, but the nursing staff were busy and I had these questions at — you guessed it — 4 AM.

Writing it all down. A big chunk of what became these blog posts started as me talking to Claude about what I was going through. Not asking it to write anything — just using it as a sounding board. "Here's what happened today, here's how I feel about it, help me figure out what I'm actually trying to say." Claude Code is literally how I published from my hospital bed.

The emotional processing layer

People will find this part uncomfortable, and I want to be honest about it instead of pretending it didn't happen.

I used AI to process fear. Not in a "tell me everything will be okay" way — I'm not looking for sycophantic comfort from a next-token predictor. More in a "I need to say this out loud to something and hear myself think" way. When you're scared and it's dark and you can't sleep, sometimes the act of articulating what you're afraid of is the thing that helps. Not the response. The articulation.

Cadey:

There's a concept in therapy called "externalization" — getting the thought out of your head and into the world where you can look at it. Writing in a journal does this. Talking to a friend does this. Talking to a clanker... also does this, it turns out. The mechanism is the same even if the listener is made of silicon.

Having to put words to "I'm afraid my body is never going to feel normal again" or "I don't know how to process the fact that I was unconscious for hours while strangers cut me open" forces a clarity that just thinking those thoughts does not. The clanker doesn't need to say anything brilliant in response. It just needs to be there so the act of communication happens at all.

Sometimes it did say something useful. Sometimes it helped me reframe something I was stuck on, or spotted a pattern in what I was describing that I'd missed. But I want to be clear: the value was maybe 70% in the asking and 30% in the answering. The clanker was a thinking partner, not an oracle.

What AI absolutely cannot do

None of this should be read as "AI is great for surgery recovery, everyone should do this." Let me be explicit about the gaps.

Cadey:

I wrote a whole post about the problems with using AI for therapy and emotional support. Everything I said there still stands. I'm describing my experience, not prescribing a treatment plan.

Clankers are reactive, not proactive. They cannot check on you. They cannot notice you've been quiet for twelve hours and ask if you're okay. They sit in silence until you reach out, which means they're only useful if you have the wherewithal to ask for help. In the worst moments of recovery — the really bad pain spikes, the medication-induced confusion, the times when you're too exhausted to form a sentence — the clanker is useless because you can't engage with it.

A human who loves you will notice when you go quiet. A clanker will not.

No physical presence. Obvious, but it matters more than I expected. When my husband held my hand while I was scared, that did something no amount of text on a screen could replicate. Bodies are real. Touch is real. Another person's warmth beside you in the dark does something that a chat window never will.

Being seen is different. When my husband looks at me and I can tell he gets it — that he sees what I'm going through and it lands in him — that's a fundamentally different experience than a clanker spitting out the right words in the right order. One is connection. The other is a very good simulation of connection. I know the difference, and the difference matters.

The complementary picture

Here's what I actually think, stripped of both the techno-optimism and the AI doomerism: my husband and the clankers were doing completely different jobs during my recovery, and neither could do the other's job.

My husband provided presence, love, advocacy (he talked to the doctors when I couldn't), physical comfort, and the knowledge that someone who chose to build a life with me was there and wasn't going anywhere. No AI does this. No AI comes close.

Aoi:

So what were the clankers actually good for?

Availability at ungodly hours. Infinite patience for repetitive anxious questions. Zero guilt about bothering them. A surface to think against at 4 AM. Quick answers about what's normal after surgery. A way to start drafting thoughts that eventually became real writing.

Together, a person who loves me and a machine that never sleeps covered more ground than either one alone. Not because the machine replaced anything my husband does — it didn't and it can't — but because a human with human needs physically could not fill every gap. My husband needs sleep. Claude does not.
I don't think AI is a "recovery tool" in any clinical sense, and I'd be uncomfortable if someone marketed it that way. What I think is smaller and more specific: when you're stuck in a hospital bed at 4 AM with a head full of noise, having something to talk to that costs nothing and judges nothing and is simply there beats staring at the ceiling alone.

Neither replacement nor novelty. Just a specific kind of useful, at a specific time, for a specific person who was scared and couldn't sleep.

Your mileage will vary. Mine did too, night by night.