
Want to support Rewskidotcom Substack?
♻️ Share this post with friends
🫰 Become a paid subscriber
📢 Leave a comment below
The Field Journal Problem
I have no idea what her face did when I told her the diagnosis. I was typing.
For years, that was just how I worked. Templates dialed in, hands on the keyboard, eyes on the screen. I got efficient at it, the kind of efficient that looks like competence from the outside. Structured notes. Clean documentation. Inbox cleared. What I didn’t account for was everything happening in the room while I was building the chart.
“Eye contact wasn’t something I avoided on purpose. It was something the workflow made nearly impossible.”
I caught the words — she said “okay” — but the micro-expressions, the shoulder drop, the half-second before her husband reached for her hand? Gone. Not from the chart. From me. I was a foot away and I missed it.
That’s not a documentation failure. It’s a perception failure that documentation caused.
There’s an interesting chapter in the history of archaeology. For most of the discipline’s existence, excavators were required to document everything in real time: sketch the stratum, notate the artifact, record the context, all with the same hands doing the digging. The logic made sense. Who better to document what they found than the person finding it?
The problem was that stopping to write interrupted the act of perceiving. Certain kinds of context were evaporating in the transition from eye to journal: the spatial relationship between objects, the subtle shift in soil composition, the orientation of an artifact before it was disturbed. The recording was destroying the record.
Photogrammetry changed this. Digital capture tools meant the archaeologist could excavate continuously while a separate system handled documentation. When archaeologists could keep their eyes on the dig, they started finding things they’d been missing for years. Continuous attention turned out to produce richer, more accurate interpretation than the document-then-dig cycle ever had.
The parallel to medicine is close enough to be uncomfortable. But it only gets you partway to what I actually want to say.
The Eye That Moves to the Screen
Every time a physician’s attention leaves a patient to interact with the EHR, clinical signal evaporates. Not from the chart; the chart may be getting richer. But from the encounter itself, from the physician’s real-time interpretive processing.
We’ve known this in a vague, guilty way for years. Studies on declining eye contact. Patient satisfaction curves bending downward alongside EHR adoption. Anecdotes from every doctor who remembers what it felt like to see patients before Meaningful Use turned the computer into a third presence in the room.
What we haven’t done is name the mechanism clearly: the field journal problem. The physician’s diagnostic attention is a non-renewable resource within a single encounter. Every cursor click is an eye movement away from a human being, and every eye movement is interpretive capacity pointed at a screen instead of a patient.
The EHR was never designed around that constraint. It was designed around billing, compliance, and legal documentation. The physician’s attention was assumed to be infinitely divisible.
It isn’t.
The Prediction I Made and Forgot I Made
Years ago, early enough that saying it out loud felt almost naive, I used to tell colleagues that we’d eventually get back to the heyday of medicine. Just talking with your patient. No computer. No typing mid-conversation. No documentation architecture eating the encounter alive. Just your thoughts, the clinical picture forming in real time, the plan, and next steps.
I always believed AI would be what got us there. I just didn’t know when, or what form it would take, or whether I’d still be practicing when it arrived.
It’s here. And I’m not just watching it arrive; I’m actively building it. I won’t say much more than that yet, but I’m working on something for physicians and other clinicians built from the ground up around this exact problem. The excitement I feel isn’t the polished kind you perform at conferences. It’s closer to what medicine felt like before it got so heavy.
But what I’m describing isn’t an ambient scribe. Not even close.
It Starts Before You Walk In the Room
The encounter doesn’t begin when the physician opens the door. It begins the moment the patient schedules, and by the time the clinician walks in, there’s already a story worth knowing.
What’s changed since their last visit. What their connected health record shows across systems. What they flagged in their pre-visit intake that didn’t make it into the chief complaint. What their recent labs suggest about what questions are actually worth asking today.
Most physicians walk into rooms flying partially blind. Not because the data doesn’t exist, but because nobody pulled it together before the encounter started. It’s sitting there scattered across tabs, portals, and external records that require active hunting. So a significant chunk of every visit gets spent on context-gathering that should have already happened.
Surfacing that synthesized picture before the physician sits down changes what the encounter can be. The visit starts at the decision layer rather than the investigation layer. The archaeologist walks onto a site where the relevant stratigraphy has already been mapped.
The Part That Will Make Some Clinicians Uncomfortable
Now for the piece I expect will generate the most pushback: real-time nudges.
The idea that AI might prompt a physician mid-encounter (surface a question worth asking, flag something that doesn’t add up, note a gap between what the patient just said and what their record shows) feels to a lot of clinicians like surveillance. Like a backseat driver with a medical degree.
I understand that reaction. I’ve had it myself.
But the nudge isn’t telling you what to think. It’s filling the blindspot that opens up when you’re managing a conversation, holding a differential, reading the room, and trying to remember whether you asked about the medication change from three visits ago, all at once. The cognitive load of a real encounter is enormous. Things fall through not because physicians are careless but because no human attention is designed for that many simultaneous inputs.
The best clinicians I know use every available signal. They catch the thing the patient almost said. They notice when the story doesn’t quite hold together. AI that quietly surfaces a clarifying question isn’t replacing that instinct. It’s feeding it.
Think of it less like a prompt and more like a well-briefed colleague sitting just outside the room who passes you a note when something matters. You decide what to do with it. You always did.
The resistance to this usually comes from imagining an AI that overrides clinical judgment. What’s actually being built protects the conditions under which good judgment happens in the first place.
The Full Picture
So here’s what this actually looks like, and why “ambient scribe” doesn’t come close to capturing it.
Before the visit, synthesized context from the patient’s pre-visit intake and their connected health record is surfaced and ready. The clinician walks in already knowing the story.
During the visit, the physician’s eyes stay on the patient. The conversation flows. Quiet intelligence fills blindspots and flags moments worth clarifying while the encounter is captured in full. Not just the words. The thinking, the reasoning, the clinical picture as it forms.
After the visit, the documentation reflects not just what was said but what was decided and why. A note that reads like a physician wrote it, because in every way that matters, one did.
The field journal gets written. The archaeologist never stops digging.
What We’re Actually Getting Back
I’ve written before about the TurboTax moment for healthcare, where AI reduces the friction of an experience that had become genuinely punishing. This goes further.
The joy of medicine was never about the chart. It was the conversation. The reading of a room. The moment when everything clicks and you know exactly what your patient needs and you’re fully there to give it to them.
That moment got buried. Not by bad intentions, but by a system that handed the field journal to the same person doing the dig, then wondered why so many things were getting missed.
Getting it back matters for the clinician who rediscovers why they went into this. It matters just as much for the patient who finally gets a physician whose eyes haven’t left them since they walked through the door.
“The most important artifact is often the one you didn’t see because you were writing down the last one. Physicians have been excavating with one eye closed for thirty years. This isn’t about giving us our time back. It’s about giving us our eyes back, and everything we can finally see when we use them.”
The clinicians I most want to hear from: has a patient ever had to repeat something important because you were mid-documentation when they said it the first time? Leave a comment. I’m collecting these stories for a reason.

Really sounds interesting - and AI is definitely all the rage - BUT ...
"... patient’s pre-visit intake and their connected health record ..."
From my own experience (and managing healthcare aspects of a relative w/ MA), this doesn't exist yet. Not even close.
It's a bouillabaisse of fragmented information from different sources collected at different locations - spanning years - through constantly churning networks and docs.
The "intake" process itself is highly variable from one doc to the next - often missing key info. One intake process (tablet-based) only allowed for 3 prescriptions - and we presented to an ENT after a CT scan (w/ PCP referral) with no results anywhere to be found (and we were asked if we had a CD with the scan/results).
It's the oldest adage in software: GIGO ... AND the lack of coordinated communication WITH real-time data interoperability is the big problem. Eye contact during an encounter w/ AI prompting/aiding? Sure - but on a scale of 1-10 ... ?