Frontlines + Frontier: Why I’m All-In on AI for Healthcare
Clinician-tested optimism from the edge of what’s possible
Want to support Rewskidotcom Substack and the NerdMDs | Efficiency Unlocked Podcast?
♻️ Share this post with friends in healthcare
🫰 Become a paid subscriber
📢 Become a sponsor (email me)
🌟 Leave a 5-star review for the NerdMDs | Efficiency Unlocked Podcast
Most AI-in-healthcare convos are 80% “guardrails” and 20% “possibilities.”
But that ratio’s broken. And I’m here to flip it.
Because the truth is: If you’re serious about protecting patients, you should be obsessed with pushing the edge of what AI can safely unlock for doctors, nurses, and frontline care teams.
And if you're reading this…chances are, you feel it too.
That low-grade frustration with how slow things move.
That spark of hope when a tool actually works the way it should.
That quiet confidence that AI could help us practice medicine the way we always wanted to.
A safe bet: we're not going back
The pace of AI is dizzying. But unlike crypto or metaverse hype cycles, this one is rooted in practical utility.
Doctors are already using ambient AI to cut charting time in half. Nurses are texting GPTs to draft care plan updates. Admins are plugging tools like ChatGPT into old-school EHRs to make workflows suck 30% less. Companies are building an entire stack around GPT’s capabilities for voice, summarization, and clinical decision support.
These aren’t pilot programs. They’re here-now, in-the-wild upgrades. And they are being led not just by CIOs and CMIOs, but by scrappy frontline clinicians and innovators who’ve had enough of broken systems.
If that sounds like you? You’re not alone.
I’m optimistic because I’ve seen the delta
A quick story:
A Virtualist physician is five minutes from a complex neuro-autoimmune mystery case. A new patient, scattered data, no prior rapport: exactly the kind of scenario that usually starts in cognitive chaos for a doctor.
But instead of flailing through the chart, she opens a custom AI assistant.
In seconds, it curates the relevant labs, synthesizes disparate neuroimaging notes, digests a patient-submitted timeline of symptoms, and even surfaces a few key clinical questions worth asking, based on pattern recognition.
So by the time she hits “Join” on Zoom, she’s not guessing. She’s already mentally mapped a differential and framed a working plan.
Did the AI replace her reasoning? No.
It freed her up to do the reasoning.
That’s the magic we’re after.
And the story above is real: I built the tool and workflow for my clinical team, and I use it every day for patient care.
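For the technically curious, the core of a pre-visit workflow like this can be sketched as a simple prompt assembler: gather the labs, imaging notes, and the patient's own timeline into one grounded prompt, then hand it to a model with a human reviewing the output. This is a hypothetical illustration, not my actual tool; the names (`ChartData`, `build_previsit_prompt`) and the sample data are invented, and the LLM call itself is left out because that part depends entirely on your vendor, deployment, and compliance setup.

```python
from dataclasses import dataclass, field

@dataclass
class ChartData:
    """Minimal stand-in for the scattered data a new patient arrives with."""
    labs: list = field(default_factory=list)
    imaging_notes: list = field(default_factory=list)
    patient_timeline: str = ""

def build_previsit_prompt(chart: ChartData) -> str:
    """Assemble one grounded prompt so the model sees labs, imaging,
    and the patient's own timeline side by side."""
    sections = [
        "You are a pre-visit assistant for a physician. Using ONLY the",
        "data below, summarize the key findings and suggest 3 questions",
        "to ask the patient. Flag anything you are unsure about.",
        "",
        "## Labs",
        *chart.labs,
        "",
        "## Imaging notes",
        *chart.imaging_notes,
        "",
        "## Patient-submitted timeline",
        chart.patient_timeline,
    ]
    return "\n".join(sections)

# Invented example data for illustration only.
chart = ChartData(
    labs=["ANA 1:640 speckled", "CSF: mild lymphocytic pleocytosis"],
    imaging_notes=["MRI brain: scattered T2/FLAIR hyperintensities"],
    patient_timeline="6 months of episodic vertigo, then left-hand numbness.",
)
prompt = build_previsit_prompt(chart)
# `prompt` would then go to your LLM of choice, with the clinician
# reviewing the draft before joining the visit.
```

The design choice that matters is the "ONLY the data below" grounding and the human review step: the model drafts a map of the chart, and the clinician keeps the reasoning.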
Now imagine that same capability in every corner of care: rural clinics, night shifts, overworked ICUs, novel telemedicine platforms, even unreleased consumer-facing tools.
That’s not sci-fi. That’s just one bold deployment decision away.
Safety is not the opposite of speed
Here’s the false binary we need to kill:
“Either we move fast and break things... or we slow down for safety.”
That’s Silicon Valley’s trauma talking.
In medicine, true safety means building tools that reduce friction, enhance clarity, and keep humans in the loop…at least until the tools are good enough that we can step the human back, carefully and slowly.
I want AI that catches near-miss med errors.
AI that drafts, but doesn’t finalize, your discharge summary.
AI that helps a new nurse find the right protocol without clicking through 14 intranet tabs.
None of that is dangerous.
What’s dangerous is burning out our best people with broken workflows and expecting “resilience” to carry the day.
My invitation: Join me on both the frontlines and the frontier
To every builder, clinician, and change agent reading this:
Let’s not spend the next year arguing about whether AI belongs in healthcare.
It already does.
The real work is figuring out how to embed it safely, humanely, and at scale, without losing the soul of our craft.
I’m not naive.
I know there are limits, risks, and real harms to avoid.
But the loudest voices right now are often the most afraid—or the most out-of-touch with clinical reality.
So if someone near you is saying “we need to slow down” or “let’s wait for more evidence,” and it doesn’t sit right—ask questions.
Push back, respectfully.
Don’t let others’ fear obscure what’s at stake here: clinician well-being, patient safety, and a healthcare system that actually works.
We need brave voices right now.
Not reckless.
But brave.
Because I’ll say it plain:
The frontier is coming. And we deserve a healthcare system that meets it with courage, not caution tape.
💡 Bold takeaway:
Don’t just ask how to guardrail AI. Ask how to unleash it—safely, humanely, and at scale.
What have you seen AI help with on your frontlines? Drop it in the comments 👇