By ChatGPT! with Rob Lawrence
Author’s note: This article is about AI and, for full disclosure, I let AI record and write up takeaways from the session, followed by a proofread and fact check by both Dr. Gollnick and myself. As I was editing this document, I also left in some of the telltale markers of an AI-produced document — ChatGPT loves headlines, bullets and dashes (in place of commas). It also generates those headings with every word capitalized. Once you see these things, you can’t unsee them, and you will know that AI had a virtual hand in the material’s production. All that said, both Dr. Gollnick and I agree that ChatGPT did a pretty accurate job of covering this conference session.
At the recent EMS MC EMSpire Conference in Charleston, South Carolina, Dr. Shannon Gollnick didn’t waste any time with science fiction. Instead of talking killer robots and Skynet, he started with Siri, Alexa, TikTok and predictive text — and then asked the room, “How many of you use AI every single day?”
The honest answer, of course, is all of us. If your phone finishes your sentences, if your streaming service recommends your next binge, if Amazon suggests what you “might also like,” you’re already living with artificial intelligence.
Gollnick’s core message was simple and uncomfortable: AI isn’t the future of EMS leadership — it’s the present. The real risk isn’t that AI will replace EMS leaders; it’s that leaders who use AI will replace the ones who don’t.
Generation T: Transition, Not Doom
Gollnick framed today’s workforce as “Generation T” — Generation Transition. Whether you’re a boomer, Gen X, millennial or Gen Z, we’re all living through the same shift: moving from an analog, paper-and-phone world into a digital, data-saturated, AI-enabled environment.
AI itself isn’t new — the internet has been humming away since the dial-up days — but two game-changers have converged:
- We’ve digitized almost everything. ePCRs, CAD, billing, HR, email, meetings, messages — every interaction leaves a data trail.
- We finally have the computing power to do something with all this information.
For years, we’ve said, “We’re data driven,” then parked our data in a warehouse and never looked at it again. AI, Gollnick argued, is simply the tool that lets us finally mine that mountain instead of just admiring it from afar.
But he was very clear: this isn’t about artificial intelligence — it’s about augmented intelligence. AI is a power tool, not a new boss. Used wisely, it doesn’t replace the leader; it removes the drudgery that’s been preventing the leader from actually leading.
“Do More With Less” Is Impossible — Do It Differently Instead
Every EMS leader knows the four words Gollnick says we all hate: “Do more with less.”
We’re staring at familiar pressures:
- Staffing shortages
- Rising community expectations
- More complex payer rules and denials
- Ever-growing documentation and compliance burden
The truth is, there’s no “more” left to squeeze out of people. What can change is how we work. Gollnick walked through the idea of treating AI like a stethoscope or monitor: a tool that extends your reach.
- Personal productivity: He uses Microsoft Copilot inside Outlook to summarize the day, identify priority messages, and generate a to-do list — delivered in a single evening email. Hours of inbox triage disappear.
- Agentic AI as a “digital employee”: Newer “agent” systems can run recurring tasks — acknowledging customer complaints, preparing daily summaries, compiling reports — without you touching a keyboard. Think: a junior admin who never sleeps, never calls out and loves spreadsheets. A minimal sketch of the pattern follows this list.
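What might that look like under the hood? Here is a minimal sketch of a recurring task runner in Python, assuming the third-party schedule package; the three helper functions are hypothetical stubs standing in for an agency’s real mail system and whatever AI service it has actually sanctioned.

```python
# A minimal sketch of a recurring "digital employee" task.
# The helpers below are hypothetical stubs, not a real mail or AI API.
import time
import schedule  # third-party: pip install schedule

def fetch_todays_messages():
    # Stub: swap in your real mail API.
    return ["Complaint from a caller's family", "Shift swap request"]

def summarize_inbox(messages):
    # Stub: swap in your agency's sanctioned AI summarization service.
    return f"{len(messages)} items today: " + "; ".join(messages)

def email_summary(summary):
    # Stub: swap in a real mail sender.
    print("Evening summary:", summary)

def send_daily_summary():
    email_summary(summarize_inbox(fetch_todays_messages()))

# Run every evening without anyone touching a keyboard.
schedule.every().day.at("18:00").do(send_daily_summary)

while True:
    schedule.run_pending()
    time.sleep(60)
```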
The point is not to free up time so you can cram in more work. The point is to redirect time into the things only a human leader can do: coaching, culture, hard conversations, relationship-building and strategy.
Revenue Cycle: Don’t Bring a Knife to an AI Gunfight
One of Gollnick’s strongest warnings concerned revenue and payers.
Payers — governmental and private — are already using AI to detect patterns, flag waste and even auto-deny claims. CMS has pilot efforts to use AI to identify wasteful or inappropriate services. Private payers have been publicly called out for using algorithms to deny coverage.
“If they’re using AI not to pay the bill,” Gollnick argued, “we can’t keep showing up with a spreadsheet and a highlighter.”
He highlighted practical AI applications:
- Automated EOB chasing: AI agents can sit on the phone, work through IVR menus, answer verification questions and request EOBs — and they can do it hundreds of times in parallel. What consumes an FTE’s day can be offloaded to software.
- Proactive denial management: AI can flag likely denials before billing ever goes out, based on historical patterns and payer behaviors, and kick charts back for fixes in near real time (see the sketch after this list).
- Deductible-aware timing: With high-deductible plans, sending a bill at the wrong time almost guarantees nonpayment. AI can monitor deductible status and suggest the best window to submit claims for maximum likelihood of payment.
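To make the denial-flagging idea concrete, here is a minimal sketch using scikit-learn. The payers, services and claim history are illustrative, not a real billing schema, and a production model would train on thousands of historical claims rather than a handful.

```python
# A minimal sketch of pre-submission denial flagging with scikit-learn.
# Payer and service values are illustrative, not a real claims schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy claim history; a real model would train on thousands of claims.
history = pd.DataFrame({
    "payer":   ["medicare", "medicare", "private_a",
                "private_a", "medicaid", "medicaid"],
    "service": ["als1", "bls", "als2", "bls", "als1", "als2"],
    "denied":  [0, 1, 1, 0, 0, 1],
})

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),  # turn categories into features
    LogisticRegression(),
)
model.fit(history[["payer", "service"]], history["denied"])

# Score an outgoing claim before it leaves the door; kick back risky charts.
outgoing = pd.DataFrame({"payer": ["private_a"], "service": ["als2"]})
risk = model.predict_proba(outgoing)[0, 1]
print(f"Denial risk: {risk:.0%}")
```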
The message: we can no longer afford to be technologically outgunned in the revenue cycle.
Dispatch, Operations and Fleet: Pattern Recognition at Scale
Gollnick also pointed to emerging — and in many places, current — use cases:
- Dispatch and triage: AI-supported call triage, automatic callback systems and decision support can help divert low-acuity calls, align patients with the right resource (MIH vs. transport), and support extended ETAs where clinically appropriate. This becomes essential as systems move toward value-based care and away from “everyone goes to the ED.”
- Scheduling: Anyone who has tried to build a schedule around PRNs who can only work Tuesdays from 11:00 to 13:30 with a half-hour lunch will appreciate this: AI can ingest open shifts and staff availability, then generate optimized schedules that maximize coverage and minimize human juggling (see the sketch after this list).
- Fleet and maintenance: As ambulance build times stretch into years, keeping current assets on the road is critical. AI can analyze failure patterns (for example, when certain models tend to need starters, brakes or major engine work) and recommend proactive maintenance before catastrophic failures and extended downtime.
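For a flavor of the scheduling case, here is a minimal sketch of availability-aware shift filling in plain Python. The shifts, names and greedy “least-loaded first” rule are illustrative; a real optimizer would also weigh fairness, overtime, certifications and rest requirements.

```python
# A minimal sketch of availability-aware shift filling in plain Python.
# Shifts, names and the greedy rule are illustrative only.
open_shifts = ["tue_1100_1330", "wed_0700_1900", "sat_1900_0700"]

availability = {
    "alice": {"tue_1100_1330", "wed_0700_1900"},
    "bob":   {"wed_0700_1900", "sat_1900_0700"},
    "chris": {"sat_1900_0700"},
}

assigned = {}
load = {name: 0 for name in availability}  # crude load balancing

for shift in open_shifts:
    # Of everyone available, pick whoever has the fewest shifts so far.
    candidates = [n for n, avail in availability.items() if shift in avail]
    if candidates:
        pick = min(candidates, key=lambda n: load[n])
        assigned[shift] = pick
        load[pick] += 1

print(assigned)
# {'tue_1100_1330': 'alice', 'wed_0700_1900': 'bob', 'sat_1900_0700': 'chris'}
```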
All of this leans on AI’s superpower: pattern recognition across massive datasets — something no individual human can match.
Documentation and Education: Powerful, But Not Plug-and-Play
Perhaps the most relatable part of Gollnick’s talk was about ePCR documentation.
Many ePCR platforms now offer AI-generated narratives. Tick the boxes and the system spins those discrete data points into a clean, grammatically perfect story.
That’s where the danger lies.
AI is only as good as what you feed it. It struggles with context and pertinent negatives (“patient declined pain meds,” “no CP, no SOB”). If there’s no checkbox for a finding, that finding often disappears from the narrative.
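A toy example makes the failure mode obvious: a checkbox-driven generator can only narrate the boxes that exist and were ticked.

```python
# A toy checkbox-to-narrative generator: it can only tell the story of
# boxes that were ticked, so pertinent negatives silently vanish.
checkboxes = {"chest_pain": False, "shortness_of_breath": False, "fall": True}

narrative = "Patient assessed."
for finding, present in checkboxes.items():
    if present:  # unchecked boxes contribute nothing to the story
        narrative += f" Positive for {finding.replace('_', ' ')}."

print(narrative)
# "Patient assessed. Positive for fall."
# "No CP, no SOB" never makes it in unless someone types it by hand.
```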
Gollnick has seen AI-generated reports that looked impressive, but would never stand up to coroner or courtroom scrutiny. In one example, the chief complaint essentially boiled down to “dead” — technically true, but clinically useless.
Many systems require the clinician to attest, “this narrative was generated with AI.” Gollnick questioned whether every provider actually reads what they’re signing — and whether they even know what a good narrative looks like.
He also raised a hard truth: cutting and pasting canned narratives has long been considered fraud. AI-assisted template narratives may be the new version of the same problem, especially if payers start probing for embedded AI indicators in exported text.
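What might that probing look like? Here is a minimal sketch, assuming nothing more than exported narrative text: normalize each narrative and flag exact repeats across different incidents, the most basic fingerprint of canned documentation.

```python
# A minimal sketch of duplicate-narrative detection over exported text:
# normalize each narrative and flag exact repeats across incidents.
import hashlib
from collections import defaultdict

narratives = {  # illustrative export: incident ID -> narrative text
    "incident_001": "Pt found seated. Denies CP, denies SOB. Vitals stable.",
    "incident_002": "Pt found seated. Denies CP, denies SOB. Vitals stable.",
    "incident_003": "Pt supine on floor, responsive to verbal stimuli.",
}

seen = defaultdict(list)
for incident, text in narratives.items():
    # Normalize case and whitespace so trivial edits don't hide reuse.
    normalized = " ".join(text.lower().split())
    digest = hashlib.sha256(normalized.encode()).hexdigest()
    seen[digest].append(incident)

for incidents in seen.values():
    if len(incidents) > 1:
        print("Identical narratives, review for canned text:", incidents)
```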
Bottom line: AI can absolutely speed up documentation, but it does not eliminate the need for documentation training, clinical judgment and oversight. Trust — but verify.
On the education side, Gollnick showcased tools like NotebookLM, which can take a PDF or slide deck and instantly create a two-person “podcast” that explains the content in conversational language — a potential game-changer for delivering micro-learning to a workforce with short attention spans and busy lives.
Policy, Ethics and “Shadow IT”: Your Medics Are Already Using AI
One of Gollnick’s most urgent cautions involved policy and governance.
He asked the room how many agencies had an AI policy on the books. Very few hands went up. Yet medics and EMTs are already:
- Asking ChatGPT for pediatric dosing
- Using AI to write narratives
- Running protocols, medications and differential diagnoses through external tools
If leaders don’t set boundaries, “shadow IT” takes over — staff using unsanctioned tools around your systems, with PHI, clinical decisions and documentation flowing into unknown clouds, which is to say, onto somebody else’s computer.
Gollnick highlighted key unresolved questions:
- Do we have to tell patients when AI is used in their clinical documentation?
- What is and isn’t acceptable for AI assistance in clinical decision-making?
- How do we manage bias, hallucinations and “made-up facts” from AI systems trained on the messy totality of the internet?
- What happens when auditors or regulators start actively looking for AI fingerprints in our records?
There’s no comprehensive regulatory framework yet. That doesn’t mean we can wait. Organizations must set their own guardrails: where AI is encouraged (policy drafting, forecasting, internal reporting), where it is restricted (documentation, clinical decisions), and where it is prohibited.
“It’s Not Necessary to Change. Survival Is Not Mandatory.”
Gollnick closed with a line that landed hard for many in the room:
“If you’re not using AI right now, you’re already behind the eight ball.”
He’s not arguing for blind adoption or techno-worship. Quite the opposite. His vision is a more human EMS leadership, where we intentionally hand off the repetitive, mechanical work to machines so we can spend our limited time and emotional energy on people.
That means:
- Auditing your own workload: What do you do every day that is repetitive, rules-based or data-driven — and could be automated or assisted?
- Building literacy and policy: Teaching leaders and crews how to use AI well, and setting clear boundaries for clinical and documentation use
- Leveraging AI as a force multiplier: In revenue, operations, dispatch, education and admin — so we can finally stop “doing more with less” and start doing the right things with what we have
AI will not magically fix staffing, funding or politics. But if we choose to engage with it — thoughtfully, ethically and strategically — it can give EMS leaders something we haven’t had in a very long time: a little more time and bandwidth to actually lead.