Bottom line up front (BLUF): Dr. J. Brent Myers delivered one of the NEMSMA Leadership Conference’s most practical sessions, arguing that AI is no longer a future issue for EMS. It is here now, it is moving fast, and the agencies that will benefit most are the ones that put governance, security and workforce preparation in place immediately.
ARLINGTON, Virginia — At the inaugural Leadership Conference of the National EMS Management Association (NEMSMA), Dr. J. Brent Myers took on one of the biggest and most misunderstood topics facing EMS leaders today: artificial intelligence.
Myers was well-placed to do it. An emergency physician, former EMS medical director and past president of the National Association of EMS Physicians, he now serves as chief medical officer at ESO, where data, systems design and EMS operations increasingly intersect. In other words, this was not an abstract futurist talking about robots. This was a seasoned EMS physician executive talking about what leaders should do on Monday morning.
And that was the real value of the session.
Rather than drifting into hype or doom, Myers framed the conversation in terms every EMS leader could understand: now, next and later. The trick, he said, is that “next” is arriving faster than many agencies appreciate, and “later” is no longer comfortably distant.
| MORE: Artificial to augmented intelligence. How Dr. Shannon Gollnick wants EMS to work smarter, not harder
Why EMS can’t wait for federal AI rules
His first and most important point was blunt: if EMS is waiting for the federal government to issue a detailed rubric for AI use in healthcare, it will be waiting far too long. In Myers’ view, Washington is not going to move at the speed of the technology, and local systems cannot afford to sit still until somebody on Capitol Hill tells them what is allowed.
That means the responsibility lands locally.
For EMS leaders, that is both empowering and uncomfortable — empowering, because it means agencies can begin shaping AI use in ways that fit their communities, workflows and ethics; uncomfortable, because it also means they cannot outsource the hard decisions. Myers made clear that for at least the next two to three years, AI governance in EMS will largely depend on local and, in some cases, state-level leadership rather than on a comprehensive federal framework.
That set the tone for the rest of the talk: practical, urgent and grounded.
Myers spent time making sure the room was talking about the same thing. AI, he reminded the audience, is not one thing.
↪️Machine learning has been around for years.
↪️Deep learning expanded what systems could do on their own.
↪️Generative AI — the version most people now associate with ChatGPT and similar tools — has accelerated adoption dramatically.
↪️Agentic AI is now beginning to take over repetitive tasks with minimal human prompting.
That distinction matters because many EMS agencies are already touching these tools without fully recognizing it.
The real risks and opportunities of AI in EMS
In Myers’ telling, the slow phase is over. We are in the “all at once” phase now.
That observation led to one of the strongest cautions of the session: open-access AI and operational EMS work should not mix. If agencies do not already have policies prohibiting staff from dropping work-related material, narratives or patient-related details into public-facing AI tools, then those policies need to be written immediately. Myers was especially clear on the risk of protected health information slipping into systems that are not secured within an enterprise environment.
This was not fearmongering. It was common sense.
Healthcare data is among the most valuable forms of data in the world, and EMS sits on a mountain of it. If AI is going to be used, it must be used inside secure, governed systems — not via casual experimentation by well-meaning staff trying to save a few minutes on a chart.
From there, Myers moved to a topic that often gets overlooked in EMS AI conversations: education.
This was one of the more thoughtful parts of the presentation. He argued that AI is about to reshape not only how EMS agencies operate, but also how future clinicians will be educated before they ever enter the workforce. Students will increasingly learn in an environment where individualized tutoring, adaptive prompts and conversational guidance from AI are normal. They may think in prompts before they think in paragraphs.
That does not automatically make them weaker learners. In fact, Myers suggested it could make education more tailored and more effective. Traditional classrooms have always struggled with the fact that students arrive with different backgrounds, different abilities and different learning speeds. AI offers a way to meet learners where they are.
But he was not naive about the tradeoffs. He also warned that EMS must be careful not to lose the camaraderie, peer networks and shared identity that come from learning together. The future may bring more individualized instruction, but EMS remains a profession built on teamwork, socialization and trust.
That balance between innovation and humanity echoed throughout the session.
Operationally, Myers sees major opportunities in reducing what he repeatedly described as tedium. Inventory planning, quality reporting, billing support, workflow automation and documentation capture are all obvious areas where AI can help. The point, he stressed, is not to remove human judgment from EMS. It is to remove repetitive administrative burden so people can focus on patients, clinical quality and system performance.
That is a message likely to resonate with anyone who has ever spent more time hunting for the right documentation field than actually thinking about the patient.
He also pointed toward what comes next: AI-assisted triage support, improved secondary PSAP functions, smarter emergency department destination planning and more predictive data-sharing with hospitals. With CMS set to publicly report several emergency department throughput metrics, hospitals will have an increasing interest in any data EMS can provide earlier and more intelligently to speed assessment and disposition. In that environment, AI becomes not just a gadget but a strategic operational tool.
And then came the more difficult question: workforce.
Myers did not sensationalize the issue, but he did not dodge it either. If autonomous transport, agentic systems and AI-driven workflows reduce the need for certain repetitive human tasks, EMS leaders must start thinking now about what that means for staffing, role design and ethical stewardship of the workforce. He was careful not to claim that mass displacement is imminent. But he was equally clear that pretending the question does not exist would be a mistake.
His answer was not panic. It was planning.
A practical roadmap for EMS AI governance
The session concluded with a practical checklist, and in many ways that was the best part. Myers’ recommendation was not that every agency buy an AI product tomorrow. It was that every agency establish the structure to make informed decisions.
His list was straightforward:
↪️Create a local AI governance committee
↪️Tighten security policies
↪️Educate the workforce
↪️Align with educational institutions and healthcare partners
Start with governance, because once a multidisciplinary group exists to evaluate opportunities and risks, most of the downstream questions become easier to manage.
That is good advice not only for AI, but for leadership more broadly.
At this inaugural NEMSMA conference, a meeting designed to push EMS leaders beyond day-to-day management and into long-range strategic thinking, Myers’ session hit exactly the right note. It did not promise magic. It did not sell fear. It simply laid out the reality that AI is already affecting EMS education, operations and decision-making, and that responsible leaders need to engage with it now.
The technology is moving. The question, as Myers framed it, is whether EMS will govern it with purpose or let it arrive by default.
| MORE: Hyper-turbulent times: EMS economics and AI guardrails with Matt Zavadsky and Dr. Shannon Gollnick