Before we get to EMS, let’s talk sports. It doesn’t matter which one.
What I like best about sports is there’s always an outcome. We can debate strengths, weaknesses, trends and intangibles, but games end with each of us being right or wrong. Final scores have no gray areas.
EMS outcomes aren’t always as clear. Our interventions can be “not wrong” without necessarily being right. Consider the patient suffering from an esophageal spasm: a caregiver who suspects myocardial ischemia and administers nitroglycerine might resolve the presenting problem without recognizing the etiology. Is that a “win”?
What about the paramedic who doesn’t know why a hemodynamically stable patient presents with altered mental status, but hustles the patient to a hospital while monitoring vitals? If the patient doesn’t get any worse, was the medic right? What if the AMS was treatable within the provider’s scope of practice?
To evaluate performance in an environment as complex and equivocal as EMS, we need a system that is both flexible and definitive: one that can highlight blatant treatment errors in some situations while acknowledging uncertainty in others.
Such tools exist — scoreboards of a sort that let us know how we did in the field. However, managing these sophisticated systems in an industry burdened by unclear priorities and disparate data is a challenge not yet met by the majority of 9-1-1 agencies.
Field-level facts and figures
EMS may not be a statistician’s dream like baseball or football, but there are still lots of ways to measure what we do prehospitally. The question is, what results matter most — not to agencies, but to patients?
Consider response times. Arriving on scene as quickly as possible is a notion that still resonates with the public, even if caregivers know minutes saved en route often do more to elevate risks to responders than to improve patient outcomes. Besides, response times alone don’t tell us anything about how effectively we treat people.
What about skill-oriented statistics like successful IVs per attempt? It would be hard to argue against the benefits of having a medication route established sooner rather than later, but neither speed nor the skill itself necessarily leads to healthier patients.
Comparisons between indicated and administered therapeutics are becoming popular. Questions like “Did the normotensive cardiac patient with crushing chest pain get nitro?” certainly seem worth asking, but the answer would leave a big gap in our evaluation of the crew’s performance, unless we also knew whether there were contraindications to the nitro. Even then, following a protocol doesn’t mean the protocol itself was appropriate.
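To make that gap concrete, here's a minimal sketch of what a compliance-only check looks like. The record fields, the rule and the contraindication list are all hypothetical, invented for illustration rather than drawn from any real protocol engine:

```python
# Hypothetical patient-care record; every field name here is illustrative.
record = {
    "complaint": "crushing chest pain",
    "systolic_bp": 138,               # normotensive
    "meds_given": ["aspirin"],        # no nitro administered
    "meds_last_24h": ["sildenafil"],  # a classic nitro contraindication
}

def nitro_indicated(rec):
    """Naive compliance rule: chest pain plus adequate pressure means nitro."""
    return "chest pain" in rec["complaint"] and rec["systolic_bp"] >= 100

def nitro_contraindicated(rec):
    """The check a simple compliance report usually can't make."""
    return "sildenafil" in rec.get("meds_last_24h", [])

if nitro_indicated(record) and "nitroglycerin" not in record["meds_given"]:
    # A compliance-only report would flag this call as a protocol miss...
    flag = not nitro_contraindicated(record)
    # ...but once the contraindication is known, withholding nitro was right.
    print("flag as protocol miss:", flag)
```

Without the last-24-hours medication history, the report flags a crew that made the correct call; with it, the flag disappears, yet neither version tells us whether the protocol itself was sound.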
To better assess prehospital care, we have to look beyond response times, procedural proficiency and compliance with protocols — quantifiers that are occasionally useful, but not definitive enough to tell us how effectively we’re managing our patients.
Defining quality in EMS
Quality is a key attribute of products and services. It would be hard to find a large company in any industry that doesn't advocate quality improvement. Businesses that succeed in optimizing quality while controlling costs are often more profitable.
In EMS, quality improvement is a popular catchphrase, but not always a well-defined process. To upgrade the quality of prehospital care — our flagship service — we’d have to develop benchmarks that quantify the current state of care, set goals for improvement and track progress against those goals. Most of us are still stuck on the first step.
If neither time nor money were issues, what would we want to know about quality of care? Here’s my short list:
1. The appropriateness of field interventions, by provider.
2. Etiologies presenting the greatest challenges to caregivers and agencies.
3. Individual and system-wide performance trends.
Items 2 and 3 depend on 1, so an even shorter list would simply be knowing the suitability of care in the field. To get there, we need standards to measure prehospital treatment against. Hospital feedback is the yardstick favored by two robust quality-improvement tools.
Manual quality-of-care measurement
The Prehospital Evaluation Technique, known as PET, is an intensive, quantitative process that compares EMS care to ED diagnoses, after assigning predefined codes to each. For example, a patient treated en route for a COPD exacerbation, but diagnosed at the hospital with acute pulmonary edema, would yield a pair of codes that don’t match. Quality of care is expressed as a percentage of matches and can be summarized by provider, agency, region or any other subset of calls.
The matching process is subtly complex. Cases involving multiple presenting problems or nonspecific hospital diagnoses like “syncope” or “respiratory distress” aren’t distinct enough for discrete codes to be assigned. PET recognizes that and doesn’t try to grade such ambiguities.
If the patient in the above example had both a COPD and CHF history and was treated prehospitally for the former so effectively that only signs of the latter were evident at the ED, that “mismatch” wouldn’t count against the crew. PET also ignores patients who present either in the field or at the hospital with conditions outside caregivers’ scope of practice.
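PET itself is graded by hand, but the arithmetic behind its score is simple enough to sketch. In the fragment below, the code values, field names and ungraded-case convention are all invented for illustration; PET's actual code tables and matching algorithm aren't reproduced here:

```python
# Each case pairs a prehospital treatment code with an ED diagnosis code.
# None marks a case PET declines to grade: multiple presenting problems,
# nonspecific diagnoses or conditions outside the EMS scope of practice.
cases = [
    {"medic": "A", "ems_code": "COPD", "ed_code": "COPD"},  # match
    {"medic": "A", "ems_code": "COPD", "ed_code": "APE"},   # mismatch
    {"medic": "B", "ems_code": "ACS",  "ed_code": "ACS"},   # match
    {"medic": "B", "ems_code": None,   "ed_code": None},    # ambiguous: ungraded
]

def match_rate(subset):
    """Percent of gradable cases whose prehospital and ED codes agree."""
    graded = [c for c in subset if c["ems_code"] and c["ed_code"]]
    if not graded:
        return None
    hits = sum(c["ems_code"] == c["ed_code"] for c in graded)
    return 100.0 * hits / len(graded)

# Summarize by any subset of calls: here, by provider.
for medic in sorted({c["medic"] for c in cases}):
    print(medic, match_rate([c for c in cases if c["medic"] == medic]))
```

The same function run over calls grouped by agency or region yields the system-level summaries PET reports.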
PET is a process, not a product. It’s free to use. There’s a procedure for initializing PET and an algorithm driving assignment of codes. Administrative support is needed to determine matches, but for most PET users, the biggest issue is obtaining detailed hospital feedback. A Michigan company, Medmerge Solutions, has found a way to solve that problem while automating outcome-based evaluation of EMS care.
Automated quality-of-care measurement
Foundations, hosted software developed by Medmerge, compares prehospital care to patient-specific results culled from participating hospitals.
“There are two different reporting mechanisms within the software,” says Medmerge managing partner and paramedic Glenn Garwood. “One is a physician note, which includes ED diagnoses; the other is messages sent from within the hospital further explaining clinical outcomes.”
The result is real-time commentary to caregivers concerning the accuracy of their diagnoses and the appropriateness of their therapeutics. Unlike PET, Foundations provides most of that feedback qualitatively rather than quantitatively — an approach that permits a deeper understanding of each case.
“With Foundations, EMS providers get spoon-fed plain-English feedback about their patients from the receiving hospitals,” says Medmerge co-founder Dr. Dave Bauer. “I don’t know of any other system that affords that kind of follow-up.”
Foundations prices vary according to agency specifications, but licensing fees average $130 to $200 per month. Implementation generally takes four to five months. “It helps to have a champion within your organization who’s interested in reviewing outcomes,” adds Bauer.
Making quality count
We can use outcome-based tools like PET and Foundations to assess field care, but how do we interpret the results? There are no universal guidelines for proportions of matches or favorable remarks among cases. The best we can do is compare today’s data to last week’s, last month’s and last year’s.
Are fewer STEMIs being missed? Are more strokes being identified? Are we recognizing and treating presenting problems within our scopes of practice more consistently than yesterday? To answer those and other results-oriented questions, we have to begin with agency-level benchmarks based on internal comparisons — the performance of one paramedic versus another, for example, rather than compliance with nonexistent statewide or nationwide criteria.
Is that a problem? I don’t think so. After two decades of doing the same with cardiac-arrest outcomes, we acknowledge, at least, that a three percent survival-to-discharge rate is unacceptable anywhere. As new data is collected, scrutinized and shared, more meaningful resuscitation targets for cardiac arrests with various presentations and etiologies are emerging.
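Mechanically, that kind of internal benchmarking is little more than the match-rate arithmetic grouped by period. A minimal sketch, with invented monthly figures standing in for real agency data:

```python
# Hypothetical monthly match rates (percent) for one agency; values invented.
history = {
    "2023-01": 78.2,
    "2023-02": 79.5,
    "2023-03": 83.1,
}

# Internal benchmark: is each period improving on the one before it?
months = sorted(history)
for prev, curr in zip(months, months[1:]):
    delta = history[curr] - history[prev]
    trend = "up" if delta > 0 else "down" if delta < 0 else "flat"
    print(f"{curr}: {history[curr]:.1f}% ({trend} {abs(delta):.1f} pts vs {prev})")
```

The same comparison run per paramedic, rather than per agency, gives the provider-versus-provider view described above.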
The next step is to achieve a critical mass of participation in outcome-based evaluation of prehospital cases other than cardiac arrests. The tools exist; the commitment is still needed.