
Quality Improvement & Patient Safety Section Newsletter - September 2006, Vol 7, #4


  • Join Us For the Annual Section Meeting in New Orleans!
  • From the Chair
  • A New Model for Patient Safety and Quality in Emergency Medicine
  • Core Measure Update for STEMI: July 06
  • Chief Complaint Based Quality Indicators
  • The Emergency Medicine Quality and Patient Safety Course
  • Institute of Medicine Report: The Future of Emergency Care
  • Quality and Safety Articles





Join Us For the Annual Section Meeting in New Orleans!

The Quality Improvement and Patient Safety Section will have its annual meeting on Monday, October 16, in conjunction with Scientific Assembly 2006 in New Orleans.

The meeting is scheduled for 10:30 am to 12:00 noon in room 399 of the Convention Center. Be sure to check the schedules on site, as meeting times and location are subject to change.

In addition to the business meeting, this year's agenda includes the election of new section officers and educational presentations by the following scheduled speakers:

Jack Kelly, DO, FACEP
Cherri D. Hobgood, MD, FACEP
Robert L. Wears, MD, FACEP
David P. John, MD, FACEP

The section is accepting nominations for Newsletter Editor and Chair-Elect, with the elections to be held at our section meeting. Please submit your interest to our new staff liaison, Angela Franklin, at qips.section@acep.org.

 




From the Chair

Helmut W. Meisl, MD, FACEP

This will be my final chair note, as my term ends this October. I have found the experience exciting, and I plan to continue contributing to the section as immediate past chair, just as I have benefited from past chairs David P. John, MD, FACEP, and David Meyers, MD, FACEP. The new chair, Jack Kelly, DO, FACEP, will continue to move the section forward in these exciting quality improvement times, with all the national emphasis on core measures, pay-for-performance, JCAHO, overcrowding, and more. My only personal regret is not becoming involved with the section much earlier in my career. I therefore encourage you to get involved; our group always needs help, and we are all volunteers. Elections for officers (chair-elect and secretary/newsletter editor) are coming up. Consider running for a position, either now or in the future. There is always a need for participation, be it newsletter articles, the education section grant, e-mail interactions, or any other ideas.

We hope to see many of you at our section meeting Monday, October 16, from 10:30 am to 12:00 noon at the Convention Center in New Orleans. We have planned an exciting session with interesting speakers. For your information, our section membership is slowly increasing, now standing at 216 members, and this is the fourth newsletter of this section year. We can be proud of this, as not many sections achieve that goal. I want to thank all who helped me and the section this past year, including the past chairs, our chair-elect Jack Kelly, our newsletter editor Dickson Cheung, our staff liaison Marilyn Bromley, and all others who provided support.

In my final chair note, I will take the liberty of reflecting on some of my ideas about quality and the people involved in it. First, what is quality? It is actually very difficult to define; as the saying goes, "We all know what it is when we see it, but cannot define it." One can equate it to perfection or even beauty. I remember reading this once, so to be fair I must give the reference (JAMA. 1991;266:1817-1823), in which the authors state that "quality is like beauty: it has a positive connotation, but denotes nothing measurable." There is a fair bit of truth to this, and we try to apply various working definitions to quality - for example, best practice guidelines - and attempt to measure it using outcomes, benchmarks, and statistics. Various articles and books have tried to define quality, but like beauty, quality is often in the eye of the beholder. We emergency physicians judge the treatment of an ill chest pain patient who needs a rapid decision about thrombolytic therapy differently than a cardiologist in the office judges a patient with chronic chest pain.

So on to the quality physician: what is he or she? Is it the person who is well read and knowledgeable about the latest medical information, but has poor nurse and patient interaction skills or receives multiple complaints? Is it the physician who is well liked by all, but whose clinical skills are marginal? Is it the physician who is clinically competent, but often shows up late for shifts? Obviously, the highest quality physician combines the best of all these attributes - clinical competence, knowledge, technical skills, personal interaction skills, and reliability - but no one can truly be a super-physician (though we should all try, and that is where the quality director comes in - more on this later). Physician quality requirements also vary with the setting: a teaching hospital may weigh academic strengths most heavily, while a community hospital may prize personal interaction skills.

Now on to the quality improvement director: what makes a high-quality director? The expertise goes well beyond the ability to review some charts at a monthly meeting. The main characteristic is patience and perseverance. The chart reviews, audits, complaints, and meetings are repetitive and seem never-ending, often with limited results; reviewing a two-foot-high pile of charts late at night is always a test. Attempting quality changes involves multiple interactions with nursing, administration, and medical staff, who often have other competing priorities.

Next is clinical competence. One cannot review charts or medical treatment, or speak up to one's peers and the medical staff, without being clinically competent, well read, and versed in the literature. One needs this credibility. Another aspect is educational skill - the ability to make quality improvement interesting and exciting through educational presentations and literature updates, even to the medical staff, for example at grand rounds. This includes staying abreast of the regulatory requirements of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Centers for Medicare and Medicaid Services (CMS), and other obligatory agencies.

Very important are political and negotiating skills, from acquiring a new device or medication for the ED, to deflecting complaints from patients, administration, or other physicians, to the usual defense of the ED and survival at hospital meetings. Part of this is salesmanship: selling the necessity of a government, JCAHO, or administration mandate to the other physicians. Compassion is important when a fellow physician faces a malpractice suit or is found to be impaired in some manner. A truly necessary component is devotion: how can one put up with the endless meetings, complaints, and audits without being truly passionate about improving the quality and competence of oneself and others?

There may be other attributes I have missed. Finally, how does one quantify the abilities of a quality improvement director? Generally, the best measure is whether we are still in the position from year to year.

We quality improvement directors do have an interesting and exciting life. Hope to see many of you in New Orleans!

 


 


A New Model for Patient Safety and Quality in Emergency Medicine

Julius Pham, MD
Dickson Cheung, MD, MBA

Recognizing a problem is only the first step in fixing it. In 1999, the Institute of Medicine's (IOM's) landmark publication, To Err Is Human, highlighted the dangers of our health care system.1 Unfortunately, change has proved more difficult, and progress slower, than expected. More recently, the IOM described the emergency department (ED) as a system "at the breaking point."2 ACEP's own Report Card calls the issues facing ED care a national crisis. Why is it so hard to advance the quality and safety of medicine in general, and of emergency medicine in particular? Transforming the nebulous pursuit of safety and quality into concrete aims is not easy. The core difficulty in making and tracking progress is the lack of effective quantifiable measures and of a conceptual framework for implementing change.

There is more to developing valid metrics than meets the eye. Current quality and safety parameters are vulnerable to methodological error. They are often expressed as rates, to indicate how fast an event or defect occurs in a population.3 To be a valid rate, the numerator (events or harms) and the denominator (population at risk) must be clearly defined, and a surveillance system must be in place to capture the events. If tracking every event in the numerator and denominator is not feasible, then a representative sample must suffice. Unfortunately, most current safety parameters do not fully appreciate the pitfalls of obtaining such a sample. For example, patient safety reporting systems (PSRSs), recommended by the IOM and recently signed into law, are a common method of identifying patient safety issues. Many users, however, interpret the information obtained from PSRSs as valid rates. In reality, PSRS counts of reported adverse events likely contain significant bias: caregivers report a non-random sample from an unknown probability distribution, so the magnitude and direction of the reporting bias are unknown, and the population at risk is unknown. Therefore, as measures of safety, PSRS rates, trends in those rates, and benchmarks built on this information are likely biased and should be interpreted with caution.

One reason it has been particularly difficult to advance quality and safety efforts in emergency medicine is the lack of ED-specific metrics. The bulk of current measures focus on hospital-wide conditions (acute myocardial infarction, congestive heart failure, community-acquired pneumonia, specific surgeries, etc). These measures are not good indicators of how well EDs are functioning. The emergency medicine (EM) safety community must develop measures that are valid, feasible to attain, and unique to EM. In this newsletter, we propose a framework, based on work in the intensive care unit, for measuring safety - one that lets us know whether we are any safer today than we were yesterday.

Donabedian developed a model for measuring quality nearly half a century ago that can also be used to provide an outline for measuring safety.4 The main components of his model include structure (is the environment safe?), process (is safe care being delivered?), and outcome (are patients being harmed?). While most current measures of quality focus only on outcomes, the steps to reaching those goals stem from the structure and process in which care is delivered. Our model focuses on four key questions to address the issue of whether we are "any safer today than yesterday?"

 

  • How often do we harm patients? (Outcome)
  • How often do we do what we should do? (Process)
  • How often do we learn from defects? (Process)
  • Have we created a safe culture? (Structure)

How often do we harm patients?
Although there are a number of ways caregivers in EM can harm patients, the most obvious is misdiagnosis. A true misdiagnosis rate would be an excellent safety metric, but it is difficult to measure: the numerator is subject to reporting bias and misclassification bias, and the denominator is inaccurate because the population at risk is largely unknown.

To capture the essence of a misdiagnosis metric, we suggest using the rate of patients returning to the emergency department within seventy-two hours with a diagnosis of myocardial infarction, pulmonary embolism, or aortic dissection, or with death or the need for intensive care unit admission. The current rate of these bundled defects, according to a national sample, is three patients per 10,000 visits.5 One limitation of this measure is that not all patients with missed diagnoses will return to the same ED, which leads to an underestimate of harm. A second limitation is that not every return visit to the ED is indicative of patient harm. The advantages of this measure are that it is easily collected and can be affected by the treating physician.
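As a rough sketch of how such a bounce-back rate could be computed from a visit log (the record layout, diagnosis labels, and function name here are illustrative assumptions, not part of the proposed measure):

```python
from datetime import datetime, timedelta

# Sentinel diagnoses/outcomes from the proposed bundled-defect measure.
SENTINEL = {"myocardial infarction", "pulmonary embolism",
            "aortic dissection", "death", "icu admission"}

def bounce_back_rate(visits, window_hours=72):
    """Rate per 10,000 ED visits of return visits within the window
    that carry one of the sentinel diagnoses.

    visits: iterable of (patient_id, arrival_datetime, diagnosis).
    """
    ordered = sorted(visits, key=lambda v: v[1])
    last_seen = {}  # patient_id -> arrival time of most recent prior visit
    defects = 0
    for pid, arrival, dx in ordered:
        prior = last_seen.get(pid)
        if (prior is not None
                and arrival - prior <= timedelta(hours=window_hours)
                and dx in SENTINEL):
            defects += 1
        last_seen[pid] = arrival
    return 10000 * defects / len(ordered)
```

Note that the underestimation caveat from the text applies directly here: a return visit to a different hospital's ED never appears in this log, so the computed rate is a floor, not a true rate.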

A second measure of harm is the proportion of patients who present to the emergency department but leave prior to evaluation because of long wait times.6,7 As many as 11% of patients who leave without being seen ultimately require hospitalization within the week.8 This rate thus becomes a surrogate marker for patients who are in harm's way. The advantage of this measure is that most hospitals already track this statistic as a measure of operational efficiency.

A frequently overlooked mode of harm is leaving our patients in unnecessary pain. Although medical practitioners have historically downplayed the morbidity of pain, most patients would beg to differ. Every time we do not address a patient's pain adequately, we harm them. The measure we propose is the proportion of patients with long bone fractures who do not receive pain medications. We purposely limited the study population to patients with an indisputable need for analgesia, and did not specify which route or how much pain medication was required.

How often do we do what we should do?
A common approach to measuring quality is evaluating how often we do what we should do. This approach has several advantages. First, it does not depend on the many other factors that may affect ultimate patient safety or quality, such as patient co-morbidities, severity of the current illness, patient preferences, other treatments, and chance occurrences. Second, adverse events may occur infrequently even when the care provided is unsafe or of poor quality. Measuring how often we do what we should do gives us many opportunities (each patient encounter) to evaluate the safety of care.

Sepsis is a common and life-threatening condition presenting to the ED. Evidence suggests that care provided during the short ED stay can improve long-term (30-day) mortality, and several professional organizations have made evidence-based recommendations for managing these patients in the ED. We propose four measures that balance validity as measures of quality against feasibility of data collection: 1) the proportion of patients meeting sepsis criteria who receive blood cultures, a lactate level, urinalysis, urine culture, chest x-ray (CXR), and EKG; 2) the proportion of patients with a lactate >4 or hypotension who receive an adequate fluid bolus; 3) the proportion who receive vasopressors if not responsive to the initial fluid bolus; and 4) the proportion of septic patients who receive antibiotics within 3 hours.
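A minimal sketch of how these four measures might be audited for a single encounter from chart-abstracted fields; every field name and the WORKUP set below are hypothetical conventions for illustration, not an existing data standard:

```python
# Work-up elements from the first proposed sepsis measure.
WORKUP = {"blood cultures", "lactate", "urinalysis",
          "urine culture", "cxr", "ekg"}

def sepsis_measures(pt):
    """Evaluate one ED sepsis encounter against the four proposed measures.

    Returns True/False per measure, or None where a measure
    does not apply to this patient.
    """
    needs_fluids = pt["lactate"] > 4 or pt["hypotensive"]
    return {
        "workup_complete": WORKUP <= pt["tests_done"],  # set-subset check
        "adequate_fluid_bolus": pt["fluid_bolus_given"] if needs_fluids else None,
        "vasopressors_started": (pt["vasopressors_given"]
                                 if needs_fluids and not pt["responded_to_fluids"]
                                 else None),
        "antibiotics_within_3h": pt["abx_delay_hours"] <= 3,
    }
```

Aggregating the True/False results across all sepsis encounters yields the compliance proportions described above; the None results keep inapplicable measures out of the denominator.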

How often do we learn from defects?
Defects in care are sentinel events that "you would not want to happen again." Fortunately, defects are rare events. Because of the infrequency of these occurrences, measuring defects as a rate is problematic. Chance occurrence or patient factors are likely to contribute to the defect rate. However, the defect itself is informative. An alternative approach is to monitor how often we learn from mistakes rather than the rate of mistakes.

We have developed an algorithm for evaluating defects in medical care. This algorithm facilitates a systematic approach to determining the causes and contributing factors of defects. After examining the causes of a defect, how do we know whether we are any less likely to repeat the event, or whether we have learned from it? There are three ways of measuring how well we have learned from our errors. First, we can measure the presence of a policy or procedure change to prevent the event from recurring. Second, we can measure whether, and how well, the staff are aware of such a policy. Finally, we can measure whether the policy/procedure was actually implemented.

For example, a defect might be identified from a patient who died from a myocardial infarction while awaiting treatment in the waiting room. Using the tool to systematically evaluate the event, the ED determines that a delay in the diagnosis of myocardial infarction contributed to the defect. To examine how likely we are to prevent this error from occurring again or how well we learned from this defect, we could measure one of three things:

  1. Was a written policy/procedure created to prevent this event from recurring?
    All patients presenting with chest pain will have an EKG within 10 minutes.
  2. How well aware are the staff of such a policy/procedure?
    What percent of the surveyed staff know that there is an EKG policy for patients presenting with chest pain?
  3. How well is this policy/procedure actually implemented?
    What percent of chest pain patients actually receive an EKG within 10 minutes?

It is not clear which of these three measurements is the best approach. With each measurement type there are costs and benefits that must be taken into consideration. The closer the measurement is to the health care provider, the more valid the information. However, provider-level data is costly to obtain. The best measurement is probably specific to each situation.
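The third measurement type reduces to a simple proportion over eligible encounters. A minimal sketch, assuming each encounter is recorded as a chief complaint plus a door-to-EKG interval in minutes (None if no EKG was performed); the names and record layout are illustrative assumptions:

```python
def door_to_ekg_compliance(encounters, target_min=10):
    """Percent of chest-pain encounters whose EKG was done within target_min.

    encounters: list of (chief_complaint, door_to_ekg_minutes or None).
    Returns None if there were no eligible encounters.
    """
    eligible = [mins for complaint, mins in encounters
                if complaint == "chest pain"]
    if not eligible:
        return None
    met = sum(1 for mins in eligible if mins is not None and mins <= target_min)
    return 100.0 * met / len(eligible)
```

The first two measurement types (policy existence, staff awareness) would instead come from document review and staff surveys rather than encounter data.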

Do we have a culture of safety?
Our final measure of safety is the culture of the unit. Other industries and patient safety efforts have shown that the context in which care is delivered, often referred to as safety culture, has an important influence on patient outcomes. Safety culture is important, measurable, and improvable; it reflects how caregivers communicate and interact, and failures in communication are the most common factor contributing to sentinel events and errors. Validated tools to measure safety culture, such as the rigorous and commonly used Safety Attitudes Questionnaire (SAQ), evaluate caregivers' perceptions of teamwork and safety climate. Higher safety culture scores are associated with lower rates of nurse turnover, catheter-related blood stream infections (CRBSI), decubitus ulcers, and in-hospital mortality. By measuring safety culture, providers can obtain a valid and feasible assessment of the teamwork and safety climate in their clinical area. Scores on the SAQ are presented as the percent of staff in a patient care area who report a positive safety and teamwork climate, or as the percent of patient care areas within a department or hospital in which at least 80% of staff report a positive safety and teamwork climate. Our goal is for 80% of staff in all units to report positive safety and teamwork climates.
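The two SAQ roll-ups described above can be sketched as follows. Treating an individual climate score of 75 or higher (on a 0-100 scale) as a "positive" response is an assumption for this sketch, as is the data layout:

```python
def climate_summary(units, positive_cutoff=75, goal_pct=80.0):
    """Summarize safety-climate survey results two ways.

    units: {unit_name: list of per-caregiver climate scores, 0-100}.
    Returns (percent of staff positive per unit,
             percent of units meeting the goal threshold).
    """
    pct_positive = {
        unit: 100.0 * sum(s >= positive_cutoff for s in scores) / len(scores)
        for unit, scores in units.items()
    }
    at_goal = sum(p >= goal_pct for p in pct_positive.values())
    return pct_positive, 100.0 * at_goal / len(units)
```

The second return value is the department- or hospital-level figure: the fraction of patient care areas where at least 80% of staff report a positive climate.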

Our Experience with this Model in the ICU
We have applied this approach to nearly 200 intensive care units (ICUs) in Michigan, New Jersey, and Rhode Island, and at the Johns Hopkins Hospital. To evaluate how often we harm, we measured catheter-related blood stream infections (CRBSI). To evaluate how often we do what we should do, we assessed compliance with an evidence-based ventilator bundle. To evaluate whether we learned from defects, we tracked the number of months in which each ICU completed the "learning from defects" form. To evaluate safety culture, we measured the percent of ICUs in which at least 80% of staff reported positive safety and teamwork climate. The results of the scorecard for three ICUs at the Johns Hopkins Hospital are presented in Table 4. Using directed interventions, we have seen quantitative improvements in the safety of these health systems.

Limitations of the Model
We recognize that there are limitations to our model and that it is a "work in progress." First, this model has not been tested in the ED; efforts are currently underway to implement this approach across several EDs in Michigan and at the Johns Hopkins Hospital. Second, there is limited scientific data to support the validity of these measures in the ED. These measures have been successful in the ICU, but only with implementation in the ED can we evaluate their true value. Despite these limitations, these measures give us a quantitative method of answering the question, "Are we providing safer care?"

Conclusion
Quality and safety is a young, emerging science in the ED. There is much to be learned, and we lack the scientific evidence to make policy decisions with certainty. Instead of being paralyzed by indecision, however, we have proposed a model and an approach to measuring quality and safety. We hope this model serves as a starting point for improving safety, honest discussion, alternative models, and future research. Our goal is to begin moving the science of ED quality and safety forward.

Table 1: Framework of a Score Card for Patient Safety and Quality

Domain: How often do we harm patients? (Outcome)
Definition: What percent of patients in the ED have we harmed?
Examples from the ED:
  • Misdiagnosis
  • Patients who left without being seen
  • Patients not treated for their pain

Domain: How often do we do what we should do? (Process)
Definition: Using nationally validated process measures, what percent of patients receive evidence-based interventions?
Examples from the ED:
  • Sepsis bundle by Rivers
  • Chief-complaint-based quality indicators

Domain: How often do we learn from defects? (Process)
Definition: Evidence that we are learning from our mistakes
Examples from the ED:
  • Policy regarding central lines
  • EKG within 10 minutes for chest pain

Domain: How well have we created a culture of safety? (Structure)
Definition: Annual assessment of safety culture at the unit level
Examples from the ED:
  • Survey of safety/teamwork climate
  1. Institute of Medicine: To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press, 1999.
  2. Institute of Medicine: Hospital-Based Emergency Care: At the Breaking Point. Washington, DC: National Academy Press, 2006.
  3. Most rates in patient safety are actually proportions, or the fraction of the population affected with no measure of time included. In this paper, we will use the term rate to refer to either a rate or a proportion.
  4. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44:166-206.
  5. National Hospital Ambulatory Medical Care Survey: 2004 Emergency Department Summary released June 23, 2006.
  6. Stock LM, Bradley GE, Lewis RJ, et al. Ann Emerg Med. Feb 1994;23(2):294-8.
  7. Fernandes CM, Daya MR, Barry S, et al. Emergency department patients who leave without seeing a physician: the Toronto Hospital experience. Ann Emerg Med. Dec 1994;24(6):1092-6.
  8. Baker DW, Stevens CD, Brook RH. Patients who leave a public hospital emergency department without being seen by a physician. Causes and consequences. JAMA. Aug 1991;266 (8):1085-1090.


Core Measure Update for STEMI: July 06

Jack Kelly, DO, FACEP

CMS recently "moved the cheese" again! In case you didn't know, if a STEMI patient experiences a delay to fibrinolysis, you must provide "acceptable documentation of reasons for a delay." The CMS Chart Abstractor Rules read: "Is there a reason for a delay in initiating fibrinolytic therapy after hospital arrival?" and "If reasons are not mentioned in the context of fibrinolytic therapy or reperfusion timing, do not make inferences" - in other words, the chart abstractor will scrutinize the ED documentation!

These are acceptable examples that must be written when there is an ED delay to fibrinolytics (quotes are from CMS, implying that it must be written this way only):

  1. "Patient initially refused lytic therapy."
  2. "Hold on fibrinolytics. Will do CAT scan to r/o bleed."
  3. "Patient waiting for family to arrive-wishes to consult with them re: thrombolysis."
  4. "Patient presented to ER in full cardiac arrest. ACLS protocol instituted. Unable to begin thrombolytics until patient stable."
  5. "Patient wants to speak with clergy before starting lytics. Clergy paged."
  6. "No urgent need for fibrinolytics."
  7. "Will consult with cardiology to see if fibrinolytics is warranted at this time."
  8. "Lytic Therapy not indicated." (fibrinolytic therapy not started earlier because physician initially felt it was not indicated)
  9. "Lytic Therapy held: Comfort Measures Only."

These are documentation examples by CMS that are not acceptable:

  1. "Patient wanting to talk with wife before further medical intervention." (Not specific enough - no mention of reperfusion/fibrinolytic therapy.)
  2. "ST-elevation on initial EKG resolved. Chest pain now recurring. Begin lytics." (Requires clinical judgment - physician did not make linkage to delay in fibrinolysis).
  3. "Patient presented to ED with non-cardiac symptoms. AMI confirmed later that morning. Fibrinolytic therapy was started." (Requires clinical judgment - physician did not make linkage to delay in fibrinolysis).

For STEMI cases delayed in going to the cath lab, the emergency physician must document similar "CMS phraseology" using the term PCI, as shown in these examples:

  1. "Patient initially refused PCI."
  2. "Hold PCI. Will do TEE to r/o aortic dissection."
  3. "Hold PCI: Comfort Measures Only"---this is really important! "DNR" is not good enough; the term "Comfort Measures Only" best describes just what the patient wants at end of life.
  4. "Patient wishes to consult family before PCI."
  5. "Patient presented to ER in full cardiac arrest, unable to do PCI until stable."
  6. "Will consult with Cardiology to see if PCI warranted."

Finally, make sure there is legible documentation of ASA and Beta-Blocker given. If ASA was given by EMS, document this. If the patient cannot get ASA, document "ASA held due to..." Similarly, in Beta-Blocker documentation: "Beta-Blocker not ordered due to hypotension (<90/x); Bradycardia (<60); Heart Block;" or "Comfort Measures Only."

I hope this helps you with the latest challenges in Core Measures quality. Placing these "CMS-approved phrases" into your EMR will really help the emergency physician use the proper chart abstractor terminology. If you are still using paper, there will be a learning curve that takes time. The sooner you implement these changes, the easier it will be for those "tough STEMI cases" that did not get timely thrombolytics or PCI to be excluded by the chart abstractor.



Chief Complaint Based Quality Indicators

Dickson Cheung, MD
Syncope Subcommittee

Under the leadership of David John, MD, FACEP, the QIPS Section obtained a section grant to develop Chief Complaint Based Quality Indicators (CCBQI). The goal was to develop quality indicators relevant to emergency medicine that could be used in a wide variety of emergency departments. Syncope, shortness of breath, altered mental status, chest pain, and abdominal pain were selected, as these are frequent presentations and high-risk areas. Missing a diagnosis in these patients could be serious and carries significant liability risk.

Five groups of people from our section spent considerable time on conference calls discussing, crafting, and revising these indicators. It was not an easy task. It is much more straightforward to devise a measure for a specific diagnosis - such as aspirin for myocardial infarction - but very difficult to devise a measure for the broad area of chest pain. The result was a balance between the very basic, such as pulse oximetry on all shortness-of-breath (SOB) patients, and the overly complex, such as the ordering of CAT scans for abdominal pain.

As an example, the syncope indicators appear below. Space does not permit publishing all of the indicators in one newsletter; contact me if you would like the others. They are meant for your information and to assist you in your QI projects. They represent the consensus opinion of our group and can be revised to suit your needs. They are not meant to be an ACEP or other organizational standard or requirement.

Diagnoses associated with Syncope:
Abdominal aortic aneurysm (AAA)
Anemia
Arrhythmia
Bradycardia
Dehydration
Drug overdose
Hypoglycemia
Hypotension
Hypovolemia
Hypoxia
Medication reaction (alpha blockers)
Micturition syncope
Pulmonary embolus (PE)
Vasovagal syncope

High-risk or fatal diagnoses likely to be missed:
Arrhythmia
AAA
PE

Exclusions that apply to every syncope quality indicator:

  • Known cause for syncope (hypoglycemia, hypotension, dehydration, etc.)
  • Seizure disorder, narcolepsy, vasovagal causes apparent

Quality Table (view pdf file)

References

  1. Crane SD. Risk stratification of patients with syncope in an accident and emergency department. Emerg Med J. 2002;19:23-27.
  2. Linzer M, Yang EH, Estes M, et al. Diagnosing syncope. Part 1: Value of history, physical examination and electrocardiography. Clinical efficacy assessment project of the American College of Physicians. Ann Intern Med. 1997;126:989-996.
  3. Linzer M, Yang EH, Estes M, et al. Diagnosing syncope. Part 2: Unexplained syncope. Clinical efficacy assessment project of the American College of Physicians. Ann Intern Med. 1997;127:76-86.
  4. Kapoor W, Hanusa B. Is syncope a risk factor for poor outcomes? Comparison of patients with and without syncope. Am J Med. 1996;100:647-655.
  5. Sarasin FP, Louis-Simonet M, Carballo D. Prevalence of orthostatic hypotension among patients presenting with syncope in the ED. Am J Emerg Med. 2002;20(6):497-501.
  6. Atkins D, Hanusa B, Sefcik T, et al. Syncope and orthostatic hypotension. Am J Med. 1990;91:179-185.
  7. Kapoor WN, Karpf M, Wieland S. A prospective evaluation and follow-up of patients with syncope. N Engl J Med. 1983;309:197-204.
  8. Day SC, Cook EF, Funkenstein H, et al. Evaluation and outcome of emergency room patients with transient loss of consciousness. Am J Med. 1982;73:15-23.
  9. Eagle KA, Black HR. The impact of diagnostic tests in evaluating patients with syncope. Yale J Biol Med. 1983;56:1-8.
  10. Kapoor WN. Evaluation and outcome of patients with syncope. Medicine (Baltimore). 1990;69:160-175.
  11. Martin GJ, Adams SL, Martin HG, et al. Prospective evaluation of syncope. Ann Emerg Med. 1984;13:499-504.
  12. Ben-Chetrit E, Flugelman M, Eliakim M. Syncope: a retrospective study of 101 hospitalized patients. Isr J Med Sci. 1985;21:950-953.
  13. Martin T, Hanusa B, Kapoor W. Risk stratification of patients with syncope. Ann Emerg Med. 1997;29:459-466.
  14. Georgeson S, Linzer M, Griffith JL. Acute cardiac ischemia in patients with syncope: importance of the initial ECG. J Gen Intern Med. 1992;7(4):379-386.


The Emergency Medicine Quality and Patient Safety Course

Helmut W. Meisl, MD, FACEP
David P. John, MD, FACEP

I want to provide a brief summary of the latest section grant our section has received. The idea of a QI education course was discussed among us for some time, then further at the May 2006 ACEP Leadership Conference, and finally a grant application was again superbly written by David John. Our section can be proud to have had three successful grant applications approved over the last three years (the others being the Chief Complaint Quality Indicators and the Compilation of Tips for Patient Safety). Below is a summary of the grant, using some of the wording from the application.

Rationale
There are more than 4,000 emergency departments across the country, and each of them has quality initiatives. Unfortunately, no standard approach to quality in EM exists today. Many of the important successes in quality are never shared, and each of us reinvents the wheel from scratch. The Centers for Medicare and Medicaid Services (CMS) and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) have begun to establish quality standards in medicine. All of us working in quality review cases, collect data, develop systems improvements, and educate our staff, but we rarely share information with other institutions or at national meetings.

The QIPS Section is the single largest group in ACEP representing issues of quality in emergency medicine. Many of us lecture, write grants and book chapters, and represent the College in matters of quality of care. In spite of this, we rarely share the practical matters of how to do a case review, how to collect data, how to educate our staff, and so on. Our grant, in a nutshell, is to develop a practical, case-based, half-day course for physicians and nurses who work in EM quality. The course will provide the fundamentals of a successful quality program for both novices and experienced physicians and nurses.

Objectives

  1. To share information among those working in quality.
  2. To begin to standardize the approach to ED quality.
  3. To educate experienced quality directors and those new to the field.
  4. To develop an educational network among our peers.
  5. To develop leadership in quality.

Course Design
The course will consist of four one-hour lectures, taught by faculty from within the section. The course will be case-based, and the Grant Task Force will pool our quality reviews and choose those that best demonstrate our educational goals. This will not be a "cookbook" course, but rather will teach individuals to adapt their skills to any setting.

Lecture outline plan:

  1. The Case Review - How to break down a quality case, identify problem areas, categorize errors, and learn from the exercise in a non-punitive setting.
  2. Data Collection - Once you have reviewed the case what do you do with it? We will emphasize data collection, trend analysis, peer review, etc.
  3. Systems Improvement - Now that you have reviewed the cases and analyzed the data, how do you prevent future occurrences? Education and system design will be discussed with examples.
  4. Sharing Our Successes - The last hour will be spent with a panel presentation and discussion by experts in the field (all section members). Successful changes that have improved ED quality will be presented and discussed by the group. Basic elements of systems improvement will be reinforced with case examples.

A national venue would be ideal to reach a large audience. The plan is to attach the course to the ACEP Spring Congress, with the Sunday before the Leadership & Advocacy Conference as another possibility.

Ultimately, about 30-40 individuals will be involved in course design, with the course expected to be ready by Spring 2007.

Please contact your staff liaison, Angela Franklin, at qips.section@acep.org if you want to be part of this effort. It should be exciting.


Back to Top

Institute of Medicine Report: The Future of Emergency Care

Robert I. Broida, MD, FACEP

The Institute of Medicine (IOM) is a major player in US health care. Part of the National Academies, it provides advice to policymakers on health care matters. Its landmark 1999 report, To Err is Human: Building a Safer Health System, helped launch many of the patient safety initiatives that emergency physicians see every day.

In June, the IOM's Committee on the Future of Emergency Care released a trio of reports addressing the "crisis" in emergency care. The committee, which had been studying the issues since 2003, was charged to "examine the state of emergency care in the US, to create a vision for the future of emergency care, including trauma care, and to make recommendations to help the nation achieve that vision."

The 314-page IOM report confirms what emergency physicians already know, concluding that "the nation's emergency medical system as a whole is overburdened, underfunded, and highly fragmented." The report specifically addresses key issues such as ED overcrowding, ambulance diversion, and boarding and surge capacity, noting "ambulances are turned away from emergency departments once every minute on average and patients in many areas may wait hours or even days for a hospital bed. Moreover, the system is ill-prepared to handle surges from disasters such as hurricanes, terrorist attacks, or disease outbreaks."

In this brief review, I will address only the IOM ED report, Hospital-Based Emergency Care: At the Breaking Point. Specific discussion of the other two reports (pediatric EDs and EMS) will appear in future editions.

Highlights

  1. Overcrowding. Most EDs are operating near or at capacity. The US had a net loss of 703 hospitals between 1993 and 2003. Inpatient beds dropped by 198,000 (17%). During the same time, the population grew by 12%, hospital admissions increased by 13% and ED visits rose by 26% to 113.9 million in 2003. 91% of EDs report overcrowding as a problem. 40% report it daily, which stresses both providers and patients, potentially impacting quality of care and increasing medical errors. They specifically noted a July 2002 alert that tied 50 hospital deaths to treatment delays.

  2. Boarding. They cited ACEP's 2003 survey reporting that 73% of EDs experienced boarding of admitted patients on a typical Monday evening. It is not unusual for busy EDs to board patients more than 48 hours. This has effects similar to overcrowding, but also taints the patient's view of the hospital admission experience (ie, negative patient satisfaction).

  3. EMS Diversion. Almost 50% of all hospitals (70% of urban) reported using diversion status in 2004. In 2003, 501,000 ambulances were diverted, averaging 1 per minute.

  4. Left Without Being Seen. The report also cited a study showing that in 2003, 1.9 million patients left without being seen (1.7% of all ED patients vs. 1.1% in 1993). An additional 1% left AMA (before treatment was completed).

  5. Safety Net. "Hospital emergency departments are the provider of last resort for millions of patients who are uninsured or lack adequate access to care from community providers."

  6. Medical Necessity. 50.4% of visits were classified as emergent or urgent. 32.8% were classified as non-urgent (or semi-urgent). The IOM report discusses the issue of medical necessity for emergency care and correctly questions whether this should be "determined by the patient's signs and symptoms at the time of arrival" (page 34).

  7. Financial Class. Medicaid ED utilization was 81 visits per 100 persons in 2003, up from 65.4 in 2002. Medicare ED utilization was 52.4 visits per 100 enrollees in 2003. Self-pay was 41.4 visits per 100 persons. Private insurance was lowest at 21.5 visits per 100 persons.

  8. Reimbursement. The average combined physician/hospital ED charge was $943 in 2001. The average payment received was $492 (52%). The average charge increased by 49% since 1996, while the average payment only increased by 29%.

  9. Maldistribution. 21% of Americans live in rural areas, while only 12% of emergency physicians practice in rural settings (down from 15% in 1997). 67% of these physicians are neither EM residency trained nor board certified.

  10. Research. Emergency medicine is not well represented in the National Institutes of Health (NIH), and the funding reflects this political reality. NIH training grants to emergency medicine departments averaged $51.66 per graduating resident vs. over $5,000 per internal medicine resident in 2003 (and $12,500 per pathology resident).

Recommendations

  1. Improve Operational Efficiency. The IOM report specifically charges hospital CEOs with addressing patient flow problems. In order to facilitate this, they recommend that training in operations management be promoted by professional and accrediting organizations. They also specifically recommend that the JCAHO "reinstate strong standards that directly address ED crowding, boarding and diversion" (rather than the watered-down standards that were promulgated due to industry pressure).

  2. Increase use of Clinical Decision Units. The IOM recommends that the Centers for Medicare and Medicaid Services (CMS) "remove the current restrictions on the medical conditions that are eligible for separate CDU payment."

  3. Prohibit Boarding and Diversion. The IOM views boarding and diversion as "antithetical to quality medical care," recommending that hospitals end both practices except in the setting of a mass casualty event. They also call upon CMS to convene a working group to develop standards and guidelines for "enforcement of these standards."

  4. Information Technology. Recognizing that "emergency physicians are all too often deprived of critical patient information" and that EDs frequently operate with little operational data, the IOM recommends that hospitals adopt I/T solutions to enhance efficiency and improve safety and quality. They recommend dashboard systems that track and coordinate patient flow, communications systems that will link to records or providers, clinical decision-support tools, and documentation systems.

  5. Funding. IOM recommends dedicated funding to hospitals that provide "significant" amounts of uncompensated care, with an initial outlay of $50 million to be doled out by CMS based upon need.

  6. Disaster Preparedness. They also urge Congress to authorize large increases in disaster preparedness funding for FY 2007 to address issues such as improving the trauma care system, enhancing surge capacity and EMS response, designing evidence-based training programs, increasing the number of decontamination showers, negative pressure rooms, personal protective equipment, and ICU beds, and supporting further research. Training and certification organizations are encouraged to incorporate disaster preparedness into the training and certification process.

  7. On-Call Specialists. The IOM notes that "providing emergency call has become unattractive to many specialists in critical fields such as neurosurgery and orthopedics" due to availability, reimbursement and liability concerns. They recommend regionalizing critical on-call specialty services between hospitals as one solution to this problem.

  8. Malpractice Reform. Due to the "extraordinary exposure to medical malpractice claims," the IOM urges Congress to appoint a commission to examine the impact of medical malpractice lawsuits and recommends state and federal action to mitigate the adverse impact of these lawsuits.

  9. Rural EDs. With chronic workforce shortages, the IOM recommends that rural EDs link up with academic centers for professional consultation, telemedicine, referral and transport, and CME.

  10. Research. The IOM recommends increased funding and coordination between federal agencies for emergency medicine research, including training of new investigators, development of multi-center research networks, and a dedicated center or institute.

  11. Regionalization. The IOM recommends that all EDs and trauma centers be categorized to best direct critically ill and injured patients to appropriate facilities on a regional basis. They also urge the development of evidence-based model prehospital care protocols. In addition, they recommend that Congress fund an $88 million "demonstration program, administered by the Health Resources and Services Administration, to promote regionalized, coordinated, and accountable emergency care systems throughout the country."

  12. Accountability. The IOM urged the Department of Health and Human Services (HHS) to create an expert panel to "develop evidence-based indicators of emergency care system performance," including structure and process measures (such as the performance of individual providers) and outcome measures tracked over time. In order to address the patchwork of state and local emergency care systems, they recommend establishment of a national lead agency (within HHS) for emergency and trauma care by 2008.

Why it's important

  1. The IOM is solely interested in the public good, not the needs of various constituents.

  2. They are viewed as independent. Their views may mirror ours, but they are not tainted by personal interest.

  3. The IOM report is pushing JCAHO and CMS to adopt changes that will be beneficial to emergency medicine.

  4. The report links deficiencies in emergency care to decreased patient safety, which still has a lot of traction in health care.

  5. There is a lot of money involved: for research, for equipment and training, for uncompensated care, for technology, for regional systems, etc.

  6. They get a lot of things right:

    1. "When a hospital is full or its ancillary services are slow, ED crowding, inpatient boarding, and ambulance diversion are almost inevitable." (page 20)
    2. "boarder is a misnomer, because it implies that these patients require little care. In fact, ED boarders are often the sickest, most complex patients in the emergency department - which is why they require hospitalization." (page 30)
    3. "Increasingly, admitting physicians are insisting that EDs complete very detailed workups before they will admit a patient to the hospital." (page 37)
    4. "There is substantial evidence that reimbursement to safety net hospitals is inadequate to cover the costs of emergency and trauma care." (page 41)
    5. "While many of the factors contributing to ED overcrowding are outside the immediate control of the hospital, many more are the result of operational inefficiencies in the management of hospital patient flow." (page 103)

What you should do

  1. Become familiar with the content of the IOM report - especially those items which relate to your facility (ie, trauma, research, rural health, I/T, etc).

  2. Discuss this at your next ED committee meeting or with your CEO. Give them a copy of the 8-page Report Brief.

  3. Use the report to assist you in addressing persistent operational issues (ie, flow problems) with your hospital administration.

  4. Know the numbers: What do LWBSs cost at your hospital each month? How many tech hours would that pay for? How does your ED compare to the national averages?

  5. Write your Congressional Representative and ask them to review the IOM Report and support the sections of interest to you.

The IOM report is an excellent resource document and may well lay the groundwork for significant changes in the emergency care landscape for years to come. As an independent body, the IOM lends key support to many of the issues that have plagued our specialty for years.


Back to Top

Quality and Safety Articles

Helmut Meisl, MD, FACEP

Here is a further list of recent articles that may interest you. These are compiled by AHRQ PSNet (http://psnet.ahrq.gov/).

Simpson KR, James DC, Knox GE. Nurse-physician communication during labor and birth: implications for patient safety. J Obstet Gynecol Neonatal Nurs. 2006;35:547-556.

Hicks RW, Becker SC, Cousins DD. Harmful medication errors in children: a 5-year analysis of data from the USP's MEDMARX(R) program. J Pediatr Nurs. 2006;21:290-298.

Flores G. Language barriers to health care in the United States. N Engl J Med. 2006;355:229-231.

Makeham MA, Kidd MR, Saltman DC, et al. The Threats to Australian Patient Safety (TAPS) study: incidence of reported errors in general practice. Med J Aust. 2006;185:95-98.

Smith SR, Clancy CM. Medication therapy management programs: forming a new cornerstone for quality and safety in Medicare. Am J Med Qual. 2006;21:276-279.

Mills PD, Neily J, Mims E, et al. Improving the bar-coded medication administration system at the Department of Veterans Affairs. Am J Health Syst Pharm. 2006;63:1442-1447.

Hughes CM, Lapane KL. Nurses' and nursing assistants' perceptions of patient safety culture in nursing homes. Int J Qual Health Care. 2006;18:281-286. Epub 2006 Jul 19.

Porto G, Lauve R. Disruptive clinician behavior: a persistent threat to patient safety. Patient Safety Qual Healthc. July/August 2006.

Hougland P, Xu W, Pickard S, et al. Performance of International Classification of Diseases, 9th Revision, Clinical Modification codes as an adverse drug event surveillance system. Med Care. 2006;44:629-636.

Harris KT, Treanor CM, Salisbury ML. Improving patient safety with team coordination: challenges and strategies of implementation. J Obstet Gynecol Neonatal Nurs. 2006;35:557-566.

Marcus EN. The silent epidemic--the health effects of illiteracy. N Engl J Med. 2006;355:339-341.

Anderson JG, Ramanujam R, Hensel D, et al. The need for organizational change in patient safety initiatives. Int J Med Inform. 2006 Jul 24; [Epub ahead of print].

Davis TC, Wolf MS, Bass PF III, et al. Low literacy impairs comprehension of prescription drug warning labels. J Gen Intern Med. 2006;21:847-851.

Goldmann D. System failure versus personal accountability--the case for clean hands. N Engl J Med. 2006;355:121-123.

Macario A, Morris D, Morris S. Initial clinical evaluation of a handheld device for detecting retained surgical gauze sponges using radiofrequency identification technology. Arch Surg. 2006;141:659-662.

Preventing Medication Errors: Quality Chasm Series. Committee on Identifying and Preventing Medication Errors. Aspden P, Wolcott J, Bootman JL, Cronenwett LR, eds. Washington, DC: The National Academies Press; 2007.

Del Beccaro MA, Jeffries HE, Eisenberg MA, et al. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics. 2006;118:290-295.

Rana R, Afessa B, Keegan MT, et al, for the Transfusion in the ICU Interest Group. Evidence-based red cell transfusion in the critically ill: quality improvement using computerized physician order entry. Crit Care Med. 2006;34:1892-1897.

Ratanawongsa N, Bolen S, Howell EE, et al. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21:758-763.

Schauberger CW, Larson P. Implementing patient safety practices in small ambulatory care settings. Jt Comm J Qual Patient Saf. 2006;32:419-425.

Ferner RE, McDowell SE. Doctors charged with manslaughter in the course of medical practice, 1795-2005: a literature review. J R Soc Med. 2006;99:309-314.

Hutter MM, Kellogg KC, Ferguson CM, et al. The impact of the 80-hour resident workweek on surgical residents and attending surgeons. Ann Surg. 2006;243:864-871; discussion 871-875.

Rosenstein AH, O'Daniel M. Impact and implications of disruptive behavior in the perioperative arena. J Am Coll Surg. 2006;203:96-105. Epub 2006 Jun 5.

Battles JB, Dixon NM, Borotkanics RJ, et al. Sensemaking of patient safety risks and hazards. Health Serv Res. 2006 Jun 9; [Epub ahead of print].

Boyle J. Wireless technologies and patient safety in hospitals. Telemed J E Health. 2006;12:373-382.

King ES, Moyer DV, Couturie MJ, et al. Getting doctors to report medical errors: project DISCLOSE. Jt Comm J Qual Patient Saf. 2006;32:382-392.

Warner A, Menachemi N, Brooks RG. Health literacy, medication errors, and health outcomes: is there a relationship? Hosp Pharm. 2006;41:542-551.

Kester K, Baxter J, Freudenthal K. Errors associated with medications removed from automated dispensing machines using override functions. Hosp Pharm. 2006;41:535-537.

Imhoff M, Kuhls S. Alarm algorithms in critical care monitoring. Anesth Analg. 2006;102:1525-1537.

Sexton JB, Holzmueller CG, Pronovost PJ, et al. Variation in caregiver perceptions of teamwork climate in labor and delivery units. J Perinatol. 2006 Jun 15; [Epub ahead of print].

Wachter RM. Expected and unanticipated consequences of the quality and information technology revolutions. JAMA. 2006;295:2780-2783.

Mazor KM, Reed GW, Yood RA, et al. Disclosure of medical errors: what factors influence how patients respond? J Gen Intern Med. 2006;21:704-710.

Singh H, Petersen LA, Thomas EJ. Understanding diagnostic errors in medicine: a lesson from aviation. Qual Saf Health Care. 2006;15:159-164.

Meurer JR, Yang H, Guse CE, et al, and the Wisconsin Medical Injury Prevention Program Research Group. Medical injuries among hospitalized children. Qual Saf Health Care. 2006;15:202-207.

Stebbing C, Kaushal R, Bates DW. Pediatric medication safety and the media: what does the public see? Pediatrics. 2006;117:1907-1914.

Arora V, Dunphy C, Chang VY, et al. The effects of on-duty napping on intern sleep time and fatigue. Ann Intern Med. 2006;144:792-798.

Lu TC, Tsai CL, Lee CC, et al. Preventable deaths in patients admitted from emergency department. Emerg Med J. 2006;23:452-455.

Miller MR, Clark JS, Lehmann CU. Computer based medication error reporting: insights and implications. Qual Saf Health Care. 2006;15:208-213.

Rampersaud YR, Moro ER, Neary MA, et al. Intraoperative adverse events and related postoperative complications in spine surgery: implications for enhancing patient safety founded on evidence-based protocols. Spine. 2006;31:1503-1510.

Espin S, Lingard L, Baker GR, et al. Persistence of unsafe practice in everyday work: an exploration of organizational and psychological factors constraining safety in the operating room. Qual Saf Health Care. 2006;15:165-170.

Morgan N, Luo X, Fortner C, et al. Opportunities for performance improvement in relation to medication administration during pediatric stabilization. Qual Saf Health Care. 2006;15:179-183.

Makary MA, Holzmueller CG, Thompson D, et al. Operating room briefings: working on the same page. Jt Comm J Qual Patient Saf. 2006;32:351-355.

Spence Laschinger HK, Leiter MP. The impact of nursing work environments on patient safety outcomes: the mediating role of burnout/engagement. J Nurs Adm. 2006;36:259-267.

Tucker AL, Spear SJ. Operational failures and interruptions in hospital nursing. Health Serv Res. 2006;41:643-662.

Szekendi MK, Sullivan C, Bobb A, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care. 2006;15:184-190.

Stelfox HT, Palmisani S, Scurlock C, et al. The "To Err Is Human Report" and the patient safety literature. Qual Saf Health Care. 2006;15:174-178.

Schuerer DJ, Nast PA, Harris CB, et al. A new safety event reporting system improves physician reporting in the surgical intensive care unit. J Am Coll Surg. 2006;202:881-887.

Khatri N, Baveja A, Boren SA, et al. Medical errors and quality of care: from control to commitment. Calif Manage Rev. Spring 2006;48:115-141.

Nuthalapaty FS, Carver AR, Nuthalapaty ES, et al. The perceived impact of duty hour restrictions on the residency environment: a survey of residency program directors. Am J Obstet Gynecol. 2006;194:1556-1562. Epub 2006 Apr 21.


Back to Top

This publication is designed to promote communication among emergency physicians of a basic informational nature only. While ACEP provides the support necessary for these newsletters to be produced, the content is provided by volunteers and is in no way an official ACEP communication. ACEP makes no representations as to the content of this newsletter and does not necessarily endorse the specific content or positions contained therein. ACEP does not purport to provide medical, legal, business, or any other professional guidance in this publication. If expert assistance is needed, the services of a competent professional should be sought. ACEP expressly disclaims all liability in respect to the content, positions, or actions taken or not taken based on any or all the contents of this newsletter.

Feedback
Click here to send us feedback