
Quality Improvement and Patient Safety Section Newsletter - September 2011

  • The Chair’s Letter - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • Editor’s Note - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • Angela Franklin Moves to the National Quality Forum - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • Call for Nominations for QIPS Secretary!!! - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • 2nd Annual Resident/Fellow Quality Improvement Project QIPS Award - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • QIPS Tips- The Geriatric Emergency Department - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • How to Remain in Compliance With Part 4 of ABEM’s Board Certification Maintenance Process - QIPS Section Newsletter, September 2011
  • The State of Emergency Medicine Quality Measures - Quality Improvement and Patient Safety Section Newsletter, September 2011
  • Engagement Part 2- ED Metrics - Quality Improvement and Patient Safety Section Newsletter, September 2011

The Chair’s Letter - Quality Improvement and Patient Safety Section Newsletter, September 2011

Drew Fuller MD, MPH, FACEP 
Strategic Coordinator for Patient Safety
Emergency Medicine Associates, PA. PC
Germantown, MD    

As we wrap up another year, QIPS will continue to seek new ways to provide our members, the College and the profession with resources and opportunities for improving quality and safety in emergency medicine.

Building on our past accomplishments, the section has continued to bring timely and important topics and discussion to the newsletter, study and publish articles of interest, and provide a venue for discussions and networking at the annual Scientific Assembly.

Moving forward we will be building a new website to provide improved access to core quality and safety content, timely topics, practical lessons and cases that can be applied on the local level.  This will also allow for improved opportunities for participation enabling our membership to make a contribution.  We will announce the launching of the new site this year and will continue to build it as we move forward and learn more about what our membership needs.

Quick updates since our last newsletter:

  • Welcome to Emily Graham, who will be serving as an ACEP liaison until Angela’s replacement has been appointed. Emily is Vice President of Regulatory Affairs with Hart Health Strategies. Before that, she served as Assistant Dean and Program Director for Health Information Management at Northern Virginia Community College, and before her time in academia she served for seven years as Associate Director of Regulatory Affairs for the American Society of Cataract and Refractive Surgery (ASCRS), where she focused on Medicare's quality improvement initiatives and advancements in health information technology. Emily and Dainsworth, our ACEP liaisons, will both be at Scientific Assembly.
  • A Quality Improvement Directors survey has been sent out by a team led by Elaine Thallner and Azita Hamedani.
  • Resident/Fellow Quality Award – Five residents are being recognized for quality improvement projects implemented during their training. (see below)
  • QIPS Endorsement of David John for the ACEP Board.  David is a past chair and one of the founding members of the section. 
  • Procedure Safety in the ED paper submitted for publication – Jesse Pines and Jack Kelly have headed up the group (2010 Section Grant).
  • A joint QIPS and Informatics Section group has assembled under the direction of Heather Farley and Kevin Baumlin to develop a white paper on quality and safety issues with emergency department information systems (EDIS).

Scientific Assembly - QIPS Meeting: Saturday, October 15th, from 1-3 pm

Proposed Agenda:

  • QIPS introductions & Business meeting
  • Election of Officers
  • Presentation by Dr. Jeremiah Schuur
  • Resident/Fellow QI Project Awards/Presentations


Editor’s Note - Quality Improvement and Patient Safety Section Newsletter, September 2011

Richard T. Griffey, MD, MPH
Washington University School of Medicine
St. Louis, MO
   

We hope to see you at the annual section meeting at the Scientific Assembly 2011 in San Francisco. The QIPS meeting will be held Saturday the 15th from 1-3 pm in the Hilton Hotel (room TBA). It will be a time to form new connections, renew old ones, and learn more about quality and patient safety. Jay Schuur, MD, MPP, will be the keynote speaker for the meeting and will speak about approaches to quality measurement and the limitations of utilization-based performance measures and administrative data in measuring quality.

Angela Franklin Moves to the National Quality Forum - Quality Improvement and Patient Safety Section Newsletter, September 2011

The QIPS section bids farewell to Angela Franklin, who has taken a position with the National Quality Forum (NQF). We are all truly grateful for the years of service, counsel, leadership, and friendship she has provided to the members and officers of our section. We will dearly miss her and wish her well in her new position.

Although Angela is changing organizations, she will continue to serve the larger house of medicine in her new role as Senior Director for Quality Measures at the NQF, where she will report to Heidi Bossley, Vice President, Performance Measures. Her new position will allow her to leverage her legal, insurance and medical specialty experience from a broader policy-making perspective.

Since joining the ACEP Washington staff in June 2006 as the Director of Quality and Health IT, Angela has played an integral role in advancing the cause of emergency physicians. During her time here, she helped promote ACEP’s quality agenda at the AMA’s Physician Consortium for Performance Improvement (PCPI), the Centers for Medicare and Medicaid Services (CMS) and the NQF. Within ACEP, she provided strong support and advice to the Quality and Performance Committee and to the Quality Improvement and Patient Safety and Emergency Medicine Informatics Sections.

During her tenure, the QIPS Section was awarded several Section Grants and awards, published articles in Annals, Academic Emergency Medicine and the Joint Commission Journal on Quality and Patient Safety, produced a Quality Course and initiated an annual Resident Award in Quality.  

We are grateful for all she has done and wish her all the best in her new position. 


Call for Nominations for QIPS Secretary!!! - Quality Improvement and Patient Safety Section Newsletter, September 2011

Nominations are now open for the position of QIPS Secretary and Newsletter Editor! Elections will be held at the QIPS meeting at ACEP’s Scientific Assembly in October. This is a tremendous opportunity to work closely with national leaders in quality and safety, cultivate your leadership skills, and potentially even advance to the position of QIPS chair! Please submit your nominations to Dr. Drew Fuller or Dainsworth Chambers by Oct 15 (self-nominations accepted). Questions regarding the responsibilities of the position should be directed to Dr. Fuller.


2nd Annual Resident/Fellow Quality Improvement Project QIPS Award - Quality Improvement and Patient Safety Section Newsletter, September 2011

We are pleased to announce the recipients of the 2011 QIPS QI Project award. The award is intended to recognize graduating residents and fellows for the development and implementation of a QI project that demonstrates meaningful change in their system. There were many applicants this year and all displayed a remarkable ability to impact quality and safety.

This year’s recipients include:

  • Christian Ross, MD. Indiana University
    Improving Safety of Patient Handoff in the ED: An Interactive Method of End-of-Shift Changeover
  • Damien Kinzler, DO. Albert Einstein Medical Center
    Quantifying the Time to Loop Closure for ED Radiology Discrepancies
  • Jenny Chen, MD. Naval Medical Center San Diego
    The Effect of Emergency Department On-Site Simulation-Based Resuscitation Training on Team Communication
  • Jonathan Heidt, MD. Washington University School of Medicine
    Transition of Care in the Emergency Department
  • Trushar Naik, MD, MBA. Kings County Hospital
    A Structured Approach to Transforming a Large Public Emergency Department via Lean Methodologies

This year’s recipients will be recognized at the Scientific Assembly QIPS meeting on October 15th from 1-3 pm.


QIPS Tips- The Geriatric Emergency Department - Quality Improvement and Patient Safety Section Newsletter, September 2011

Shari Welch, MD, FACEP
Intermountain Institute for
Health Care Delivery Research
Salt Lake City, UT 
 

For most of the past century the over-65 age group outpaced the growth of all other age groups in the U.S., and by 2030 one in five Americans will be over 65 years old. Patients over age 65 are the highest utilizers of health care services: they average 6 to 7 healthcare encounters per year, compared with only 2 for younger adults. As people age, their need for acute health care services increases exponentially, and the emergency department (ED) will be impacted disproportionately. Once in the emergency department, the elderly are more likely to have urgent or emergent conditions, to be admitted and to require critical care. Gearing up for the Baby Boomers, the largest cohort of healthcare consumers this country has ever seen, may wisely include the emergence of the specialty ED for seniors.

The Geriatric ED should be different from other healthcare settings in terms of physical space: non-glare lighting, large-print information, non-skid flooring, guard rails and hand rails are all features of facility design that are adaptive to senior citizens. Efforts at noise control (it is harder for the elderly to hear when the ambient noise level is high) are also a key feature of an emergency department catering to elderly patients, and seniors are more comfortable in rooms with higher ambient temperatures. Since elderly patients will require longer lengths of stay in the ED to sort out their increasingly complex healthcare needs, an ED designed for seniors should have rooms more like inpatient suites, with real beds instead of stretchers and space for family members to sit comfortably. In addition, the Geriatric ED should have work space for case managers, social workers and other ancillary personnel who will provide the support services critical to keeping patients out of the hospital.

Borrowing from successful interventions on the inpatient side, some authors have dubbed senior-friendly modifications to the ED environment GEDIs: Geriatric Emergency Department Interventions. Some of these include recliners in lieu of stretchers (which can cause pressure ulcers in patients forced to lie on them for many hours), hearing amplification devices, magnifying glasses, telephones with large numbers, clocks and signage with large lettering, aisle lighting, warmer room temperatures, soundproof drapes, egg-crate bed padding, and non-skid rubber mats for the patient’s bedside. Some departments are looking at “GEDI packs” with many of these items, distributed at triage to seniors.

Many EDs already use patient segmentation to separate pediatric patients, psychiatric patients and minor injury patients from the rest of the mix in the ED.  The special needs of seniors and the new reform models on the horizon are combining to make the development of the Geriatric ED an operational imperative. Shouldn’t this be an option worth considering by your healthcare organization?

 


How to Remain in Compliance With Part 4 of ABEM’s Board Certification Maintenance Process - QIPS Section Newsletter, September 2011

Kevin Klauer, DO, FACEP   

It seems that every time we comply with a regulation, someone moves the finish line. Well, don’t blame the American Board of Emergency Medicine (ABEM). ABEM answers to a greater being, the American Board of Medical Specialties (ABMS), which, in 2000, approved a plan to require continuous professional development in its Board certification processes. They named this “ABMS Maintenance of Certification (MOC).” By 2006, all of the specialty Boards that were members of ABMS (e.g., ABEM) had received approval for their MOC plans and are currently in the implementation phase.

Although I have heard many of our colleagues curse ABEM and express their frustration over this process, things could actually be worse. While the infrastructure for providing attestations via the ABEM web site can be a bit confusing, the actual requirements for demonstrating the two components of MOC, patient care practice improvement and professionalism and communication, are fairly easy to perform and somewhat elegant. In fact, compared to the complexity of the plans devised by other specialties, we should be thanking ABEM for creating a workable solution.

To maintain your ABEM Board certification, you must fulfill the four components of “Continuous Learning” set forth by the ABMS. The first is licensure and professional standing. If you have an unrestricted license, you’ve passed step one. Step two is Lifelong Learning and Self-Assessment (LLSA). If you keep up with your LLSA articles and take the annual, open-book quiz, you’re halfway there. Remember to keep track of how many you will be required to take to qualify for your ConCert exam. If you take one fewer than required, you’ll have to take the initial qualifying examination (formerly known as the initial certification examination); complete two fewer and you get to start all over again, taking the qualifying examination and your oral boards. Step three is termed “Cognitive Expertise.” ABMS refers to this as demonstrating your specialty-specific skills and knowledge. This is assessed by taking your ConCert examination. Finally, part four is Assessment of Practice Performance (APP): demonstrating your use of best evidence and practices compared to peers and national benchmarks. We have grown to accept parts one, two and three. So, what is the magic to complying with part four?

First, when do we have to comply? Some confusion was created last January when ABEM sent out notices announcing the process for complying with part four, APP. Although they reported attestations would be accepted in 2010, this requirement actually doesn’t begin until 2011. During each ABEM Diplomate’s ten-year certification cycle, two attestations must be made regarding the patient care practice improvement component. These attestations must be made no later than years four and eight, while complying with the professionalism and communication component requires only one attestation, by year eight. So, what is an attestation? An attestation is simply a statement affirming you have complied with the requirement.
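
To make the schedule concrete, here is a minimal Python sketch of which attestations must be on file by a given year of the ten-year cycle. The function and its wording are hypothetical illustrations of the rules just described, not an ABEM tool.

    def attestations_due(cycle_year):
        """List the attestations that must already be on file by a given
        year of the ten-year certification cycle (illustrative helper
        only; not an ABEM tool)."""
        due = []
        if cycle_year >= 4:
            due.append("patient care practice improvement #1 (by year 4)")
        if cycle_year >= 8:
            due.append("patient care practice improvement #2 (by year 8)")
            due.append("professionalism and communication (by year 8)")
        return due

    print(attestations_due(8))  # by year eight, all three attestations are due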

The required components of the attestations, as you navigate from one screen to the next, include naming the program you participated in, listing the dates of program involvement (make certain the dates submitted within and not outside of the attestation period), reporting the location of the program, including whether it was local, regional or national and answering a series of questions, in drop down boxes and toggle buttons, asking specific questions pertaining to the program goals and design.  

How does ABEM define a practice improvement (PI) activity and what exactly are we attesting to? Well, here is exactly what they say:

“A PI activity must include the following four steps: 

1. Review patient clinical care data from ten of your patients.  The data must be related to a single presentation, disease, or clinical care process that is part of the Model of the Clinical Practice of Emergency Medicine (EM Model), for example:

  • clinical care processes
  • feedback from patients that relates to the clinical care given
  • outcomes of clinical care
  • access to care such as time for throughput, left without being seen, etc.
  • Group data and data collected through a national, regional, or local practice improvement program in which you participate is acceptable.

2. Compare the data to evidence-based guidelines.  Evidence-based guidelines are based on published research subject to peer-review.  Only if such guidelines are not available, you may use guidelines set by expert consensus or comparable peer data.  Guidelines set by expert consensus are published, accepted, national standards, and guidelines set by peer data are set by individuals who practice in like or similar circumstances.

3. Develop and implement a plan to improve the practice issue measured in Step #1.  You may plan for an individual or group improvement effort. 

4. After implementing the improvement plan, review patient clinical care data from ten additional patients with the same presentation, disease, or clinical process as the first patient data review.  Use this data to evaluate whether clinical performance has been improved or maintained.”

Resist the temptation to overthink this. Quite honestly, almost every ED in the country performs some form of acceptable PI work. One of the easiest and most ubiquitous is the assessment of core measures. Remember when you hated them? Blood cultures prior to antibiotics for pneumonia. Antibiotics within 6 hours of presentation, etc. Now, the hard work and the data your hospital and/or group have been collecting will actually serve a purpose for your benefit. This is just one example; many, and probably most, ED quality initiatives will meet the above requirements. This includes operational assessment of performance metrics such as door-to-doctor time and patients who left without being seen.
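
To see how little is actually being asked, here is a minimal Python sketch of the four-step PI cycle quoted above, using a core measure as the guideline. The chart data, numbers and function name are made up for illustration.

    def compliance_rate(charts):
        """Fraction of reviewed charts meeting the chosen evidence-based
        guideline (e.g., blood cultures drawn before antibiotics)."""
        return sum(charts) / len(charts)

    # Step 1: review clinical data from ten of your patients (True = compliant).
    baseline = [True, False, True, True, False, True, False, True, True, False]
    # Steps 2-3: compare to the guideline, then implement an improvement plan.
    # Step 4: review ten additional charts after the intervention.
    followup = [True, True, True, False, True, True, True, True, False, True]

    print(f"baseline {compliance_rate(baseline):.0%} -> "
          f"follow-up {compliance_rate(followup):.0%}")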

Is anyone’s patient satisfaction being assessed? If so, you have probably already complied with the professionalism and communication component (below is ABEM’s description of this component). The attestation addresses verification questions similar to those for the PI component. The following is an example of this from ABEM’s web site.

“The communication / professionalism activity is designed to help ensure that diplomates communicate with patients in an effective and professional manner.  You may use any formal method of assessing communication skills including patient surveys, interviews, or focus groups, administered at the institutional, departmental, or individual level.  At least ten of your own patients must be included.  A minimum of one physician behavior must be measured from each of the following three categories:

1. Communications/listening, for example:

  • Communicate clearly with patients and other medical staff by listening carefully and couching language at the appropriate level for the listener

2. Providing information, for example:

  • Explain the clinical impression and anticipated management course to the patient and the patient’s family
  • Provide information about tests and procedures
  • Give the patient options

3. Showing concern for the patient, for example:

  • Show respect to the patient and other medical staff
  • Make the patient feel comfortable by asking if they have any questions or concerns and act to address their concerns
  • Ask the patient about adequate pain relief”

You’re almost done. However, as with most attestations, there is an audit mechanism to verify actual compliance. Near the end of your attestation, you will be asked to name a verifier, the person who will be available to verify the information you have provided in your attestation. This could be your Quality Director, Medical Director or hospital Chief Medical Officer. However, make certain that individual will accept this responsibility and, if they leave, that this responsibility is reliably forwarded to their successor. ABEM will audit 10% of attestations.

There is one exemption to this process. If you are not clinically active for any reason, you can fulfill APP and maintain your board certification by changing your status from clinically active to clinically inactive.  When you become clinically active again, you can switch your status back, and the APP requirements will be applied to your web page. There is no penalty for changing your status and this does not negatively impact your board certification status. Just log in; click the EMCC online link and then the large rectangular “Assessment of Practice Performance” icon. The top of the next screen will ask you if you want to change your status and the bottom of the page will show your APP attestation requirements.

Although the ABEM APP online process is not particularly intuitive, with a little background information, it shouldn’t take you long at all. I hope this guidance will help you navigate your way to a successful attestation.


The State of Emergency Medicine Quality Measures - Quality Improvement and Patient Safety Section Newsletter, September 2011

Dickson Cheung, Jennifer Wiler, Richard Newell and Jay Brenner  

This article aims to provide background on quality measure reporting and reimbursement programs, as well as to update readers on the current, future and retired quality measures relevant to the practice of emergency medicine.

The Centers for Medicare and Medicaid Services (CMS) remain the dominant player in determining how hospitals and providers are reimbursed with respect to quality.  They direct their influence through three main programs: the Physician Quality Reporting System (PQRS), the Outpatient Prospective Payment System (OPPS) and the Inpatient Prospective Payment System (IPPS).  While hospitals are responsible for reporting the "core measures" (i.e., the OPPS and IPPS programs), providers are responsible for reporting PQRS measures via claims through their billing companies.  The OPPS and IPPS apply to all patients regardless of payer, with admitted patients reported via the IPPS and discharged/transferred patients via the OPPS.  PQRS measures include admitted and discharged Medicare Part B patients only.
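
For readers who like the routing rules spelled out, the following minimal Python sketch partitions a single ED visit among these reporting streams under the simplified rules just described; it is a hypothetical helper, not an official CMS specification.

    def applicable_programs(admitted, medicare_part_b):
        """Which quality reporting streams cover one ED visit
        (simplified; hypothetical helper, not a CMS specification)."""
        programs = []
        # Hospital-reported "core measures" apply regardless of payer:
        # IPPS for admitted patients, OPPS for discharged/transferred.
        programs.append("IPPS" if admitted else "OPPS")
        # Provider-reported PQRS covers Medicare Part B patients only,
        # whether admitted or discharged.
        if medicare_part_b:
            programs.append("PQRS")
        return programs

    print(applicable_programs(admitted=False, medicare_part_b=True))  # ['OPPS', 'PQRS']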

Physician Quality Reporting System (PQRS) 

The 2006 Tax Relief and Health Care Act (TRHCA) required the establishment of a physician quality reporting system, including an incentive payment for eligible professionals who satisfactorily report data on quality measures for covered professional services furnished to Medicare beneficiaries, beginning in the 2007 reporting period.  This CMS program was formerly known as the Physician Quality Reporting Initiative (PQRI) or the Pay for Performance (P4P) program.  In 2011, the program was renamed the Physician Quality Reporting System (PQRS) to denote that it is no longer a pilot but rather an established program.

Provider-based measures largely originate from the AMA-PCPI (Physician Consortium for Performance Improvement, convened by the American Medical Association).  The new ED-relevant PQRS measures for 2011 are measures 91-93; beyond these, there are no additional measures this year that affect emergency medicine.  Current PQRS measures are listed below.

[Table: Current PQRS Measures]

In addition, a number of proposed additional PQRS measures are now being considered for future implementation.

[Table: Proposed PQRS Measures]

The schedule of additional financial incentives and penalties for satisfactorily reporting PQRS measures is outlined below.

[Table: Physician Quality Reporting System (PQRS) Incentive and Penalty Schedule]

Also, beginning in 2011, physicians will have the opportunity to earn an additional incentive of 0.5% by working with a Maintenance of Certification (MOC) entity and by 1) satisfactorily submitting data on quality measures under PQRS for a 12-month reporting period, either as an individual physician or as a member of a selected group practice, AND 2) participating in a MOC Program and successfully completing a qualified MOC Program practice assessment.

Outpatient Prospective Payment System (OPPS)   

Hospital measures can originate from individuals, professional societies, academic institutions and, more recently, consulting agencies (e.g., Optimal Solutions Group and Ingenix).  The bulk of hospital measures that affect emergency medicine come from the OPPS and its associated data reporting program, the Hospital Outpatient Quality Reporting Program (OQR).  The Hospital OQR was mandated by the Tax Relief and Health Care Act of 2006, which requires subsection (d) hospitals to submit data on measures of the quality of care furnished by hospitals in outpatient settings.  To receive the full Annual Payment Update (APU) under the OPPS, hospitals must meet the administrative, data collection and submission, and data validation requirements of the Hospital OQR.  Hospitals that fail to successfully participate in the OQR receive reduced payments through a reduction of 2.0 percentage points to the hospital market basket update.

The proposed OPPS rule for 2011 does not make any changes to the current 11 outpatient quality measures.  However, CMS has added 16 additional quality reporting measures across 7 different clinical areas for 2012.  The four that may directly affect emergency medicine are listed below.  OP-13 through OP-15 have already been subjected to CMS dry runs, resulting in reports delivered to applicable hospitals in April 2011.

[Table: Hospital Outpatient Measures Affecting Emergency Medicine]

Also, CMS is proposing not to implement national coding guidelines for ED visits.  In the most recent proposed rule, CMS indicates that implementing a national system posed significant complexities, and that data submitted by hospitals over the past several years appeared reasonable and did not warrant implementation of such a system.  CMS did point out, however, that it would continue to monitor hospital ED OP service levels and reevaluate implementing national guidelines on a going-forward basis.

Inpatient Prospective Payment System (IPPS)   

Similarly, the Inpatient Prospective Payment System (IPPS) includes the Hospital Inpatient Quality Reporting Program (IQR), formerly known as the Reporting Hospital Quality Data for Annual Payment Update (RHQDAPU) Program.  RHQDAPU was originally mandated by Section 501(b) of the Medicare Prescription Drug, Improvement, and Modernization Act (MMA) of 2003.  This section authorized CMS to pay hospitals that successfully report designated quality measures a higher annual update to their payment rates; alternatively stated, reporting hospitals escaped a reduction in payment rates for failure to comply.  Initially, hospitals that did not successfully report faced a 0.4 percentage point reduction in the annual market basket, but the Deficit Reduction Act of 2005 increased that reduction to 2.0 percentage points.

The main changes for 2012 that affect emergency medicine include the removal of AMI-1: Aspirin at Arrival from the list, because it was felt that the vast majority of hospitals now perform well on this measure and it has served its purpose.  Also, PN-5c: Initial Antibiotic Received Within 6 Hours of Arrival will be removed.  In 2014, both the median time from arrival to departure for admitted patients and the median time from admit decision to departure for admitted patients (i.e. the boarding measure) will take effect.  However, if hospitals desire to receive “meaningful use” incentives starting in 2011 under the HITECH Act, they will need to begin reporting these measures immediately.

[Table: Hospital Inpatient Measures Affecting Emergency Medicine]

National Quality Forum (NQF) 
 
Historically, the final common pathway for quality measure endorsement has been approval by a voluntary consensus standards-setting organization, which CMS has deemed necessary for inclusion into the IPPS and OPPS programs.  The National Quality Forum (NQF) has become the de facto quality measure endorsement organization.  In addition, CMS contracts with NQF to identify and vet certain measure sets.  The table below lists the NQF-endorsed measures in both Phase 1 and Phase 2 of the Voluntary Consensus Standards for Ambulatory Care that mainly affect emergency care, the most recent phase having been adopted in January 2011.

[Table: NQF-Endorsed Measures]

One particularly controversial measure worth noting is OP-15, Use of Brain Computed Tomography (CT) in the Emergency Department for Atraumatic Headache.  OP-15 will be the first measure fast-tracked for inclusion into the OPPS program despite NQF rejection.  ACEP, through its Quality and Performance Committee, has sent comments challenging this unprecedented path as well as the validity of the measure.  A coordinated national study is nearly complete to evaluate how the OP-15 measure, derived from administrative claims data, compares to actual clinical data derived from chart review, as well as compliance with other established guidelines for CT utilization in atraumatic headache.
 
Another group of quality measures that may soon affect emergency medicine reimbursement involves the new Episode of Care (EOC) prototype.  The Patient Protection and Affordable Care Act (ACA), passed in 2010, includes methods to develop and test novel models of healthcare delivery and payment reform.  The goal is to reduce costs by eliminating waste and to improve patient health outcomes by aligning provider and hospital incentives.  The proposed reforms are based on a “value-based purchasing” paradigm, rather than the current fee-for-service payment system, which reimburses providers and institutions based on the volume of services provided.  The mandated development of global payment systems reimburses both hospitals and providers for the complete management of a patient over a defined period of time.

Episodes of Care describe a patient’s complete interaction with the healthcare system for an illness or procedure over a defined period of time.  A bundled or global payment is created for each episode based on a fixed set of anticipated healthcare resources needed to treat the patient.  Episodes have a set of accompanying quality metrics and/or clinical guidelines, with financial incentives (or penalties) based on performance.  Episodes attempt to look at all clinically related services for a discrete condition over a given time frame and across the entire continuum of care, including evaluation and management, surgery, ancillary, lab, and pharmacy services.  At least five organizations are currently involved in episode grouper development, with each organization employing a different focus and methods.  CMS has solicited input from various stakeholders to develop pilot episodes.  Private groups, including the Prometheus (Provider Payment Reform for Outcomes, Margins, Transparency, Hassle-reduction, Excellence, Understandability, and Sustainability) Payment® Model and the American Board of Medical Specialties (ABMS), are also working on their own episode groupers.  Common chronic diseases are the most popular focus of episode pilot projects, including diabetes, MI, CAD, CHF, COPD, asthma and pneumonia.  The role of emergency care in episodes has yet to be fully described, but it is likely that there will be a fee-for-service exception for emergency care.

The research and education foundation of the American Board of Medical Specialties (ABMS-REF) and the Brookings Institution, working under a grant from the Robert Wood Johnson Foundation, have recently developed 22 separate measure specifications spanning 12 high-impact conditions.  In March 2011, 18 of the 22 measures were submitted to the National Quality Forum.  The final measures have been posted to the website of the Quality Alliance Steering Committee (QASC) and will be reviewed by NQF in two cycles: cardiovascular care and diabetes first, then pulmonary conditions along with miscellaneous conditions.

[Table: Episodes of Care Measures]

Lastly, NQF is in the early stages of developing palliative care and regionalized emergency care measures.  A group spearheaded by the University of North Carolina developed a white paper on regionalized emergency care for the NQF.  Comments from ACEP were submitted in August 2011 during the solicitation period.  A steering committee for palliative and end-of-life care was convened at the end of July 2011 to discuss a set of 12 new measures that may affect emergency medicine.

Conclusion

Quality measures continue to exert considerable influence on the practice and reimbursement of emergency care.  While provider-based measures mainly focus on the clinical care of specific medical conditions, the overwhelming share of upcoming hospital-based measures addresses ED throughput, timeliness of care and imaging utilization.  The passage of Episodes of Care measures, constructed on a value-based purchasing and bundled payment paradigm, is imminent.

This document is considered current as of September 7, 2011. 

 


Engagement Part 2- ED Metrics - Quality Improvement and Patient Safety Section Newsletter, September 2011

Engagement Part 2 - ED Metrics - An Effective Strategy to Kill Motivation
Mark Jaben, MD 

In part one, I tried to advance the observation that RESULTS <-- LEARNING <-- ENGAGEMENT <-- RELATIONSHIPS. A relationship is based on mutually beneficial goals, attainable by understanding what each party believes they need to be successful in their work. This creates a foundation for engagement, which is essential to set the stage for the learning that leads to results.

Too often, we jump straight to the results without regard for the essential prerequisites and then wonder why people just don’t get it or won’t do it. In our zeal, we create metrics that seem appropriate, but rather than assist in promoting the engagement we need, these inadvertently do no more than enforce compliance, where people just do the bare minimum not to get punished and are not willing to invest themselves.

Here are some examples.

1) LOS (length of stay) - the anti-Lean metric
What could be more Lean than lead time measurement, until you ask the question: lead time for what? How do I know where to start to address a 30-minute increase in LOS last month?

In emergency medicine, there are really three kinds of patients:  

a. Straightforward- few possible considerations, all of low risk; the question to be answered and the path for evaluation are clear and involve few possible tests. Such patients might have an ankle injury, a laceration, urinary symptoms without abdominal pain, or a sore throat.

b. Complicated- few possible considerations, but some of higher risk and many more options of tests that might help guide care; the question to be answered and the path for evaluation remain clear. Examples include patients with chest pain, SOB in a COPD patient, abrupt onset of focal neurologic symptoms, or headache.

c. Complex- the question to be answered and the path forward are unclear, with low- or high-risk potential, requiring extensive testing and care to figure out the correct path, such as abdominal pain in a young woman or in the elderly, general weakness, or dizziness.

Each category requires different resources, different staff, and different amounts of time to make decisions that are safe and appropriate, yet EDs care for all three at the same time. Unless you look at lead time for each separately, how can you know where to act to achieve improvement? Moreover, when the measurement is reported days or weeks after the fact, the particular circumstances that led to the increase in LOS have been lost, making it even more difficult to figure out what to do. And if you don’t understand this measurement within the expected statistical variation that any measurement will experience, you might be misled into acting in ways that don’t address the real cause, or into acting when there is no real reason to act at all.
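
One way to make that last point concrete: before reacting to a 30-minute jump, check it against the historical spread. The control-chart-style test and the numbers in the Python sketch below are illustrative assumptions, since no specific method is prescribed here.

    import statistics

    def is_signal(history, new_value, k=3.0):
        """Treat a new monthly LOS average as a real change only if it
        falls more than k standard deviations from the historical mean
        (a simple control-chart-style rule; illustrative only)."""
        mean = statistics.mean(history)
        sd = statistics.stdev(history)
        return abs(new_value - mean) > k * sd

    # Twelve months of average ED length of stay, in minutes (made up).
    history = [160, 195, 170, 210, 180, 155, 200, 175, 190, 165, 205, 185]
    print(is_signal(history, 212))  # False: a 30-minute jump within normal variation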

2) Patient satisfaction-
A score of 95% would make most institutions pleased, but it really means 1 in 20 patients is not satisfied. If an ED sees 20,000 patients a year, roughly 60 patients a day, that means 3 people a day are dissatisfied! If your ED is busier, you can do the math yourself. Now 19/20 does seem pretty good, but often this means that a squeaky wheel gets all the attention. Once a prominent person complains, we focus there, not on the other 95%, and miss the opportunity to ask what about our current care process is not working for everyone.

Patient satisfaction is important because it connects our work to what matters for patients. But, like average ED LOS, it is impacted by way too many people and processes to serve as a guide to what needs to be done for improvement. And like LOS, to be useful it must be broken down into the component pieces that can then be analyzed and acted upon. The real task here is to understand just what component pieces must be in place to achieve patient satisfaction. That discussion must begin with learning patient needs and perceptions, then involve everyone with a hand in the process in learning how best to educate patients to the realities of what can and should be done, while reconciling with them how best to serve those needs and deciding together what will be done.

3) Triage in 15 min 80% of the time
Despite being an attempt to give people a break, this actually blames people. Why? Because it says we acknowledge there are circumstances out of your control, so we won’t hold you accountable. Quality is an inductive process- we look at an outlier event and try to draw a conclusion as to what led to this happening. Operations are deductive; we start from the principles of how to make things flow and then design the circumstances to conform with this. We should be asking how the process could accommodate all the circumstances all the time. 

‘Holding people accountable’ is really code speak for forcing someone to do what you want. This ignores the reality that your solution, in all likelihood, makes it harder for that person to be successful in their work, i.e., it just does not work for them. A problem is like a prism with each facet only visible to one person. To fully realize the true extent of the problem requires learning each of its facets. Crafting a solution that actually responds to the real issue and that can be acceptably implemented relies on this knowledge. ‘Holding people accountable,’ that is managing for compliance, ignores the reality that people decide how accountable and involved they want to be. Managing for engagement, not compliance, creates the atmosphere where people are willing to invest themselves in the creative, conceptual, decision making work required to provide high quality healthcare.

4) ‘The ED is not profitable’ and ‘Our ED is the doorway to the hospital’
We mislead ourselves when financial information is used to make operational decisions. So often, current accounting methods isolate areas financially and obscure the reality that those areas are quite interconnected operationally. Like LOS and patient satisfaction, financial results are a performance measurement, the consequence of how the operation is designed, and should be used to drive a search for how to improve operations. This is done by increasing capacity, not decreasing the ability to get the work done well. This increased capacity allows for decisions about how to best use capital, by either increasing resources to increase capacity further, leading to increasing volumes and margin, or accomplishing more with the same resources, resulting in cost savings. This takes a longer term view of what financial success means and rightly focuses attention on what it takes to get the work done well.

These are close cousins to ‘People are our greatest asset.’ If this is so, why are salaries for staff listed as an expense rather than a fixed asset? Reducing training and cutting staff is so often the default response to achieve short term cost saving. These are prime examples of using financial results to make operational decisions that decrease capacity and have the unintended consequence of making it harder to get work done well. Acknowledging the interplay between financial results and operational decisions is crucial for successful healthcare delivery.

5) Door to triage time; door to doc time- ‘better hustle to see those patients’
Contrary to popular wisdom, patients don’t come to the ED to ‘get triaged’ or to ‘see’ the doctor. They come to get advice and treatment for their problem. They come to have their uncertainties addressed. These measurements carry the risk of optimizing one part of the process at the expense of the entire process. They should be used as learning measures, not as performance measures, to help understand the obstacles in the downstream process, where the roadblock really exists.

6) Patients per hour/provider-
Although this seems a reasonable measure of productivity, it actually detracts from promoting the kind of collaboration needed for improvement to take hold and rewards thinking isolated to ‘me’ rather than ‘us.’ “Oh, I’m above average; I’m OK. Poor guy over there is always so slow; he makes my job harder.” Furthermore, people can hustle a little, but beyond a certain level this only invites mistakes, errors and frustration, as people work beyond their comfortable capacity.

Measuring patients per hour by provider really measures the patients per hour for the department at that time. Some might say that the other variables are controlled by averaging across the number of shifts each provider works. Those variables are very difficult to isolate, and although the assumption is convenient, it may or may not be true. More importantly, what we need are people willing to look critically at their individual work flow and how it contributes to the system. If people are being held accountable for things out of their control, they will not be willing to engage in this critical reflection and learning.

Until we design incentives and rewards that focus on this as the measure of success for the department and for individuals working in the department, it is unlikely people will be comfortable or willing to look critically at their contributions individually, how their work impacts those they work with, and how their work raises or lowers the comfortable capacity of the department. This will more likely happen when individuals are willing to assess and acknowledge the difficulties they are having, and the system commits itself to supporting their efforts to learn how to overcome these obstacles.

We don’t engage people by judging them on operational metrics. Judge the process by operational metrics, not people. Judge people by whether they can apply the skills, learning, and decision making to fulfill the parameters of success, those identified as necessary to do the work well. These become the standards against which performance can be judged. Self-assessment of current performance against these criteria requires appropriate feedback. Current operational metrics include much that is out of the control of individuals, so people just won’t believe in them as reasonable feedback.

People know where they have difficulties. They just are rarely placed in situations where they can comfortably acknowledge these to themselves or to those in supervisory positions. Engaging people to judge their own performance takes advantage of everyone’s interest to be better at what they do. Training to the gaps and feedback make people your greatest asset.

In part 3, let’s consider how we could manage for engagement.
 

