Benchmarking in Emergency Medicine

"We have a very poor understanding in this country of what individuals are doing who practice medicine. We have individuals who will tell you what you should do, but none who will tell you what is actually being done." Gregory L. Henry, MD, FACEP (ED Management, 1996)

Benchmarking is the comparison of a performance or a process to the work or results (benchmarks) of others. The Xerox Corporation defines it as "the continuous process of measuring our products, services and practices against our toughest competitors or those companies known as leaders" (Camp, 1995, p. 18). The intent of benchmarking is to learn and improve. The term is often used to denote that the work is being compared to the best in the field, "the best practice" (Espinosa and Kosnik, 1995). Benchmarks, on the other hand, are industry standards; they may describe a practice or may quantify it.

Benchmarking efforts targeted at specific concerns, such as the desire to improve an error rate or specific cycle times, are known as problem-based benchmarking. This focus on a specific problem and the search for its solution was the early impetus for the benchmarking movement. A more recent approach, however, treats benchmarking as a tool in the continuous improvement of key processes. Called process-based benchmarking, this is the focus of the current Quality Improvement (QI) movement in health care. Fueled by JCAHO requirements emphasizing improved organizational performance, the QI movement requires the establishment of benchmarks in order to determine whether quality has improved, and in comparison to what.

According to Camp, benchmarking serves four main functions:

  1. To analyze the operation: "Benchmarking firms must assess the strengths and weaknesses of their current work processes, analyze critical cost components, consider customer complaints, spot areas for improvement and cycle time reduction and find ways to reduce errors and defects or to increase asset turns."
  2. To become knowledgeable about the competition and industry leaders: "Benchmarking firms must find out who is the best of the best."
  3. To incorporate the best of the best: "Benchmarking firms must learn from leaders, uncover where they are going, learn from the leader's superior practices and why they work, and emulate the best practices."
  4. To gain superiority: "Benchmarking firms must try to become the new benchmark."

The four commonly recognized types of benchmarking are functional, generic, competitive, and internal, with the latter two being most useful in the health care industry. DeToro defines functional benchmarking as "comparing your work processes and outputs with non-competing companies outside your industry who are reputed to be the best at what they do." Camp defines generic benchmarking as the "investigation of the work processes of others who have innovative, exemplar work processes." Competitive benchmarking, as the name implies, entails looking at your competitors: "comparing your work practices with those of your best competitors" (DeToro, 1991). Internal benchmarking entails "comparing yourself to groups within your company that perform similar work and who may be a source of superior practice" (DeToro, 1991). Internal benchmarking can also refer to the comparison of current practice to previous performance. Some health care writers, especially some authorities on statistical process control, see significant and essential value in this type of internal benchmarking. JCAHO's quest for quality improvement focuses on internal and competitive benchmarking as well.
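
Where internal benchmarking compares current practice to previous performance, statistical process control gives that comparison a concrete form. Below is a minimal sketch of an XmR (individuals and moving range) control chart, one common SPC tool; the monthly throughput figures are invented for illustration.

```python
# Minimal sketch of an XmR (individuals / moving-range) control chart,
# a common SPC tool for internal benchmarking: judging current
# performance against the department's own history. The monthly mean
# throughput times below are invented for illustration.

monthly_minutes = [142, 151, 138, 160, 147, 155, 149, 171, 144, 158]

center = sum(monthly_minutes) / len(monthly_minutes)

# Average moving range between consecutive months.
moving_ranges = [abs(b - a) for a, b in zip(monthly_minutes, monthly_minutes[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR control limits: center line +/- 2.66 * average moving range.
upper = center + 2.66 * mr_bar
lower = center - 2.66 * mr_bar

print(f"center {center:.1f} min, limits [{lower:.1f}, {upper:.1f}]")
for month, value in enumerate(monthly_minutes, start=1):
    flag = " <-- special-cause signal" if not lower <= value <= upper else ""
    print(f"month {month:2d}: {value} min{flag}")
```

Points falling outside the limits signal special-cause variation worth investigating before the process is compared to any external benchmark.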

There are advantages and disadvantages to the various benchmarking types, which DeToro has summarized particularly well (DeToro, 1991):
Functional leaders
   Pros: usually not hard to identify
   Cons: practices may not be directly comparable
Generic benchmarking
   Pros: often a source of innovative practices
   Cons: requires more creativity to fit generic best practices to your processes
Direct competitors
   Pros: same industry, similar products
   Cons: may not be best in practice in your functional area; problems with availability and confidentiality of data
Internal operations
   Pros: easily accessed, may be low cost to study, no ethical or confidentiality problems
   Cons: may not be "best practice"; data may not be easily accessed

According to an emergency medicine research agenda developed jointly by ACEP and SAEM, "new research methods are needed to assess health care outcomes, quality of care, and costs in terms of human suffering, anxiety, quality of life, and disability. It will take time and effort to identify and apply valid and reproducible measures of pain, anxiety, quality of life, functional status, disability, and satisfaction with emergency care. Better measures of emergency care costs, cost-effectiveness, and cost-benefit are also needed. Assessment of health care cost must take into account the ability of the population to access the existing health care system and cost of delays in health care." (Aghababian et al., 1996)

Several factors within the health care environment driving the benchmarking movement are described below.

Cost containment/resource management
With the growing pressures to reduce health care costs and spending, benchmarking offers a way to evaluate practices and adopt those measures that can be shown to be most effective. The hypothesis is that best practices, if adopted widely, would lead to increased efficiencies in emergency medical systems.

Competition/value-based decisions by consumers of health care
From individuals to managed care organizations, consumers of health care now exert a powerful influence on the practice of emergency medicine. Increased efficiencies in core emergency processes would benefit not only the patients of an individual department, but have the potential for supporting emergency medicine as a field.

Geographic variation
Data presented by John Wennberg and his colleagues show that treatment methodologies and resource utilization vary greatly in different areas of the country (Wennberg, 1973; Wennberg, 1987; Wennberg, 1989). This geographic variation can be unsettling. We in emergency medicine are not immune to this phenomenon. How much variation is there in the emergency medicine delivery system? What are the best clinical practices? Benchmarking can help answer these questions and promote best practices, both nationally and internationally.
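
To give a sense of how such variation is quantified in the small-area-variation literature, the sketch below computes two classic statistics, the extremal quotient (highest area rate divided by lowest) and the coefficient of variation across areas, on invented per-area utilization rates.

```python
# Minimal sketch of two classic small-area-variation statistics:
# the extremal quotient (highest rate / lowest rate) and the
# coefficient of variation across areas. The per-area utilization
# rates below are invented for illustration.

area_rates = {"Area A": 0.12, "Area B": 0.21, "Area C": 0.09, "Area D": 0.17}

rates = list(area_rates.values())
extremal_quotient = max(rates) / min(rates)

mean = sum(rates) / len(rates)
variance = sum((r - mean) ** 2 for r in rates) / len(rates)
coefficient_of_variation = variance ** 0.5 / mean

print(f"extremal quotient {extremal_quotient:.2f}")   # 2.33: highest area is 2.3x the lowest
print(f"coefficient of variation {coefficient_of_variation:.2f}")
```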

Increasing power of computers/increasing sophistication of databases
Computer technology is influencing society in a way at least as powerful as the invention of the printing press. We now have the capacity to create databases that could give us the information required to better understand how emergency medicine is practiced, how well it is practiced, and what constitutes "best practice."

While the above arguments can be made for an aggressive benchmarking effort, there are perils to diving into this work too quickly. A haphazard attempt could end up on the rocks. Some of the problems in benchmarking are outlined below.

Risk and severity-adjustment of data
As with the outcomes movement, many sorts of data may require risk and/or severity adjustment. The following factors may influence benchmarks and the benchmarking process (one common adjustment approach is sketched after the list):

  • Age
  • Sex
  • Acute clinical stability
  • Principal diagnosis
  • Severity of principal diagnosis
  • Extent and severity of co-morbidities
  • Physical functional status
  • Psychological, cognitive, and psychosocial functioning
  • Health status and quality of life
  • Patient attitudes and preferences for outcomes
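
One common adjustment approach is indirect standardization: compare the events actually observed with the events expected given the case mix. The sketch below assumes each patient record carries a predicted probability of the outcome from some risk model built on factors such as those listed above; all numbers are invented for illustration.

```python
# Minimal sketch of indirect standardization for risk-adjusting a rate
# before benchmarking it. Each record pairs the observed outcome (0/1)
# with a predicted probability from a risk model based on factors such
# as age, severity, and comorbidity. All numbers are invented.

patients = [
    # (observed_event, predicted_probability)
    (1, 0.80), (0, 0.10), (1, 0.55), (0, 0.25),
    (1, 0.70), (0, 0.05), (0, 0.30), (1, 0.60),
]

observed = sum(outcome for outcome, _ in patients)
expected = sum(prob for _, prob in patients)

# O/E ratio: > 1 means more events than the case mix predicts, < 1
# fewer. Comparing O/E ratios rather than raw rates keeps a department
# that sees sicker patients from looking artificially "worse."
oe_ratio = observed / expected

# Risk-adjusted rate: O/E ratio scaled by a reference-population rate
# (the 0.35 benchmark rate here is an assumed, illustrative value).
reference_rate = 0.35
adjusted_rate = oe_ratio * reference_rate

print(f"observed {observed}, expected {expected:.2f}, O/E ratio {oe_ratio:.2f}")
print(f"risk-adjusted rate {adjusted_rate:.3f} vs reference {reference_rate}")
```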

Methodology concerns
Many methodological issues have to be addressed before even beginning a project (one of them, data reliability, is sketched after the list):

  • Which benchmarks to study
  • Units of analysis to study
  • Condition specific vs. generic risk adjustment
  • Time frame or window of observation
  • Timing of collection of risk factors
  • Treatment of missing data
  • Validity of data
  • Reliability of data

Quality/cost: concerns that quality of care will suffer as costs are contained
As Kassirer has noted, "underlying the expectation that physicians, health care plans and other organizations will soon be competing on the basis of the quality as well as the cost of medical care is the fundamental assumption that we know what quality is, how to measure it, monitor it and ensure it. This is a proposition devoutly to be wished, one to which millions of dollars will be devoted. But how accurate is it?" (Kassirer, 1993) The field of benchmarking in health care is not immune to such criticisms and fears.

Data versus information
Computing power and large databases are meaningless to the practitioner unless the data they generate has meaning. A large "naturalistic" database may be interesting, but will it provide data that moves us in the appropriate direction? Relman has expressed concerns that insufficient epidemiologic science will render such databases less than helpful. He argues that in many ways, "no data may be better than bad data. At least with no data, you can say that you don't know" (Nash and Relman, 1995).

Scientific criticisms/limitations
A whole host of scientific criticisms have been targeted at outcomes data. "Much of this data is so poor in study design that a trained epidemiologist would throw up his hands and say that he or she cannot draw any conclusions at all." (Nash and Relman, 1995) These criticisms can also be leveled at benchmarking activities. Methodologies for benchmarking work in emergency medicine will require thought and science.

Who will pay for benchmarking activities/benchmarked data?
Dr. Relman has raised concerns that managed care organizations, although eager consumers of outcomes and benchmarking research, have not expressed interest in funding medical outcomes research that is not primarily linked to cost savings (Relman, 1995).

Attachment to traditional physiological parameters
"The ball game in outcomes research is changing with the changes in health care reform that we are undergoing. We are focusing on different things than we used to focus on. We are no longer purely interested in biological measures and physiological measures; we are interested in functional health status measures and quality of life measures." (Starfield, 1995) This problem, too, extends to the benchmarking movement.

After a review of the pros and cons of developing benchmarks, the question of how to do it remains. There are a surprising number of descriptions of the benchmarking process in the industrial literature. Camp and Tweet have made efforts to adapt benchmarking to health care, using the ten-step model often known as the "Xerox model" (Camp and Tweet, 1994).

The Xerox model comprises ten steps, divided into four phases: planning, analysis, integration, and action.

Planning (three steps)
   Select a subject to benchmark.
   Identify the best practitioners.
   Determine the data-collection method and collect the data.
Analysis (two steps)
   Determine the current gap.
   Project future performance.
Integration (two steps)
   Communicate the results of the analysis.
   Establish functional goals.
Action (three steps)
   Develop action plans.
   Implement plans and monitor results.
   Recalibrate the benchmark.
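
To make the analysis phase concrete, here is a minimal sketch of determining the current gap and projecting future performance, assuming both your department and the benchmark partner continue on simple linear trends; all figures (quarterly door-to-physician minutes) are invented for illustration.

```python
# Minimal sketch of the Xerox model's analysis phase: determine the
# current gap against the benchmark and project future performance,
# assuming simple linear trends. All figures are invented
# (quarterly door-to-physician times, in minutes).

our_history = [48, 46, 45, 43]        # our last four quarters
benchmark_history = [35, 34, 33, 32]  # benchmark partner's last four quarters

def linear_trend(series):
    """Per-period change from a least-squares line through the series."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    return (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
            / sum((x - x_mean) ** 2 for x in xs))

current_gap = our_history[-1] - benchmark_history[-1]

quarters_ahead = 4
projected_us = our_history[-1] + linear_trend(our_history) * quarters_ahead
projected_partner = benchmark_history[-1] + linear_trend(benchmark_history) * quarters_ahead

print(f"current gap: {current_gap} minutes")
print(f"projected gap in {quarters_ahead} quarters: "
      f"{projected_us - projected_partner:.1f} minutes")
```

The projection matters because the benchmark is a moving target: if the gap is not closing, the functional goals set in the integration phase must aim beyond the partner's current performance.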

Many hospitals have begun internal benchmarking in a small way. However, several larger-scope projects have also begun to collect data on quality in emergency medicine across hospital boundaries: "competitive" benchmarking.

The Midwest Benchmarking Alliance Group
A recent article in ED Management describes the work of the Midwest Benchmarking Alliance Group, an emergency medicine benchmarking group formed, as part of a collaborative commitment to excellence, to address cost containment and service improvement in a very competitive environment. The group, which meets quarterly, is made up of one or more members of emergency department leadership from nine Midwestern hospitals.

Several projects involve the acquisition of benchmarks and identification of best practices. Out-of-industry experts have been invited to address the group. For example, Samuel Kiehl, MD, FACEP, of Riverside Methodist, notes that "Marriott and Disney do a really good job with client satisfaction, so we're looking at having them come talk to us." (ED Management, 1996) According to this article, the group has examined such issues as:

  • Patient discharge: Because group members felt that not enough attention was paid to the details of the discharge process, a project to "define the components of the process and then determine best practices" was developed.
  • Personnel costs: This project was created to determine the actual costs of delivering emergency health care per patient based on personnel data.
  • Productivity: A productivity/utilization index will be developed to define what constitutes an efficient physician.
  • Patient satisfaction: After finding that existing patient satisfaction tools were not very useful for EDs, the group will develop its own patient satisfaction evaluation tool.
  • Information gathering: Various computer programs will be evaluated for effectiveness in the ED.
  • Clinical issues: Clinical, patient management issues and protocols of the group are shared.
  • Operational issues: The group is looking to reduce overall cycle times, such as through-put times, as well as the times for such processes as registration and triage (one way to measure these is sketched after this list).
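
As a sketch of the measurement behind such cycle-time work, the code below computes through-put (arrival to departure) and triage-to-physician intervals from per-visit timestamps; all times are invented for illustration.

```python
# Minimal sketch of ED cycle-time measurement: through-put (arrival to
# departure) and triage-to-physician intervals computed from per-visit
# timestamps. The times below are invented for illustration.

from datetime import datetime

FMT = "%H:%M"
visits = [
    # (arrival, triage, seen_by_physician, departure)
    ("08:02", "08:10", "08:35", "10:05"),
    ("08:15", "08:20", "09:00", "11:30"),
    ("09:40", "09:45", "10:05", "10:50"),
    ("10:05", "10:20", "11:10", "13:40"),
]

def minutes_between(start, end):
    """Elapsed minutes between two same-day HH:MM timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).seconds / 60

throughput = sorted(minutes_between(a, d) for a, _, _, d in visits)
triage_to_md = sorted(minutes_between(t, m) for _, t, m, _ in visits)

def percentile(sorted_values, p):
    """Nearest-rank percentile of an already-sorted list."""
    index = max(0, round(p / 100 * len(sorted_values)) - 1)
    return sorted_values[index]

print(f"through-put: median {percentile(throughput, 50):.0f} min, "
      f"90th percentile {percentile(throughput, 90):.0f} min")
print(f"triage-to-physician: median {percentile(triage_to_md, 50):.0f} min")
```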

The ED Management article points out that ground rules are essential to the group's progress.

  • Confidentiality: An informal confidentiality agreement is honored by all members of the group. Dr. Henry noted that "we never reference anybody else's data that was brought to the table. We don't expect our members to broadcast this data or use it in a commercial manner."
  • Members pull their own weight: Active participation by the members is expected. Members are expected to attend meetings and complete assigned tasks.
  • Always bring something to the table: Members of the group bring not only data, but specific areas of expertise as well. These areas include a knowledge of reimbursement issues, informatics, and practice management.
  • "Nothing is proprietary": Members are not in competition with each other geographically. However, the group is considering development of a for-profit arm.

Members of this group have benefited from their participation in many ways. For example:

  • Use of cost data from the benchmarking group to justify staffing.
  • Shared educational interventions. For example, Fairfax Hospital will present its patient satisfaction course at a future meeting.
  • Use of cost data from the benchmarking group in negotiations with managed care.


ACEP Section Grant Award -- Benchmarking Current Standards of Emergency Medicine Practice and Practice Management
ACEP's Continuous Quality Improvement Section has been awarded a College Section Grant to benchmark various aspects of emergency medicine practice. According to Peter M. Fahrney, MD, FACEP, Section Chair, the project will collect a statistically valid sample of actual and current parameters of the daily practice of emergency medicine in varied hospitals and geographical settings. The study will include various time studies related to evaluation and treatment, tests ordered, consultants contacted, returns after discharge, and X-rays and ECGs read by the emergency physician. The data collected will be coded and will remain anonymous. Data will be stratified by the size and setting of the hospital. It is hoped that at least one hundred hospitals will participate in this massive project (Section on CQI Newsletter, 1996). It is Dr. Fahrney's vision that such data can be used as a tool in quality improvement, and should establish ACEP as a resource leader in this field.

Benchmarking, with its potential to identify best practices, will not soon fade from the health care management scene. The study of what works in emergency medicine, specifically what works best, costs least, and leaves the patient satisfied, is essential if emergency medicine is to argue its case in the contest for health care dollars. While benchmarking is not a process to enter into lightly, with careful consideration and a commitment to the work, the results may pave the way for the practice of emergency medicine in the next century.

March, 1997

BIBLIOGRAPHY

ACEP News. Section grant awards announced. July 1996; 15: 21.
ACEP. Statement of Direction. Approved April 6, 1995.
Aghababian E, et al. Research directions in emergency medicine. Annals of Emergency Medicine. 1996; 27:339-342.
Alsever R, et al. Developing a hospital report card to demonstrate value in healthcare. JHQ. 1995; 17:19-28.
Barret L, et al. Electronic patient information may provide needed help for outcomes management. JCOM 1995;2:59-64.
Batalden PB, et al. Linking outcomes measurement to continual improvement: the serial "V" way of thinking about improving clinical care. Journal on Quality Improvement. 1994; 20: 167-179.
Batalden PB. Building knowledge for quality improvement in health care: an introductory glossary. JQA. 1991; 13: 8-11.
Bieze J. Who will pay the bill for outcomes research? Diagnostic Imaging. July, 1994; 47-59.
Brook RH & Lohr KN. Efficiency, effectiveness, variations and quality: boundary crossing research. Medical Care 1985;23:711.
Bull MJ. in Meisenheimer CG. Quality Assurance: A complete Guide to Effective Programs. Aspen Publishers Inc.: Rockville, Maryland. 1985.
Camp RC. Business Process Benchmarking. Finding and Implementing the Best Practices. Quality Press. Milwaukee, WI 1995.
Camp RC. The Search for Industry Best Practices that Lead to Superior Performance. Quality Press. Milwaukee, WI 1989.
Camp RC, Tweet AG. Benchmarking Applied to Healthcare. Journal on Quality Improvement. 1994; 20:229-238.
Carey TS, et al. The outcomes and costs of care for acute low back pain among patients seen by primary care practitioners, chiropractors, and orthopedic surgeons. The New England Journal of Medicine. 1995; 333: 913-917.
Carlin E, et al. Using continuous quality improvement tools to improve pediatric immunization rates. Journal on Quality Improvement. 1996; 22: 277-287.
Champy J. Reengineering Management. The Mandate for New Leadership. Harper Business 1995.
Chessare JB. Evidence-based medical education: the missing variable in the quality improvement equation. Journal on Quality Improvement. 1996; 22:289-291.
Dagher M, et al. Managing negative outcome by reducing variances in the emergency department. QRB. 1991; 15-21.
Damelio R. The Basics of Benchmarking. Quality Resources. New York. 1995.
Davidow S. Developing and implementing health care guidelines for group medical practices: an interview with Gordon Mosser. Journal on Quality Improvement. 1996; 4: 293-299.
Davies AR, et al. Outcomes assessment in clinical settings: a consensus statement on principles and best practices in project management. Journal on Quality Improvement. 1994; 20:6-16.
Davis DF. Measuring outcomes in psychiatry: an inpatient model. Journal on Quality Improvement 1996:22: 125-133.
DeToro I. Benchmarking. A Program for Implementing Industry Best Practices. Quality Network. 1991.
Detsky AS. Regional variation in medical care. NEJM 1995; 333: 589-590.
ED Management. American Health Consultants. New benchmarking group blazes trails in a unique collaborative effort. July, 1996 Vol. 8, No. 7, 73-76.
Ellwood P. Outcomes management: a technology of patient experience. NEJM. 1988; 318:1549-1556.
Espinosa J & Kosnik L. Practical Responses to Goals and Benchmarks. Presented to the ICM Benchmarking Congress, Chicago, Illinois, November 1995.
Evans RW & Jones ML. Integrating outcomes, value, and quality: an outcome validation system for post-acute rehabilitation programs. Journal of Insurance Medicine. 1991; 23: 192-196.
Fahrney P. Board Approves Grant Proposals. Section on CQI Newsletter. 1996 In publication.
Finison L. What are good healthcare measurements? Quality Progress 1992; April: 41-42.
Gift RG & Mosel D. Benchmarking in Health Care: A Collaborative Approach. American Hospital Publishing. Inc. 1994.
Gonnella JS, Louis DZ & Gottlieb JE. Physicians' responsibilities and outcomes of medical care. Journal on Quality Improvement. 1994; 20: 403-410.
Guadagnoli E, et al. Variation in the use of cardiac procedures after acute myocardial infarction. NEJM 1995; 333: 573-578.
Gura MT. Collaboration: the pivotal point for quality patient outcomes. J Nurs Care Qual. 1995; 9:53-58.
Hammer M, Champy J. Reengineering the Corporation. Harper Business, 1993.
Hammer M, Stanton SA. The Reengineering Revolution: A Handbook. Harper Business, 1996.
Henry GL. Board retreat focuses on "new world of change" and it's a bold, bad new world. President's Letter. November, 1995. ACEP.
Henry GL. Emergency medicine is not the problem. ACEP News. October 1995; 2-3.
Iezzoni L (ed). Risk Adjustment for Measuring Health Care Outcomes. Health Administration Press: Ann Arbor, Michigan, 1994.
Iezzoni LI, et al. Comorbidities, complications, and coding bias: does the number of diagnosis codes matter in predicting in-hospital mortality? JAMA. 1992; 267: 2197-2203.
Jenkinson C. Measuring Health and Medical Outcomes. University College London Press: London, 1994.
Kassirer JP. The quality of care and the quality of measuring it. NEJM 1993; 329: 1263-1264.
Knudsen K. The challenge of outcomes data collection and data interpretation. 1995. A lecture, presented at the Congress on Health Outcomes and Accountability, Washington, D.C.; December 10-13, 1995.
Laffel G, Berwick D. Quality in Health Care. JAMA. July 15, 1992;268:407-409.
Leeuwen DH. Are medication error rates useful as comparative measures of organizational performance? Journal on Quality Improvement. 1994; 20: 192-199.
Lewis BE. Conducting outcomes studies in managed care using a research approach. JCOM 1994;1: 65-68.
Markson LE. Overview: Public accountability of hospitals regarding quality. Journal on Quality Improvement. 1994; 20: 359-363.
McArthur CD & Womack L. Outcome Management: Redesigning Your Business Systems to Achieve Your Vision. Quality Resources: New York, 1995.
McNair CJ, Leibfried K. Benchmarking. A Tool for Continuous Improvement. Oliver Wight Publications, Coopers and Lybrand Performance Solutions Series. Essex Junction, Vermont. 1992
Megivern K, et al. Measuring patient satisfaction as an outcome of nursing care. J Nurs. Care Qual. 1992; 6: 9-24.
Miller S. Consumer satisfaction survey: an outcome measure for work services. JHQ. 1992; 14: 46-50.
Nash D & Relman AS. The outcomes field: plenty of sizzle, but is there steak? A debate, presented at the Congress on Health Outcomes and Accountability, Washington, D.C.; December 10-13, 1995. Zitter Group, Center for Outcomes Information.
Nash DB. Managed care: the role of outcomes management. Managed Care. 1992; 24:176-179.
Nelson EC, et al. Report cards or instrument panels: who needs what? Journal on Quality Improvement. 1995; 21: 155-166.
Nelson EC. Outcomes matter most. [editorial] JCOM. 1994;1:9-10.
Nelson E, et al. Improving health care, part 1: the clinical value compass. Journal on Quality Improvement. 1996; 4: 243-258.
Nugent W, et al. Designing an instrument panel to monitor and improve coronary artery bypass grafting. JCOM. 1994;1:57-64.
O'Leary D. The measurement mandate: report card day is coming. Journal on Quality Improvement. 1993; 19: 487-491.
Owens DK. Development of outcome-based practice guidelines: a method for structuring problems and synthesizing evidence. Journal on Quality Improvement. 1993;248-263.
Proenca EJ. Why outcomes management doesn't always work: an organizational perspective. Quality Management in Health Care. 1995;3:1-9.
Relman AS. Assessment and accountability: the third revolution in healthcare. NEJM. 1988; 319: 1220-1222.
Rockett BA, et al. Which rate is right? NEJM. 1986;314: 310-311.
Southwick K. Physician profiling: tool or weapon? JCOM 1994;1: 59-62.
Southwick K. Academic detailing: outcomes tool or sales pitch? JCOM 1995;2: 55-58.
Spalding J. Disease state management: danger and opportunity. Family Practice Management. 1996; March: 70-80.
Wennberg J, et al. Are hospital services rationed in New Haven, or over-utilized in Boston? Lancet 1987;1:1185-1189.
Wennberg J, et al. Hospital use and mortality among medicare beneficiaries in Boston and New Haven. NEJM 1989;321:1168-1173.
Wennberg J, et al. Small area variations in health care delivery. Science 1973; 182:1102-1108.
Williams AD & Perkins SB. Identifying nursing outcome indicators. JHQ. 1995; 17: 6-10.
Williams RM. Cost of visits to emergency departments. NEJM. 1996; 334: 642-646.
Williams RM. Health policy and quality: an ethical dilemma. JEM. 1993; 11: 345-349.
