
Emergency Medical Informatics Section Newsletter - January 2007, Vol 12, #1


  • 12th Annual isEDIS Symposium Highlights
  • MediaLab Watch
  • Human-Computer Interfaces in the ED: Discount Usability Engineering: Part four of an ongoing series
  • The Rising Force of Standards and the Emergency Physician, Or...How EDIS Products are Moving Towards Regulation and Performance Consistency



12th Annual isEDIS Symposium Highlights

Alan T. Forstater, MD, FACEP

"Successful EDIS implementation is an attainable goal." This was the message at the 12th Annual National Symposium on Emergency Department Information Systems (isEDIS) held in Orlando, Florida, December 10th through13th.

The theme of this year's Symposium was "Success Stories," emphasized especially by a panel of the same name moderated by Course Assistant Director Al Villarin. The panel and the Symposium were punctuated with a steady dose of reality checks: pitfalls to avoid, and the need for due diligence and a team approach toward purchase, implementation, and maintenance of a system. Many of the registrants attended the Symposium with a team from their hospital.

The three and a half day conference schedule allowed for full mornings of formal presentations and three full afternoons of exhibit hall time. Registrants attended the exhibits armed with information from the morning presentations, ready to intelligently evaluate the systems they were considering.

Todd Taylor opened the Symposium with an entertaining and informative overview of EDIS and a sobering caution to those who dare to "go into the water."

Computerized Provider Order Entry (CPOE) was the second theme. While Jonathan Handler presented academic literature challenging the value of CPOE in the ED setting and cataloging its pitfalls, Rick Mackenzie portrayed a successful implementation of CPOE in his hospital system's emergency departments.

Rick Bukata presented the value of data collection to standardize best practices in his ED, while Craig Feied presented the value of aggregated data collection from across the country to enhance biosurveillance and track other trends. Mike Gillam, Jonathan Handler, and Vernon Smith led a workshop on data mining so individuals can learn to access and portray hospital data that heretofore seemed inaccessible.

Vernon Smith reviewed the academic journals for interesting and unique uses of the Electronic Medical Record (EMR), such as alerting physicians to changes in their patients' critical laboratory values or to possible candidacy for a research protocol, while Mike Gillam gave us a glimpse of cutting-edge technology for future EMR applications, such as software that formulates 3D reconstructions from CT images. Cathy Glenz, RN, BSN, an Associate Course Director, gave us a look at the EMR in Europe.

Advanced Track hosts Keith Conover, also an Associate Course Director, and Todd Taylor facilitated lively, audience-interactive discussions of passive tracking; standards; implementation and training; and physician documentation: paper templates, dictation, or structured computer charting.

Cathy Glenz, RN, and Al Vieling, RN, presented a session focused on successful EDIS purchase and implementation, while Al led a workshop on Return on Investment.
The concept of process redesign as essential to successful implementation was the focus of Todd Rothenhaus' talk and this was a theme that continually resurfaced throughout the conference.

Provider charting was featured in two ways. Todd Rothenhaus presented newly released recommendations by the EDIS Functional Profile Working Group of the Emergency Care Special Interest Group that assist the buyer in objectively analyzing the charting capabilities of the various vendors' systems. Registrants were given model cases that they can use to test-drive an EDIS and simulate its real-life performance during documentation.

A charting demonstration gave seven vendors the opportunity to show how a nurse and a physician stand-in perform on their respective documentation systems when presented with a mock patient in front of the Symposium audience.

Finally, the course offered a presentation by Richard Mackenzie, MD, and Lisa Romano, MSN, RN, on successful implementation of technology for communication among the admissions office, housekeeping, and transportation, allowing improved efficiency and monitoring of these services and better patient flow from the ED to the inpatient side of the institution.

Attendees agreed that the Symposium provides a unique body of knowledge presented in an informative and entertaining format. It affords the registrants an opportunity to evaluate the leading vendors in the EDIS field. One unique added value is that it provides a forum for attendees to network with colleagues who have similar ED information systems or who face similar challenges in buying or implementing one. This premier course is a must for anyone planning to buy or implement an ED Information System.


MediaLab Watch

Mike Gillam, MD, FACEP

The Medical MediaLab is a Washington D.C. based research lab specializing in emergency medicine informatics. The lab is available for residents and medical students who wish to pursue innovative projects for medical informatics rotations. The following article briefly covers recent projects and research opportunities.

Holographic technology enters Emergency Medicine

Imagine never having to un-gown during traumas to review new labs, examine x-rays and CT scans, or interact with an electronic medical record system. Now it's possible: using new technology from io2technology.com, images can be projected so that they float in air. The technology creates images reminiscent of the scene in Star Wars: A New Hope in which R2-D2 projects an image of Princess Leia saying, "Help me, Obi-Wan Kenobi. You're my only hope." The holographic display also acts as a touch screen, allowing you to interact with displayed elements just as you would with a computer mouse. The image is created by projection into an almost imperceptible change in humidity that creates a faint "haze" above the system. The MediaLab received the holographic system August 13th and has begun work on using it as a sterile interface to change what is seen on a larger 42-inch plasma display behind the hologram.

Ultra Wideband Tracking

Washington Hospital Center (WHC) is the first institution in the world to complete the installation of ultra wideband (UWB) tracking technology across an entire hospital. UWB is an active-tag RFID system that uses non-interfering nanosecond pulses spread across a very wide band of radio spectrum. Tracking tags are the size of credit cards. UWB is multi-path resistant, has very low power consumption (button batteries in a tag last over two years), and does not interfere with existing utilized spectrum. WHC is using a UWB system from Parco Inc., the first system to receive approval from the FCC for UWB use in healthcare. UWB allows tracking of assets or personnel to within 12 inches in three dimensions in the hospital (and to within one inch in the lab). The implications of this technology are promising. RFID helped decrease contact tracing for SARS from two days to two minutes. Personnel tracking could help elucidate the spread of diseases like MRSA or TB from patient to personnel. Team interaction and workflow analysis during trauma are also potential areas of study.

Virtual Wall - CAVE

The emergency department you work in today will likely look nothing like those that are built tomorrow. Today's ED designers rely on paper diagrams of the ED, and simulated fly-throughs, if available, are typically played at only the size of a monitor, making it difficult to exercise spatial reasoning and common sense. Imagine, however, that you could stand inside a virtual representation of a future ED before it was ever built and see for yourself whether the sinks were misplaced or the corridors too small. This is the goal of the Virtual CAVE project at the MediaLab, which has just purchased a $100K virtual wall from Fakespace Systems, a leader in CAVE technology for industry. Using shuttered eyeglasses and dual projectors, the system can create a 3D virtual wall that looks like a window to another place. The CAVE will be used for research into such areas as recreating a virtual representation of a real emergency department, and it will be used in future phases of the group's ER One design project. In one project, data will float above patient rooms in the virtual space, simulating the "augmented reality" environment in which we may someday practice medicine. The CAVE will also be used to research new ways of immersively visualizing data that can be used to identify emerging diseases and bioterrorism threats.

Hand Gesture System

One of the earliest patients with SARS in Hong Kong infected almost 60 healthcare workers. In the recent past, Clorox completed a study of typical business workplaces and found that among the most contaminated items are computer mice and keyboards: MRSA, VRE, and other pathogenic bacteria have been cultured from both. It is because of these highly transmissible diseases that the MediaLab began research into gesture-based interfaces. Our prototype is a system for reviewing radiologic images while remaining sterile in a trauma suite or operating room. Most recently, the system was tested during a live neurosurgical case. The surgeons were able to flip through radiologic images while standing 15 feet away from the screen in a darkened operating room. The system was so successful that the surgeons were able to see anatomy on the digital images that they had missed on the printed acetate images. By using the interactive display, the surgeons were able to avoid making a second hole in the patient's skull to perform a second biopsy. Currently, the interface is being improved with new features, such as the ability to take 3D scans and move them in virtual space using two-handed gestures.

The MediaLab welcomes residents and medical students for rotations or long distance projects for research. If you have an interest in joining one of these or many other projects, please email: informatics.section@acep.org.


Human-Computer Interfaces in the ED: Discount Usability Engineering: Part four of an ongoing series

Keith Conover, MD, FACEP

In the first installment of this series, I tried to persuade you that your computer was human-illiterate, and we defined and discussed usability, memorability, and learnability. In the second, we discussed Tognazzini's Paradox: how the hardest part of designing an effective program is often what seems the most trivial, sometimes simply a matter of changing a single word. In the third, we talked about design integrity, simplicity, and abstraction. Now, let's address "discount usability engineering."

When we talk about "usability testing" most of us think about expensive consultants, fancy labs with one-way mirrors and video recorders, and the like. Yes, usability testing can be done in such labs. Yes, companies like Microsoft have permanent million-dollar user testing labs.

But if you design, program, or provide feedback on any portion of an ED information system, learn how to do some discount usability engineering. Usability guru Jakob Nielsen says: "I advise clients to avoid design agencies that are too arrogant to include user testing in their project plans." For that matter, if you are a user, do some quick-and-dirty usability testing to document how bad (or how good) your system is, either to demand a better system or to demand that the vendor provide usability updates!

There are many ways to do usability testing. Think of big, dedicated labs with one-way mirrors and video cameras: it works, it's useful, and it's expensive. But it's not cost-effective, and it's beyond the means of most EDIS or hospital-IS companies, not to mention users who want to document their usability complaints about a vendor.

Another darling of the marketing divisions of large software companies is "focus groups." And if you are an action-oriented emergency physician or nurse (or computer geek) and the term "focus group" makes you want to run in the other direction, your instincts are right. If you get a bunch of users together with an experienced facilitator, use standard icebreaking techniques to get started, and then spend a day using standard brainstorming techniques, you end up with a fairly solid set of recommendations. Which turn out to be garbage. Yes, garbage. Good engineering studies show that the output of such processes is not useful in making software products more usable. Really. Jakob Nielsen says, "Listening to what people say is misleading: you have to watch what they actually do." (See http://www.useit.com/papers/focusgroups.html for more.)

And user satisfaction surveys are just as bad. As Nielsen says: "what customers say and what customers do rarely line up; listening to customers uses the wrong method to collect the wrong data."

In 1989, Jakob Nielsen first promoted discount usability engineering. He'd looked at big, expensive usability labs and found their output wanting. He suggested that all you really need is five users. Later, he and others did mathematical analyses that essentially proved there is no need to test more than five users (see http://www.useit.com/alertbox/20000319.html). In estimating how much time and effort it really takes to do a good usability test of a system, he finally decided (http://www.useit.com/alertbox/980503.html) that two work days is all it takes. And a college usability class, after just 15 hours of lectures, was able to do a full usability engineering assessment of a large commercial website in an average of 39 hours per team. So ask vendors about the results of their usability testing before you buy. If a vendor whines, "we're not big enough to afford usability testing," just walk away and find another vendor.
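
The arithmetic behind the five-user claim is easy to reproduce. Here is a minimal sketch in Python of the problem-discovery model Nielsen and Landauer published, in which the share of problems found by n users is 1 - (1 - L)^n, where L is the proportion of problems a single user uncovers (Nielsen's oft-quoted average of 31% is assumed below):

    # Nielsen-Landauer problem-discovery model.
    # L = probability that one test user uncovers any given usability problem;
    # 0.31 is the average Nielsen reports across projects (an assumption here).
    L = 0.31
    for n in range(1, 9):
        found = 1 - (1 - L) ** n
        print(f"{n} users: {found:.0%} of problems found")

With L = 0.31, five users already surface roughly 85% of the problems, which is why Nielsen favors several small rounds of testing over one large one.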

Alan Cooper, the man who developed Visual Basic and sold it to Microsoft, has coined the term "User Interaction Design" to replace the old term "User Interface Design." He emphasizes design and has issues with Jakob Nielsen's approach. Cooper says (and rightly so) that usability testing doesn't help if what you're testing is a jet-assisted 1964 Volkswagen Beetle (or an equivalent software product). But once we get past this, we find that Cooper and Nielsen agree in important ways.

What's the best way to test a piece of software? Take the software (or a mockup of it using software called demo-ware), and put naïve users in front of it. Sit behind the users with a notebook and pen and listen to what they say as they try to use it. Don't ask questions until the user is all done with the task at hand, but answer questions when the user asks. Take notes. Lots of notes. Look for the "mistakes" the user makes. And make notes about these. And then figure out how to change the software so the user doesn't make "mistakes." (There is even some talk about using pieces of paper with a design drawn in pencil to develop prototype systems, but this is quite controversial and we won't get into it here. See http://www.nngroup.com/reports/prototyping/.) Nielsen says the following:

  1. Get representative users
  2. Ask them to perform representative tasks with the design
  3. Shut up and let the users do the talking

The third rule is surprisingly difficult, while rule #2 requires some experience to execute well.

Yes, there are classes and textbooks to teach you how to be better at this kind of testing, and yes, experience helps. Knowledge of usability principles helps, too, so you can identify classic usability problems as soon as you see them. But really, the process is quite simple.

Let's give a few examples of classic usability errors that you can look out for when you do "discount usability testing" yourself. I'll give you just a few quotes from Jakob Nielsen's AlertBox online column. Although some relate specifically to Web usability, they apply to other software too.

Say, for example, that a user clicks the wrong button. It's obvious to any observer that such behavior represents a design error. Listening to users' comments prior to clicking usually tells you why they misunderstood the design, thus guiding you to make it better in the redesign.

The damage that unchanging link colors cause is one of the most tricky usability problems to identify in user testing. On any given page, users seem to understand the links just fine. Users almost never complain about link colors, as long as they're distinct from the rest of the text and reasonably legible. Life is good, or so it seems.

Observe carefully, though, and you'll notice that users frequently move in circles. They'll visit the same page multiple times -- not because they want to, but because they don't realize that they've already been there. Users will give up when they've tried most links in a list, even though there's one link that they haven't tried; if the links don't change colors, users don't realize that there's only one unvisited link remaining.

Use graphics to show real content, not just to decorate your screen.

Don't include an active link to the homepage on the homepage.

Study a wide range of people: the young and old, utter novices, experts, Unix geeks, sales staff, physicians, repair technicians, administrative assistants, executives, users of different nationalities.

Watch those people perform a wide range of tasks: shopping, searching, planning vacations, researching school projects, managing an erupting oil well.

Observe them using a wide range of interface designs and styles. Ideally, the interfaces should feature different ways of solving the same design problem so that you can compare and contrast how different design details affect usability.

Experiment with a wide range of interaction platforms, from wall-sized "virtual windows" to pocket-sized PDAs. It can also help to watch people use text-only designs like a mainframe or classic Unix, or futuristic technologies like VR that might be currently useless, but can serve as a source of ideas.

Observe the user's body language for indications of satisfaction or displeasure (smiles or frowns), as well as for laughs, grunts, or explicit statements such as "cool" or "boring."

The best usability tests involve frequent small tests, rather than a few big ones. You gain maximum insight by working with 4-5 users and asking them to think out loud during the test. As soon as users identify a problem, you fix it immediately (rather than continue testing to see how bad it is). You then test again to see if the "fix" solved the problem.

To collect metrics, I recommend using a very simple usability measure: the user success rate. I define this rate as the percentage of tasks that users complete correctly. This is an admittedly coarse metric; it says nothing about why users fail or how well they perform the tasks they did complete. … However, I often grant partial credit for a partially successful task.
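
To make the metric concrete, here is a minimal sketch in Python; the five outcomes and the half-credit scoring below are invented for illustration, not Nielsen's data:

    # One task attempted by five test users: 1.0 = completed correctly,
    # 0.0 = failed, 0.5 = partial credit for a partially successful attempt.
    outcomes = [1.0, 0.0, 0.5, 1.0, 0.5]
    success_rate = sum(outcomes) / len(outcomes)
    print(f"User success rate: {success_rate:.0%}")  # -> User success rate: 60%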

"Real Users Don't Mind Complex Design"

Enthusiasts sometimes defend bleeding-edge technology and complex designs with the claim that users actually like sophisticated websites. Users, they assert, are smart enough to handle complicated design.

These enthusiasts labor under a miscomprehension about the Web's fundamental nature. It is not a question of whether users are capable of overcoming complexity and learning an advanced user interface. It is a question of whether they are willing to do so.

In testing multiple groups of disparate users, you don't need to include as many members of each group as you would in a single test of a single group of users. The overlap between observations will ensure a better outcome from testing a smaller number of people in each group. I recommend:

  • 3-4 users from each category if testing two groups of users
  • 3 users from each category if testing three or more groups of users (you always want at least 3 users to ensure that you have covered the diversity of behavior within the group)

In fact, Nielsen and others suggest that good usability testing alternates between "heuristic analysis" (simply going through the screens of a program with a list of usability principles like those above) and observational studies of actual users. For more practical tips on usability testing, see http://www.nngroup.com/reports/tips/usertest/.

Next Time:

The next in this series will address Personas. You probably don't know what that means, but we'll tell you now that it's one of the key procedures in creating good software. Although we'll keep you in suspense until the next article about exactly what "Personas" means, we won't be displeased if you decide to research the topic yourself in the interim.

To Learn More

Jakob Nielsen is without question the Big Guru of usability, and his http://useit.com website is the first place to go. In particular, go to http://www.nngroup.com/reports/ and read through some of the established usability guidelines. Scan through his Alertbox: Current Issues in Web Usability columns (http://useit.com/alertbox), including many classic essays on discount usability testing. Developers should check out the 3-day "camps" on discount usability testing (http://www.nngroup.com/services/workshops/learnbydoing.html).
Nielsen's textbook Usability Engineering [1] is a dry read, but the classic in the field. His Designing Web Usability [2] is more readable, as is Homepage Usability: 50 Websites Deconstructed [3]; both contain much that applies to non-Web usability as well.

Alan Cooper's books [4-6] contain essential usability information as well; of them, The Inmates Are Running the Asylum [5] is an easy read and a succinct source of his ideas on usability testing.

References

  1. Nielsen J. Usability Engineering. Boston: Academic Press; 1993.
  2. Nielsen J. Designing Web Usability. Indianapolis, IN: New Riders; 2000.
  3. Nielsen J, Tahir M. Homepage Usability: 50 Websites Deconstructed. Indianapolis, IN: New Riders; 2002.
  4. Cooper A. About Face: The Essentials of User Interface Design. Foster City, CA: IDG Books; 1995.
  5. Cooper A. The Inmates Are Running the Asylum. Indianapolis, IN: Sams; 1999.
  6. Cooper A. About Face 2.0: The Essentials of Interaction Design. Indianapolis, IN: Wiley; 2003.


The Rising Force of Standards and the Emergency Physician, Or...How EDIS Products are Moving Towards Regulation and Performance Consistency

Donald Kamens, MD, FACEP, FAAEM
Todd Rothenhaus, MD, FACEP

Those of you who have read our previous columns have a general picture of what is going on. Those who are in need of a refresher can refer to the attached timeline.

Bottom line first: why should you, an ED physician with more than a casual interest in informatics, bother to read this brief article? Consider what things actually move ED physicians to take interest: patient care issues (probably); reimbursement (yes); liability (likely); shift efficiency (definitely); overcrowding (perhaps enough words already written); nursing shortage (what's new); CPOE (gag me); but standards (nah! no way!).

Okay, you made it through the first paragraph. But that was before things got really dry. Why read on? Well, let's just look at what the standards movement is about to do for (or to) us in the very near future. This is a bit of crystal-ball conjecture, but it has the consensus of a number of us working in the field; far from being distant, we believe it begins sometime in 2007 and gets into fuller swing in 2008. Here are a few things you can expect:

  • A Standardized Profile for ED Information Systems
  • Certification of EDIS Products
  • Easier Interoperability / Integration / Implementation of EDIS products with core hospital systems
  • Interoperability of EDIS products with Regional Health Information Organizations (RHIOs)
  • Reimbursement tied to use of Electronic Health Record Systems (EHR-S)
  • Use of electronically transmitted claims from EHR (EDIS) systems
  • Increasing employment of Personal Health Records (PHRs) by patients, containing key information
  • Increasing availability of PHR-to-EHR (EDIS) information transmission

How is all this going to happen? To answer that, we need to stir the acronym soup a bit. To avoid confusion (when avoidable), acronyms will be defined as used. So far we have used CPOE (Computerized Provider Order Entry), EDIS (Emergency Department Information System), EHR-S (Electronic Health Record System), and PHR (Personal Health Record).

Andrew Tanenbaum, the well-known computer scientist, once said: "The nice thing about standards is that there are so many of them to choose from." How true. How true. But Andy does not live in the US (he is a professor at the Vrije Universiteit in Amsterdam), and his reimbursement does not derive from HHS-CMS-Medicare. Indeed, it is the U.S. government, the largest payer in America, that is influencing the choice of standards, and their impact, through a central HHS commission, the American Health Information Community, or AHIC.

American Health Information Community (AHIC):
(http://www.hhs.gov/healthit/ahic.html)

AHIC has been given responsibility for a 10-year plan, begun in April 2004, for all Americans to have electronic health records. As of April 2007 we will be three years into the plan, and it is moving swiftly. It is an august body (despite the fact that it meets in seven other months of the year as well), with such members as Craig Barrett, Chairman, Intel; Scott P. Serota, President and CEO, Blue Cross Blue Shield; Julie Louise Gerberding, MD, Director, CDC; Robert C. Cresanti, Under Secretary of Commerce for Technology, U.S. Department of Commerce; and others. It is chaired by HHS Secretary Michael Leavitt.

AHIC, also called "the community," has the authority, responsibility, and funds to determine priorities, create projects, and choose from among those who bid for government grants to accomplish the priorities chosen. Examples of key current projects and underlying commissions include CCHIT (for certification of EHR products); HITSP (for establishment of specifications for interoperability); multiple partnerships (for establishing model RHIOs); and others for security and privacy. The community's philosophy is to keep its hands off the actual work; it often envisions the patient as the primary consumer, and it issues key "use cases" when the need to address a pressing national health IT issue arises. These use cases mimic, or attempt to mimic, the circumstances that would be faced by health care individuals or health care systems in frequent or worrisome situations.

The most recent use case charged by the community, and one that will have significant impact on the ED, is the AHIC Emergency Responder Use Case, which derives from the Katrina experience, from apprehension over emerging infectious diseases, and from concern about chemical, biological, radiological, nuclear, and explosive threats to the population. This use case intends to define a pathway for the exchange of electronic health information from incident through definitive care, and it promises to foster increased adoption of EHR-S in both the pre-hospital and ED domains. ACEP and members of the informatics section have participated in the development of this use case. For more information, see: http://www.hhs.gov/healthit/documents/AHICEmergencyEHRDetailedUseCase.pdf

Certification Commission for Health Information Technology (CCHIT):
(www.cchit.org)

The Certification Commission for Health Information Technology was founded in 2004 by three HIT industry associations: the American Health Information Management Association (AHIMA), the Healthcare Information and Management Systems Society (HIMSS), and the National Alliance for Health Information Technology (the Alliance). CCHIT is a voluntary, private-sector organization formed to certify HIT products. In September 2005, HHS (through AHIC) awarded CCHIT a three-year contract to develop and evaluate certification criteria and create a certification process for HIT products in three initial areas:

  • Ambulatory EHRs, for the office-based physician or provider
  • Inpatient EHRs, for hospitals and health systems
  • Network components, through which the above will interoperate and share information

The first ambulatory EHRs were certified in March 2006. CCHIT then began to consider its approach to inpatient EHRs. CCHIT also considered the scope of certification of other HIT systems and chose to view each specialty EHR-S as having one primary focus: professional specialty, care setting, or patient population. For example, an EHR-S for cardiology would be considered a product developed for a professional specialty, and an EHR system for home health care would probably be considered a product developed for a particular patient population, while it seems logical that an EDIS would best be considered a care-setting product. While CCHIT recognizes that there may be some overlap among the three areas, it is thought that this approach will promote participation by organizations, help CCHIT adopt criteria for evaluation, and open the certification door for those areas that are already somewhat mature.

Within the past three months, CCHIT determined that sufficient resources and funding are available to consider development of two or three certification pathways for these domains. ACEP and the EDIS Working Group have been working with CCHIT to promote the EDIS niche as a next step, and we feel it is highly likely EDIS will be chosen within the next year or two for certification.

Healthcare Information Technology Standards Panel (HITSP):
(http://www.ansi.org/standards_activities/standards_boards_panels/hisb/hitsp.aspx)

Mentioned above, and sounding somewhat like a spewed-out guttural utterance, HITSP stands for Healthcare Information Technology Standards Panel. Chaired by emergency physician John Halamka of Harvard, HITSP was formed to determine which standards would be most appropriate to choose from the growing output of standards development organizations (SDOs), including HL7, ASTM (the American Society for Testing and Materials), IEEE, and numerous others.

You might ask: why develop an entirely new organization to harmonize standards instead of giving the job to an existing standards developer like HL7 or ASTM? That's a good question. In part, the answer lies in human cooperative nature, something we, as emergency physicians, know only too well. Indeed, SDOs sometimes have trouble playing together, each wanting to promote its own standards above the others'. One example of this was a pivotal spat between HL7 and ASTM over the CCR (Continuity of Care Record) and its relationship to the CDA (Clinical Document Architecture), both of which were developed to facilitate the electronic transmission of clinical documents. Initially developed as a content standard for a "summary of care" document, the CCR was promoted as a potential electronic solution to a number of other transfers of care. Simultaneously, HL7 was promoting the CDA as an XML standard for the transmission of structured clinical documents between electronic health record systems. The dispute was resolved (or "harmonized," in standards parlance) by creating a CDA schema of the CCR called the CCD (Continuity of Care Document). In its wisdom, HITSP has adopted the CCD, even before full balloting! Most good standards are decided by ballot. For a discussion, see: http://www.jamia.org/cgi/content/abstract/13/3/245.
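
To give a flavor of what "a CDA schema of the CCR" means in practice, here is a minimal, illustrative XML sketch of a CCD header; the template identifier shown is the one commonly cited as the CCD root template, but treat the whole fragment as a sketch rather than a normative sample:

    <!-- Illustrative sketch only: a CCD is a CDA ClinicalDocument
         that declares the CCD template; real documents carry far more. -->
    <ClinicalDocument xmlns="urn:hl7-org:v3">
      <typeId root="2.16.840.1.113883.1.3" extension="POCD_HD000040"/>
      <templateId root="2.16.840.1.113883.10.20.1"/>
      <title>Continuity of Care Document</title>
      <!-- patient, author, and CCR-derived sections follow here -->
    </ClinicalDocument>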

HITSP's work has so far extended beyond ending disputes to the development of specifications for EHRs, biosurveillance, and Consumer Empowerment interoperability. It next plans to bite off the AHIC Emergency Responder Use Case. HITSP and CCHIT are working jointly to harmonize their specifications (HITSP) and criteria (CCHIT) so that there is a unified approach that can be grasped by EHR vendors and consumers alike.

Health Level Seven (HL7):
(www.hl7.org)

Health Level Seven is one of several American National Standards Institute (ANSI)-accredited standards developing organizations (SDOs) operating in the healthcare arena. However, Health Level Seven is not the 7th floor of a hospital, nor is it a version that comes after Health Level Six. Rather, "Level Seven" refers to the highest level of the International Organization for Standardization (ISO) communications model for Open Systems Interconnection (OSI): the application level. The application level addresses definition of the data to be exchanged, the timing of the interchange, and the communication of certain errors to the application. The seventh level supports such functions as security checks, participant identification, availability checks, exchange-mechanism negotiations and, most importantly, data-exchange structuring.
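
For reference, the seven OSI levels, from bottom to top, are:

  1. Physical
  2. Data Link
  3. Network
  4. Transport
  5. Session
  6. Presentation
  7. Application (the level at which HL7 operates)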

Historically, of course, HL7 has been equated with, and thought of only with respect to, messaging. Just bring up the acronym HL7, and any discussion beyond messaging (say, about EHRs) gets quizzical looks, as if you had somehow wandered off the Star Trek set. And indeed, when one discusses HL7 with most IT people, it is messaging that comes up: usually some discussion of HL7 Version 2.3, 2.4, or 2.5 ("2.x" is the preferred lingo, since they are largely the same) and the emerging, entirely rewritten Version 3. However, HL7 has become more than that.
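
For readers who have never seen one, a v2.x message is a set of pipe-delimited segments. The following is an invented, minimal patient-registration (ADT^A04) example with made-up facility and patient values, not output from any real system:

    MSH|^~\&|EDIS|GENERALHOSP|HIS|GENERALHOSP|200701150830||ADT^A04|MSG00001|P|2.3
    EVN|A04|200701150830
    PID|1||123456^^^GENERALHOSP^MR||DOE^JANE||19700101|F
    PV1|1|E|ED

Each line is a segment (MSH = message header, PID = patient identification, PV1 = patient visit); fields are separated by "|" and components by "^". Version 3 replaces this terse syntax with an XML-based model.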
Sometime in the last decade, when the need for standard models in healthcare informatics became apparent, HL7 took the lead and developed the Electronic Health Record Functional Model (EHR-FM). The EHR-FM, still in development, was used by CCHIT in developing its criteria for the certification of ambulatory care products.

Anyone can join HL7 and participate in the work. There are technical committees (TCs) and special interest groups (SIGs) for all tastes (Clinical Documents, Patient Care, EHR, Clinical Decision Support, and on, and on). TCs and SIGs meet by teleconference on a regular (usually weekly) basis, and in person three times a year at the HL7 working group meetings.

ED-EHR Standards Group and the HL7 Emergency Care SIG:

The Emergency Care SIG was founded two years ago to inform HL7 and other SDOs of the unique HIT requirements and workflows of emergency care. The EC-SIG co-chairs solicited input and membership from domestic and international emergency medicine specialty societies, including, but not limited to, ACEP, SAEM, AAEM, ENA, and the Canadian and Australasian emergency medicine societies. Participation was also solicited from the vendor community through invitations and presentations.

In 2005, ACEP sponsored participation in the EC-SIG in the form of four memberships and two funded awards for travel and expenses. Sufficient infrastructure was secured through an unrestricted grant from X-press Technologies to set up an intranet site and weekly teleconferences to supplement face-to-face meetings at HL7 conferences. It is important to note that participation in the EDIS Working Group does not require membership in HL7, although HL7 (and occasionally IHE) provides the principal auspices of the work we do. Members of the EDIS Functional Profile group include physicians, nurses, medical informatics experts, EDIS developers, engineers, and other representatives from the vendor community. Our principal work centers on the development of a functional profile for EDIS and a revision of Data Elements for Emergency Department Systems (DEEDS). Dan Pollock, an emergency physician at the CDC, was the leader of the group that created DEEDS, and he remains an active member of the SIG.

As with any good garage band, a number of "side projects" arose. Our WG/SIG participated in a major criticism of a federal notice of proposed rule-making (NPRM) on electronic attachments for emergency department claims. Led by SIG co-chair Kevin Coonan, the WG pointed out a number of important technical and administrative shortcomings in the proposed rule. This required some diversion of energies, and it took the group into consideration of reworking the DEEDS specification, initially released in 1997. After some discussion, the SIG decided to accept responsibility for updating DEEDS, which had not been revised since its first release. The first DEEDS revision (1.1) is expected to be released in early 2007. After that, work on a more robust DEEDS 2.0 will begin.

Informatics section members are encouraged to join the calls (Thursdays, 11 am to 1 pm ET, in theory though not always in practice) and to contact one of the co-chairs for agendas and call-in information.

[Figure: timeline of EDIS standards activities, referenced above]


This publication is designed to promote communication among emergency physicians of a basic informational nature only. While ACEP provides the support necessary for these newsletters to be produced, the content is provided by volunteers and is in no way an official ACEP communication. ACEP makes no representations as to the content of this newsletter and does not necessarily endorse the specific content or positions contained therein. ACEP does not purport to provide medical, legal, business, or any other professional guidance in this publication. If expert assistance is needed, the services of a competent professional should be sought. ACEP expressly disclaims all liability in respect to the content, positions, or actions taken or not taken based on any or all the contents of this newsletter.
