February 12, 2024

January 2024 - Artificial Intelligence to Improve Performance in Emergency Medicine (AIIPEM) Program


- [Emily] Happy 2024, everyone who's on the call. We also are recording this, as you saw coming on. So we'll be posting this to our website. We are fortunate to have one of our own section members and someone very well known in the emergency medicine and telehealth sphere presenting to us today about even more innovations now, and sort of how AI is gonna be coming in and helping us both in our in-person care as well as our telehealth care. So, Ed, I'm gonna let you introduce yourself with all the work that you've done so far and the work with EmOpti, and then also what sort of led you to the current collaboration, too. So I'm gonna hand it over to you now.

- [Ed] Okay, thanks, Emily. Pleasure to be here today. I'm Ed Barthell. I'm an emergency doc. A member of ACEP for 35 years or something like that. Practiced with a group based in the Midwest, in Wisconsin and Northern Illinois, for a lot of years, and then have been involved with informatics since the early days of the ACEP computer section, up through the current time. More recently, I've been the CEO of a company called EmOpti that is very involved with telehealth and trying to help improve patient flow in emergency departments. We've got a project going now with artificial intelligence. And with AI being such a hot topic for the whole industry, I thought I'd be able to bring you up to speed here. And happy to have time for a few questions toward the end of this discussion. We call it the AIIPEM program because it stands for Artificial Intelligence to Improve Performance in Emergency Medicine, which is too long to say, and we may come up with a better acronym at some point in the future. But we'll also be talking about it in terms of what we're calling the intelligent facesheet, which was interesting at ACEP, when we explained the facesheet to some of the real young docs, they didn't even know what a facesheet was. But for us older folks, the facesheet was the thing that they gave you on a clipboard before EMRs really became the main interface with our records, so. So we'll talk more about what that means. The agenda: I'm gonna give a little bit of context. I am gonna cover some AI basics. I know this is more of a telehealth group. We've also had a somewhat similar discussion with the ACEP technology committee. And then we'll get into more of the specifics about what the AIIPEM program's about. The picture on the right doesn't have any real significance other than it kinda looks cool and it'll give you a sense of where we are in this agenda. So you'll see that again as kind of a way of breaking from one section to the next. I like to start with a little bit of comments on just exponential thinking. You know, as human brains, we are very oriented around linear thinking, whether it's our ancestors trying to determine if they're gonna have enough time to run from one place to another to get away from the tiger, or if it's, more recently, just thinking about what we're gonna do on our schedules over the next month. We tend to think in a linear fashion. So it's sometimes difficult to really apply exponential thinking, but we have to as we go forward here because the world's data volume is increasing exponentially. And the healthcare data volume, which is a substantial component of the overall world data volume, is also increasing exponentially. I mean, the scale on the left there is zettabytes. And a zettabyte is a trillion gigabytes. RBC says it's growing at 36% a year. The IDC picture here is talking about it growing 48% a year. So if you think about it, from 2013 to 2020, if we put all the digital data in healthcare onto a stack of tablet computers, it would be 5,500 miles high in the year 2013, or about 3% of the way to the moon. By 2020, it would be 82,000 miles high, about 1/3 of the way to the moon. And it would be a whole lot bigger, then, over the next seven years, where, all of a sudden, you can go back and forth to the moon multiple times with that stack of tablets. And that's just hard for us to comprehend as humans. At the same time, technology and the capabilities of technology are increasing exponentially as well.
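
As a back-of-the-envelope check on those growth figures: a rough sketch only, where the 48% rate and the 5,500-mile and 82,000-mile stack heights come from the talk and everything else is simple arithmetic.

```python
# Rough sanity check of the "stack of tablets" figures quoted above.
# Assumes the ~48%/year growth rate (IDC) compounds from 2013 to 2020.
growth_rate = 0.48
years = 2020 - 2013          # 7 years of compounding
stack_2013_miles = 5_500     # quoted height of the stack in 2013

multiplier = (1 + growth_rate) ** years
stack_2020_miles = stack_2013_miles * multiplier

print(f"Growth multiplier over {years} years: {multiplier:.1f}x")
print(f"Projected 2020 stack height: {stack_2020_miles:,.0f} miles")
# About 15.5x and roughly 85,000 miles, which is in the same ballpark as
# the ~82,000-mile figure quoted, about a third of the distance to the
# moon (~239,000 miles).
```
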
Most of us have heard of Moore's Law, which just tells us that the number of transistors, or the number of calculations per second for a constant dollar, has been going up exponentially. You'll see the graph. The y-axis is on a logarithmic scale, and that's why this looks relatively linear, because you're actually seeing exponential growth in capabilities. And similarly, whether it's optical fiber or data storage, these are all growing at exponential rates. So the capabilities of the technology we're working with are also growing, just like the data volumes that are available are growing that way. And when we envision the future, even just a few years going forward, we tend to wanna think incrementally about what's gonna change, but things are gonna change faster and faster and faster going forward. So as we think about artificial intelligence and medicine, there's a lot of people projecting where we're gonna go. There's articles out there in the traditional medical literature that start to project how AI is gonna fundamentally change healthcare. Even yesterday in one of the newsletters, "How AI can make your next ER visit less stressful." There's lots of opinions about how AI's gonna change our healthcare environment. So much so that you think, okay, how am I gonna deal with all these exponentially increasing data volumes and exponentially more capable technology? And just remember good old Desmond Tutu saying, "There's only one way to eat an elephant, and that's a bite at a time." So this is a telehealth group. We're gonna talk more specifically now about AI and telehealth. Again, there's articles out there about the role of artificial intelligence in the telehealth domain and how these two industries are gonna become increasingly merged together. Just like AI is gonna permeate all sorts of different industries, there are a number of ideas of how that's gonna happen. I tend to try to look at it in terms of what we call the traditional care process, which, again, tends to be relatively linear, but there's stages, and we can think about how AI may impact all these various stages of our traditional care process. It can impact interactions between clinicians and patients before they end up at the hospital or the emergency department, during each of the different stages of the care process when they're in the emergency department or in the inpatient wards, and in follow-up, at different stages of follow-up and trying to keep people well and keep 'em out of these acute care facilities. There's a lot of applications, obviously, for telehealth. And then when you think about AI, each of these stages can be affected by AI going forward as well. So you'll hear about our AIIPEM project, which is very focused on the intake process in emergency departments, which is where my company, EmOpti, has had a lot of experience, just influencing traditional care by using tele-triage. And we're gonna talk about how AI could, you know, impact that process. But there's other stages where it also undoubtedly will come into play. AI-generated instructions for patients have been shown to come across as more empathetic than the typical, average doctor's discharge instructions. So in many respects, we're gonna find AI engines can help us with healthcare, and in some cases, do a better job than humans. There has been a lot of emphasis on physician productivity.
And as we think about it, again, kind of the linear thinking as well, there's some point between two and three patients an hour that is probably the right number for an average ER doctor to be evaluating and caring for patients in an emergency department. That, to me, is just incremental thinking. I think if we really wanna think more exponentially, we should be thinking about how one emergency physician's gonna service 25 hospitals at a time. There are all these reports that have been done recently on the emergency medicine workforce and how it's gonna evolve in the future. And there's a lot of hand wringing going on about that. I don't think very many of those reports are really thinking about how the combination of telehealth and AI-assisted telehealth might enable one emergency physician to service as many as 25 hospitals. So we're not talking about going from two to three patients an hour. We're talking about going from two to 20 patients an hour. Okay, so pause here for a second, and I'll go into a little bit of the basics of AI, and then you'll see how we're gonna apply it through our AIIPEM program. We begin with AI by training what's called an AI model. And you'll hear the term model used all the time by AI practitioners. They wanna create a model, train the model, modify the model. The model is built by using historical data, and lots of historical data. And generally, more historical data means you can build a better model. And then you wanna make inferences off of that model for a given purpose. And so those are the core steps in generating an AI model; you use different tools to do that. I'm just gonna try to get my list out of the way. There. And the different tools let us generate outputs. But in general, AI models are really a certain set of inputs leading to a certain set of outputs. Not that complicated. So then you'll also hear the term inference engine. So the model is built generally looking at historical data and using a lot of computational resources to build it. And you can think of it as a way of generating an elegant algorithm based on that historical data. But then that algorithm is what's used by the inference engine so that when you put new data into the inference engine, you can generate some kind of output. So it may take hours and hours of processing to generate the model on the left side. But if you're gonna use it in real time for patient care, you just need to get the algorithm that's generated by the model and put it in that inference engine so that now, when new encounter data comes in, in this case from an electronic medical record, you can generate visualized outputs for the end users. And there's a thing called an application program interface, or API, that you create that's linked to the inference engine, and that lets the visualization of the output occur in various different ways. And it can occur within the construct of an electronic medical record, or the visualized output might be an end user application that just runs independently on the web, or in other integrations with other types of end user applications. Then, when you use the inference engine and the API and the application in patient care, you're gonna generate additional information or additional data. And periodically, you use that new data that gets generated to go back and retrain the model, because retraining the model can take hours and hours of computation.
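
To make the split between offline model training and a real-time inference engine behind an API concrete, here is a minimal sketch in Python. It is illustrative only: the gradient-boosted classifier, FastAPI service, field names, and synthetic data are assumptions for the example, not the actual AIIPEM implementation.

```python
# Minimal sketch: a periodic "train the model" step plus a real-time
# "inference engine" exposed through an API. Model type, features, and
# framework choices are assumptions.
import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Batch step (run periodically, e.g. monthly): train on historical encounters.
history = pd.DataFrame({
    "chief_complaint": ["chest injury", "back pain", "chest injury", "abdominal pain"],
    "arrival_method":  ["private", "ambulance", "ambulance", "private"],
    "heart_rate":      [88, 112, 124, 95],
    "admitted":        [0, 1, 1, 0],   # historical disposition (the label)
})
features = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["chief_complaint", "arrival_method"])],
    remainder="passthrough",
)
model = Pipeline([("features", features), ("clf", GradientBoostingClassifier())])
model.fit(history.drop(columns="admitted"), history["admitted"])
joblib.dump(model, "disposition_model.joblib")   # hand the "algorithm" to the inference engine

# Real-time step: the inference engine behind an API (run with: uvicorn module:app).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
engine = joblib.load("disposition_model.joblib")

class TriageData(BaseModel):
    chief_complaint: str
    arrival_method: str
    heart_rate: int

@app.post("/facesheet/disposition")
def predict_disposition(triage: TriageData):
    row = pd.DataFrame([triage.model_dump()])
    prob = engine.predict_proba(row)[0, 1]
    return {"admission_probability": round(float(prob), 2)}
```

The point of the split is exactly what the talk describes: the expensive training happens offline on historical data, and only the resulting artifact is loaded by the lightweight service that answers in real time during care.
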
So the model creation might be something we do once a month or once every six months, but you wanna be able to have it operate in real time during care. So the left side of the screen, that refreshing or retraining of the model, is something that's done periodically, think of it as like a batch process, whereas the right side is the real-time use of the model and its capabilities. Hopefully that makes sense for folks. The retraining of the model doesn't have to be based only on the same kind of structured encounter data. Now that we have these new large language models, you can do things like ingesting entire textbooks. Or we've all heard about ChatGPT, which ingested a tremendous amount of data from across the World Wide Web, and that can train the model to be even more accurate in the algorithm that it generates that gets embedded into the inference engine. And, by the way, I have not licensed Tintinalli yet to go into our model. I'm just putting that up as an example. Okay, so now we'll get into more of the details of the actual AIIPEM program. I first have to give some credit to Leap of Faith Technologies. Frank Naeymi-Rad, who's their chairman, was the former founder and chairman of a company called Intelligent Medical Objects, and did real well with that. And now he has Leap of Faith Technologies as an informatics company. And they fund, as Frank likes to describe it, "We like to fund cool projects in healthcare." So they're helping sponsor the program, and we really appreciate that. And the challenge we were trying to react to came from John Halamka, who, as many of you know, worked in Boston for many years as a CIO and is now president of the Mayo Clinic Platform. Just a little less than a year ago at the HIMSS meeting, he said, "You know, we'd like to be able to say we programmed ChatGPT or some other model with millions of de-identified patient charts. And so we can say that the patient in front of you right now had the following treatments from other clinicians." He said, "That would be lovely." He said, "We're not there yet." And then we said, "Well, 'yet' is just a timeframe. We're exponential thinkers. We wanna get past it right away. So let's use that as our goal." So we said, "Let's create an intelligent facesheet that uses a generative AI model, and the associated inference engine will provide useful outputs to emergency clinicians at the intake process of emergency care." So there's three kinda main steps that we do. One is we create and validate that model. Then we build the API and either a web app or an integration with EMRs for deploying the inference engine. And then we deploy it in some beta sites and evaluate how well it can impact the care process. Pretty simple, one, two, three project plan. So you can envision that maybe within the Epic screen, using the clinical decision support links mechanism that is built within Epic, that it'll pop up a whole intelligent facesheet that has sections like diagnoses for patients like this and tests for patients like this, you know, things you might wanna consider. And then we thought, you know, beyond just being able to show statistically what other patients like this end up with, it's sometimes helpful to get reminders. And this is where we've also partnered with the Sullivan Group, who are really kind of the market leader for understanding risk mitigation, and knowing all the case law and cases where emergency physicians have been sued or things have gotten missed or there's been errors.
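
One way an intelligent facesheet could surface inside an EMR is as a clinical-decision-support card. The payload below is a hedged sketch only, loosely shaped like the public CDS Hooks card format; the field contents and numbers are illustrative and are not a description of the actual Epic integration.

```python
# Hypothetical facesheet payload, loosely shaped like a CDS Hooks "card"
# response. Everything here is illustrative; the real integration details
# were not described in the talk.
import json

facesheet_response = {
    "cards": [
        {
            "summary": "Intelligent facesheet: patients like this",
            "indicator": "info",
            "source": {"label": "AIIPEM inference engine (example)"},
            "detail": (
                "Likely diagnoses for 100 similar patients: rib fracture 20%, "
                "superficial injury 20%.\n"
                "Important rule-outs: pneumothorax, hemothorax (illustrative).\n"
                "Documentation tips: mechanism of injury, breath sounds, "
                "anticoagulant use (illustrative).\n"
                "Predicted disposition: 17% admitted."
            ),
        }
    ]
}

print(json.dumps(facesheet_response, indent=2))
```
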
'Cause there are important things we need to rule out that may not be in the top percentage of cases and diagnoses, but they're important. An important part of our job as ER docs is to rule out certain things. And in ruling those out, we should be sure that we're documenting certain things so that we don't end up missing things 'cause we're overtired or too hurried and our ER's overwhelmed. And so whether it's displayed within an Epic screen, embedded as a window that pops up, or in a side panel and used perhaps by a remote tele-triage doctor or PA, you could visualize this kind of data being popped up to try to just assist with our decision making. Getting back to the three steps, we're gonna build and train that model. There's a data collection and pre-processing stage that you go through as you develop and train the model. There's a validation testing component. And then the API has its own steps that we have to follow. And then there's development and deployment and integration with an EMR. And then using it in actual beta sites to prove that it actually impacts care in a positive manner. And there's different groups that we've got in the program plan that are all working with us. We're still in the early stages of building, training and validating the model, but we've made some interesting progress along the way. And this diagram, I'll even say, isn't necessarily truly accurate 'cause it suggests a waterfall development process. And really, when you use agile development, there's a lot of iterations that go on where you're testing things, deploying things, designing, building, seeing what the results are, and then rinse and repeat. So this is the core of what we're doing with the project. We're taking, as an input, chief complaint, arrival method and other types of things that you would have at the front end of an ER visit. Think of the data elements that are available at the time of triage. Vital signs and chief complaint are the big items. And then we're trying to generate these five outputs. What are the likely diagnoses for a patient like this? What are the important things we're gonna rule out? What should we consider in terms of the orders for this patient? What might be the documentation tips and things we should be sure we document so that we don't miss things and have errors and to mitigate our malpractice risk? And what's the likely disposition, and can we have that impact the care process and speed the patient flow as a result? So the first, third and fifth are pretty much statistical, based on a hundred patients like this. Now, what the model does is find the hundred patients out of 500,000 or 5 million that are most like the patient that's in front of me now, which is a non-trivial task. And then we add in, as we train the model, the risk mitigation knowledge base that comes from Sullivan Group, which tells us the important rule-outs and the documentation tips to mitigate risk. So we've already done the initial extract, and I have to thank our partner d2i, who's got one of the larger databases of emergency medicine encounter data in the country. And we've done an extract initially. We, of course, strip out all the patient identifiers so that it's all a HIPAA-compliant process. And we've got a simple data table to just start working with the data in general. You'll see our initial extract was 535,000 encounters.
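
That "hundred patients most like this one" step is essentially a nearest-neighbor search over the triage-time features. Here is a minimal sketch of the idea, assuming scikit-learn; the feature encoding, distance metric, and tiny synthetic table are assumptions, not a description of the actual AIIPEM model.

```python
# Minimal sketch of finding the k most similar historical encounters to a
# new triage presentation. Encoding and library choice are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import OneHotEncoder, StandardScaler

encounters = pd.DataFrame({
    "chief_complaint": ["back pain", "chest injury", "back pain", "chest pain"],
    "age": [34, 61, 72, 55],
    "heart_rate": [82, 118, 96, 104],
    "systolic_bp": [128, 104, 142, 136],
})

encoder = ColumnTransformer([
    ("complaint", OneHotEncoder(handle_unknown="ignore"), ["chief_complaint"]),
    ("vitals", StandardScaler(), ["age", "heart_rate", "systolic_bp"]),
])
X = encoder.fit_transform(encounters)

# Index the historical encounters; in practice this would be 500,000+ rows.
index = NearestNeighbors(n_neighbors=2).fit(X)

new_patient = pd.DataFrame([{"chief_complaint": "back pain", "age": 68,
                             "heart_rate": 99, "systolic_bp": 139}])
distances, neighbor_ids = index.kneighbors(encoder.transform(new_patient))
print(encounters.iloc[neighbor_ids[0]])   # the most similar prior encounters
```

Once the similar cohort is identified, the statistical outputs (likely diagnoses, order patterns, disposition) are just frequencies over that cohort.
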
We pulled out the age, the sex, the time, the chief complaint, the vital signs, the diagnoses, the primary diagnosis for the patient, and the disposition, whether they're admitted or not. We're just getting ready now to take our next data extract that'll bump this up to 5 million encounters. And it will also include the orders that were entered on any given patient, whether it's for tests like labs and X-rays, or medication orders. And it'll also include the full text encounter notes that are generated by a physician at the end of the visit. So we'll have much richer data to be able to do the next generation of our model. And just playing with this simple tool that we've got, you can see, for instance, that out of those 535,000 visits, if you say the patient chief complaint contained back pain, there's almost 15,000 visits of back pain. If I was gonna scroll, I've got a whole list of those. The bottom section is actually 15,000 rows that I could go down one by one and look at. I can filter it based on age group, et cetera. And, you know, I'm saying it contains back pain in the chief complaint phrase, whether it's low back pain, upper back pain, et cetera. Out of those, you know, one of the risks is we sometimes miss cases of epidural abscess. So just as an exercise, we said let's put in, out of those 535,000 visits, how many of 'em ended up with a diagnosis of epidural abscess? It actually turns out that one out of every 26,000 visits is an epidural abscess. Now, I saw about 65,000 patients in my career as a clinician. So I don't think I ever diagnosed epidural abscess, which suggests either I was really lucky or I just missed 'em. But luckily, I didn't get sued for missing 'em. It's kinda interesting, though, out of those patients that had an epidural abscess, only half of 'em presented with a chief complaint of back pain. But the types of documentation tips that we'd get from having this type of system that we're building would say, "Hey, I should really be asking my back pain patients routinely, have they had pain at multiple levels? Have they had intermittent fevers? Have they had a history of IV drug abuse as a risk factor for epidural abscess?" And if that was put into my documentation routinely, it would certainly decrease my risk if I did miss a case. So you'll hear about different types of tools that are used for building and training models, whether it's neural networks that have been around for a long time. You know, there's other things, random forests, there's just dozens of different types of tools, many of which are the more traditional tools used by data scientists who are developing these AI models. Nowadays, all the emphasis has been on these large language models. And for those that haven't really dived into this, whether it's OpenAI's ChatGPT, which has gotten all the publicity, or some of the other ones from the other big technology vendors, it's very much based on text as opposed to being based on, you know, highly cleansed tabular data. And what these large language models do is look at the frequencies with which one piece of text will follow another piece of text. And using very elegant mathematics, it creates these hyper-dimensional models that say, given that this text is one of the inputs, the likely output is this other piece of text. So it's very much text-based. You've probably heard of some of the powerful findings you can get with these types of tools.
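
The back pain and epidural abscess queries described above are straightforward dataframe slicing. A rough sketch, assuming the extract is loaded as a pandas DataFrame; the file name and column names are assumptions for illustration.

```python
# Rough sketch of the "back pain" and epidural-abscess queries described
# above, assuming the 535,000-row extract is loaded into pandas. Column
# names and file name are assumptions.
import pandas as pd

visits = pd.read_csv("ed_encounters_deidentified.csv")   # hypothetical file

# Chief complaint *contains* "back pain" (low back pain, upper back pain, ...)
back_pain = visits[visits["chief_complaint"].str.contains("back pain", case=False, na=False)]
print(f"Back pain visits: {len(back_pain):,}")            # ~15,000 in the talk

# Overall incidence of a final diagnosis of epidural abscess
abscess = visits[visits["primary_diagnosis"].str.contains("epidural abscess", case=False, na=False)]
print(f"About 1 in {len(visits) // max(len(abscess), 1):,} visits")   # ~1 in 26,000 in the talk

# Of the epidural abscess cases, how many actually presented with back pain?
presented_with_back_pain = abscess["chief_complaint"].str.contains("back pain", case=False, na=False)
print(f"{presented_with_back_pain.mean():.0%} presented with a back pain chief complaint")
```
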
You've also probably heard that there's some things called hallucinations, where, unfortunately, these tools can also make up some things, which is not what we wanna see in healthcare. We're just starting to apply some of the large language model techniques, in addition to some of the ones that we've done that are more suited for tabular data. We have an app now that's up and running. It's an early version of an app, but, you know, we can enter any of these elements that you see on the left side and see how it impacts the categories we talked about. I think I mentioned our next data extract is gonna have the order considerations, or orders, on the 5 million cases. And we're just putting in the documentation tips as well. So those two sections aren't fleshed out. But you can see that if I put in a chief complaint of chest injury that came in by private transportation and with fairly normal vital signs, you know, there's about a 20% chance of a rib fracture. There's another 20% that are superficial injuries. You know, if they come in by private transport and normal vitals, there's about a 17% admission rate on these patients. And it's interesting 'cause you don't necessarily think about it in this way, but I found that just going from private transportation to an ambulance arrival, now 70% of them are gonna be admitted. So just that change in how they arrive is gonna impact whether or not we should be calling upstairs for a bed right away. And if they're tachycardic and come in by ambulance (I've got a typo on my left side there; you'll see the arrival method in the table is actually ambulance), with the tachycardia there, now we're up to an 86% admission rate that I can predict right at the time of triage.
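
Those admission numbers in the chest-injury example are conditional frequencies over the extract. A hedged sketch of how they could be computed, again assuming the hypothetical pandas table and column names from the earlier sketch; the figures in the comments just echo the ones quoted in the talk.

```python
# Sketch: admission rate conditioned on triage-time features, mirroring the
# chest-injury example (private arrival ~17%, ambulance ~70%, ambulance plus
# tachycardia ~86% in the talk). Column names are illustrative.
def admission_rate(visits, chief_complaint, arrival_method, min_heart_rate=None):
    subset = visits[
        visits["chief_complaint"].str.contains(chief_complaint, case=False, na=False)
        & (visits["arrival_method"] == arrival_method)
    ]
    if min_heart_rate is not None:
        subset = subset[subset["heart_rate"] >= min_heart_rate]
    return subset["admitted"].mean()   # fraction of matching encounters admitted

# Example usage against the hypothetical visits DataFrame from above:
# admission_rate(visits, "chest injury", "private")          # roughly 0.17
# admission_rate(visits, "chest injury", "ambulance")        # roughly 0.70
# admission_rate(visits, "chest injury", "ambulance", 100)   # roughly 0.86
```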

- [Judd] So, Ed, I have a question for you. This is Judd.

- Hang on. I've got two more slides, and then we'll take questions, okay? Thank you. I just wanna mention, we wanna roll this out to beta sites. We're targeting spring of this year for beta sites 'cause we totally believe that we have to put this into clinical environments and see what kinda impact it has. We have an advisory board already together that's been looking at this process, starting on the validation of the model. And we believe strongly that a model like this should be an advisory to clinicians making the final decisions. And we have some milestones coming up. We're at the HIMSS meeting in Orlando in March 2024. We'll be demonstrating some of the early versions of this. And we're open to other people joining the advisory board or telling us, "Hey, we might wanna be considered for an early adopter program or a beta site." And so I'm happy to talk to anybody from this group about that possibility. Okay. Judd, fire away.

- [Judd] So I was going to ask, so I'm trying to look at AI, which is I think confusing to all of us, in how we're gonna use it and go forward with it and make things work. And I was having a conversation last week with a CMO of one of the top two or three insurance companies in the country, who, talking on a slightly different topic, but really talking about RPM at that point, said, "You need to prove to me before we pay for this that it basically could get through the FDA like a pharmaceutical could get through the FDA. That it actually makes a difference in patient outcomes rather than it's cool, it's shiny and everybody thinks it's neat." And although that conversation was about RPM, I think in the framework of health systems not making money, no one really coming to a health system to give us stuff for free forever, all the good folks like yourself, if you have a great AI product, you're ultimately gonna wanna sell it to a health system that has no extra money. And so the question is what is really gonna be value added in terms of hard ROI? So I can easily believe like doctors will be happy, it'll make their life better, they could see more patients. But my question is, actually, how are you gonna get there? And then my first question is this: the examples you showed for triage, saying they're gonna be admitted. I'll challenge your AI program. Me and two other doctors, you could pick 'em, and I bet we could do within a rounding error just as well looking at the information on a triage note. So how are you gonna improve over the excellent capabilities of all of our colleagues, the people on this phone, rather than just have a computer program that could do it, as compared to one of us glancing at it? So that's my provocative question. You know you should expect nothing less, Ed, so thank you.

- [Ed] That question's covered on my last slide here, which I had to go through real fast 'cause I knew it was coming. For this project, we're trying not to have to rely on the productivity benefits that I think will occur, and instead come up with a value equation that creates that ROI upfront. The average cost for mitigating risk, in other words, for paying for premiums for a malpractice program or for malpractice insurance on an emergency medicine department that is paid by a health system, typically self-insured through their own captive or some variation, ranges in the eight to $10 per-visit range. And Sullivan Group has all this data. It varies by state 'cause different states have different risks and different regulatory limits, et cetera. But think of that as being eight to 10 bucks per visit. The goal is to cut that in half, and that generates an extra $5 per visit in savings through reduced malpractice risk. And we're targeting trying to deliver this for a dollar or less per visit. So that would get you to a 5X return on investment for the health system. So that's the thing we're looking at right now, Judd. And there's a lot of validation that has to go around that, and it may vary for different states and different individual facilities.
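
The value equation here is simple arithmetic. A tiny sketch of it with the per-visit figures quoted in the talk; the exact numbers vary by state and facility, and the $9 midpoint is an assumption.

```python
# Back-of-envelope ROI math from the talk. All dollar figures are per ED visit.
current_risk_cost = 9.00                     # roughly $8 to $10 per visit on malpractice risk
target_risk_cost = current_risk_cost / 2     # goal: cut that roughly in half
tool_cost = 1.00                             # target price of the tool per visit

savings = current_risk_cost - target_risk_cost
print(f"Savings per visit: ${savings:.2f}")                   # about $4.50 to $5.00
print(f"Return on investment: {savings / tool_cost:.1f}x")    # about 5x
```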

- [Judd] So allow me to push back on that in a friendly manner. Let's say I pay my $8 a visit for malpractice today. I don't think I can go to my malpractice carrier today and say, "Hey, I got this cool new program. Will ya cut me a discount?" Nor do I believe I could go there next year or maybe the year after, depending on the statute of limitations and the time between when the malpractice occurs and when the settlements or verdicts come in. It's, at best, a several-year process to show it makes a difference. And then we did this analysis with respect to virtual nursing programs, went back and looked at three years of retrospective data. Again, an entirely different thing, but asked how much money could we save on malpractice? And the answer was basically nothing because a lot of our enterprise-wide malpractice stems from procedures. And all the AI in the world isn't gonna make up for a surgeon leaving a lap pad in the belly or operating on the wrong leg or the ER doc doing a procedure poorly and having complications. So although I don't disagree that there's likely something there, I guess I'm just gonna throw out, you know, a little skepticism that I could walk into my C-suite and say, "I'm gonna save money tomorrow, so let me buy this today," because I think, even if everything you say is true, it may be several years 'til you can see a convincing benefit on that.

- [Ed] So we'll have to set up a side discussion, Judd, with Brant Roth of Sullivan Group, who was just in the Cayman Islands, where all these captives are based, and they have a big annual conference. And he was talking to a number of the administrators of these captive programs who are starved for things that will decrease risk for their clients and are willing to, upfront, if you have a good program, they're willing to cut your rates. So we can take it offline, but I think he's the one that's more of an expert even than me on this particular potential return on investment.

- [Judd] And I don't pretend to be an expert, so, you know, I hope you're right. That would be terrific.

- [Ed] The productivity things, I think, again, just as you suggest, we're gonna have to prove it. Now, we've proven with tele-triage that despite people saying it's impossible that a remote tele-triage person can do 20 visits in an hour, we've proven that can happen. We've proven that the orders are no different than if you were there onsite. It's just faster because you don't get distracted like you do when you're onsite. So, you know, similarly, I think we're gonna find that some of these types of tools can help us with patient flow and productivity of physicians. But I think the short-term answer to your return on investment that we're looking at with this program is all based on risk mitigation. Other questions?

- [Emily] Yeah, Ed, thanks for this presentation. And for sort of the group that's here, sort of early-ish adopters, to see what that next thing on the horizon is, is really, I think, helpful for us. And one question that came in the chat from Satah is sort of about perpetuating potential biases, and, I mean, I know you have certain data sets, and there are certain data sets that maybe, I think I remember when you presented this before, a lot of it's more from the Northeast or something, so it may not be as generalizable. So how are you guys trying to mitigate bias in your system?

- Great question. The data scientists wring their hands about this issue all the time. And the source of the data can certainly influence the algorithm, as I call it, that gets generated by the model. And so if all the data's coming from the Northeast, then your algorithm is gonna be more applicable to the Northeast. It's not only a geographic issue, there's a time issue, too. If you think about it, if we just use data from the last few years, it's gonna be biased, in some respects, by all the influences from the global pandemic. Whereas as the pandemic goes away, those percentages and the results are gonna be shifting. I think what you need to do is just acknowledge that the model needs to be retrained periodically. We anticipate if you roll out a program like this, you wanna retrain that model with data from your own local hospital over time. And the idea is to have that local data influence the model and the outputs over time. We're still in the early days of sorting this out. I don't have all the answers to that, but it's a very reasonable question and it's something that we still need to do more work around.

- [Emily] As a follow-up question to that, too, I know when I was listening to, one of my podcasts that I've liked lately was "How to Citizen," and it was talking about Twilio, who was putting out there the idea of a nutrition facts label, just like we have for when we go to the cereal aisle and see what's actually in our cereal, the same for AI. And so I'm trying to imagine that, as an emergency medicine physician, whether I'm doing telehealth or otherwise, and now being presented with this intelligent facesheet. And so if you were in any of our other colleagues' shoes, how would we, I mean, I know we're still early on this, but sort of what are some of those questions we should be asking before we start doing this? One of those is sort of what data set it was trained on, but like are there other things we should be trying to start putting in our minds about how we should intelligently consume this type of AI-generated data?

- [Ed] Again, I think clinicians need to be responsible for their own decisions, and you have all sorts of different inputs during the course of making your decisions, from patients, from family members, from EMS providers and from your own historical experiences. It actually relates back to one of the comments Judd made about getting FDA approval for this. I mean, we don't need FDA approval to put a textbook, the Tintinalli textbook, next to us at the triage desk and say, "I'm gonna look something up 'cause I wanted to check on this." And if this is just a faster, better way to get good information about a given case and how we might wanna approach that case, it's just one of the many inputs that we consider as physicians.

- [Emily] Sounds similar to, Judd, you saying it doesn't matter where the care is happening: if it's up on the fifth floor in the cardiology clinic or over telehealth, it's still care. So this is still just information.

- [Judd] And I agree exactly with what Ed said. You know, compared to having Tintinalli or UpToDate. And when we've discussed this at Jefferson, I don't know which way we'll go in the end. I've been a strong advocate that we don't need to disclose if, you know, if AI is writing a patient portal message, we don't need to disclose it was written by AI, provided the doctor's reviewing it and editing it. 'Cause the doctor doesn't say, "I looked online at UpToDate or Google or whatever." The doctor has primary responsibility for it. So the difference, to me, is, you know, AI's a little more of a black box. In fact, it's a lot more of a black box. So as much as people love it, adoption may or may not be slower when that happens. Certainly, our institution will be more afraid of it being, you know, unvetted and the hallucinations trickling back to the patient. But to me, the biggest difference is gonna be cost, because compared to the textbook, Tintinalli next to me at the desk, or UpToDate on the computer (not that UpToDate is cheap if you have individual subscriptions), AI will actually end up being more expensive. It will end up being-

- [Ed] Yeah, I'll challenge that too, Judd. Think exponentially and not linearly.

- [Judd] So honestly, if you-

- [Ed] Short term, probably, but not in the long term.

- Right. And so then adoption is easier, right? Adoption of cheap stuff is faster than adoption of expensive stuff. So hopefully we'll get there. I'm totally supportive.

- [Ed] Yeah, and, you know, as you think about having Tintinalli's textbook there, I mean, there are assertions in that textbook that we just accept because it's coming from an expert. And traditionally, a lot of the things we accept in medicine, we accept because we're taught them by, quote, "experts." The AI stuff is based on data. And, you know, geez, you may be worried about pulmonary embolus in this patient, but the data's gonna show us there's a one in 20,000, you know, chance that this is a pulmonary embolus. Do they really need that, you know, CT angiography or not? So there's gonna be a lotta circumstances where I think the real data is gonna influence us in ways that are different from how we get influenced by expert opinions.

- [Emily] Mike Baker, you have your hand up for a question there.

- [Mike] Yeah, no, I love the work being done with this, in this space, and trying to see what we can do to incorporate it in emergency medicine and, you know, improve the care that we deliver, and also help with the efficiency of the care we deliver. I'm wondering if there's any limitations to what you're doing to allow the system to perhaps even, you know, pen orders for us. So instead of giving us the list of here's all the common diagnoses that you should be thinking about, have it put out a list of the common orders that we should be ordering, you know, the recommended workup for that patient. And then if we could, you know, incorporate that into our EHRs somehow, someday, we could then just, you know, approve or disapprove the recommendations of the AI, very much like we do with residents and PAs and nurse practitioners right now.

- [Ed] Thanks for that question, too. I think it probably wasn't clear, and I know I went through these slides really fast. The order considerations category here, where I say TBD, that will be a printout of what other doctors have ordered on a hundred patients like this, and the percentages. So 63% of 'em got a chest X-ray, 24% of 'em got a chest CT. You know, 50% of 'em got a urine dip for blood. All those kinda things. We just haven't done that yet because we haven't gotten the inputs yet to build the model around that lets us predict the orders. I certainly have envisioned that then there should be an approval step because, again, the doctor needs to make the final decision, but you could easily see the next step being recommended orders for this patient. Do you agree? Do you not? Do you wanna edit 'em? Approve, click, and then, boom, it auto-enters into the EMR. That would be a really cool thing to have. We're just not there.
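
The order considerations section described here is essentially order frequencies across the similar-patient cohort. A small sketch of that step, reusing the hypothetical neighbor indices from the earlier nearest-neighbor example; the orders table and its columns are assumptions.

```python
# Sketch: share of the most-similar encounters that received each order,
# which is what the "order considerations" section would display.
# `neighbor_encounter_ids` would come from the earlier NearestNeighbors
# sketch; `orders` is a hypothetical long-format table of
# (encounter_id, order_name) rows.
import pandas as pd

def order_considerations(orders: pd.DataFrame, neighbor_encounter_ids) -> pd.Series:
    similar = orders[orders["encounter_id"].isin(neighbor_encounter_ids)]
    n = len(set(neighbor_encounter_ids))
    # For each order, the fraction of similar encounters where it was placed.
    return (similar.groupby("order_name")["encounter_id"].nunique() / n).sort_values(ascending=False)

# Illustrative output, echoing the percentages mentioned in the talk:
#   chest x-ray           0.63
#   urine dip for blood   0.50
#   chest CT              0.24
```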

- [Mike] Yeah, you know, and that would also save you on those mental mistakes that sometimes you make, where you just happen to forget to assign the order for the-

- [Ed] Forgot to order the pregnancy test, you know?

- Right.

- Yep. I'm tired, it's the middle of the night, I've been super busy. That's where we make mistakes. And it's not that you didn't know better, it's just you needed that little nudge.

- [Mike] Thank you.

- [Emily] Ed, I have another question for you. With the idea of thinking exponentially, it sounds like the data that you've been using has been emergency department data, but if we think outside of the emergency department, thinking about maybe the inpatient side. Are there certain things that are ordered almost every single time upstairs on the hundred patients you have with back pain, something that we could have done earlier and should have done earlier, or maybe not? Are you guys looking anywhere outside of the emergency department, outside the box of the emergency department?

- [Ed] Great question, and I think it's a great idea. It's just, it's not been part of this project yet. And, again, we're still eating the elephant one bite at a time.

- Other questions?

- That's a natural next step kinda thing.

- [Emily] It's a lotta bites. A lotta bites beforehand, though, on the elephant, though.

- [Ed] Yeah, but we actually have, our partners at d2i actually have access to an awful lot of inpatient data as well. We're just not, we're not taking that bite yet.

- [Emily] Other questions? We're all used to thinking through things in ways that are a little bit more abstract than our colleagues thinking about it in the physical emergency department. So what are some other things that people, comments or thoughts or questions?

- [Ed] I just wanna mention one more time, and we've got a number of really talented people already on our advisory board and starting to contact us about potentially being beta sites, but there's been no decisions made around that. So if this is something you're interested in getting involved with, please reach back to me. I think my email address is on the invite, so you can get it there. If not, Emily can always get it for ya, too.

- [Emily] Ed, can you give a little bit of an idea of what being a beta site would actually entail, so people understand what that might mean?

- [Ed] It would first entail working with us to determine the answer to that question, but the general schedule we're targeting is having early versions of this thing being able to be deployed in the spring timeframe. And then probably a three-month kinda rollout with a formal evaluation of the impact, you know, over spring to summer timeframe.

- [Emily] And so beta site doesn't mean, 'cause I think one thing in my mind is this doesn't mean that we're signing our chief information officer up for sharing all of our hospital data, but you would need to link into our EHR to be able to individualize each of the patients.

- Correct. Correct, just to trigger the inference engine for any given patient, we would wanna automate that process of linking to the EMR. And that's generally gonna be done by a FHIR interface, which most EMRs support now. So, you know, we've already worked in the sandboxes for both Cerner and Epic for being able to do a FHIR extract, 'cause you're really only having to extract the chief complaint and the vital signs upfront.
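
For concreteness, here is a hedged sketch of what pulling just the chief complaint and vital signs over FHIR might look like. It uses plain REST calls against a generic FHIR R4 endpoint; the base URL, token handling, and the exact resources a given Epic or Cerner site exposes (for example, Encounter.reasonCode versus a separate Condition resource) are assumptions that vary by EMR.

```python
# Hedged sketch: pulling chief complaint and vital signs for one encounter
# from a generic FHIR R4 server. Base URL, auth, and resource shapes are
# assumptions; real Epic/Cerner integrations differ in the details.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/R4"     # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/fhir+json"}

def triage_snapshot(patient_id: str, encounter_id: str) -> dict:
    # Chief complaint is often carried on the Encounter's reasonCode.
    enc = requests.get(f"{FHIR_BASE}/Encounter/{encounter_id}", headers=HEADERS).json()
    chief_complaint = (enc.get("reasonCode") or [{}])[0].get("text", "unknown")

    # Vital signs are Observations in the standard "vital-signs" category.
    obs = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "vital-signs", "_count": 20},
        headers=HEADERS,
    ).json()
    vitals = {}
    for entry in obs.get("entry", []):
        resource = entry["resource"]
        name = resource.get("code", {}).get("text", "unknown")
        vitals[name] = resource.get("valueQuantity", {}).get("value")

    return {"chief_complaint": chief_complaint, "vital_signs": vitals}
```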

- [Emily] Awesome. Oh, we have another question popping up here. Issues with data cleaning. I'm sure the data's completely clean, you don't have to worry about it, right? And then, what process are you using to clean and validate the data, that data that surely must not be clean?

- [Ed] So this is really the difference between some of the more traditional approaches to artificial intelligence and the large language model approaches: with the more traditional approaches, there's lots of data cleansing and aggregation work, et cetera. You know, do you wanna have "abdominal cramping" as the free text phrase in the chief complaint line be considered the same as "abdominal pain"? Or should those be differentiated somehow? The large language models really streamline that and make it a whole lot easier, so that things like "fell out of a tree and hurt their arm," typed out as a chief complaint, can be mapped very quickly into "arm injury," which then is gonna drive your subsequent actions, which is what we all do in our brains all the time. It's just there hasn't been an easy way to take those text-based chief complaints that have all sorts of typos and other kinda weird character mismatches and get 'em mapped into a systematic way of interpreting them. The large language models make that step a whole lot easier.
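
As an illustration of that mapping step, here is a minimal sketch that asks a large language model to normalize a free-text chief complaint into one of a small set of standard categories. The OpenAI client and model name are assumptions used for the example; any LLM with a chat endpoint would do, and the category list is made up.

```python
# Minimal sketch: using an LLM to map messy free-text chief complaints
# ("fell out of a tree and hurt their arm") onto a small standard list.
# Client library, model name, and category list are assumptions.
from openai import OpenAI

CATEGORIES = ["arm injury", "back pain", "chest pain", "chest injury", "abdominal pain", "other"]
client = OpenAI()  # expects OPENAI_API_KEY in the environment

def normalize_chief_complaint(raw_text: str) -> str:
    prompt = (
        "Map this emergency department chief complaint to exactly one of the "
        f"following categories: {', '.join(CATEGORIES)}.\n"
        f"Chief complaint: {raw_text!r}\n"
        "Answer with the category only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # assumption; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer if answer in CATEGORIES else "other"

# normalize_chief_complaint("fell out of a tree and hurt their arm")  -> "arm injury"
```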

- [Steve] Hey, my name is Steve Hardy. Nice to meet you. That was a great talk. I got into this field 10 years ago 'cause I used to, you know, use Netflix and think, you know, why can't the EMR suggest my order sets just like Netflix suggests my movies? So it sounds like it's getting closer. What do you think about the different institutions who come on as beta sites? Like, how they're gonna handle, you know, kinda the ethical ramifications, the IRB approval process, like kinda the red tape around that.

- [Ed] Yeah, and I think it's gonna vary depending on the institutions. Some are a little more lax about that stuff than others. It's generally gonna be an IRB process where, you know, the pitch that I would make to an IRB is gonna be, "Hey, I can put Tintinalli's on the desk next to my triage desk. This is just another tool that's used the same way. And hopefully you can get expedited review by your IRB accordingly." And especially because all the decision-making remains in the hands of the physicians. We're not taking that away from them in any way. We're just trying to add an additional educational tool.

- [Steve] And I'm curious, too, just about that earlier question about racial disparity data and, you know, the concerns that surround, like, generative AI and how, you know, certain machine learning models can have, like, more systematic biases. And I know that's like a really hot topic at our institution right now, even outside of artificial intelligence. So, more thoughts about that? Like, did you guys specifically choose not to take racial data?

- [Ed] Yeah, we didn't even wanna have that be a factor as one of the inputs. And our source data is millions of records over lots of different emergency departments, and those emergency departments don't necessarily do things, or they're not supposed to do things, differently based on race. So you really hope that the whole modeling exercise and the volume of data you're extracting from will get us past those kinda biases. Some of that needs to be analyzed, though. We're gonna learn a lot from this project.

- [Steve] Yeah, it'd be interesting. I mean, I could understand the strategic standpoint from not using it on the front end, you know, but from a kinda cleansing, testing, validation standpoint for retraining, it seems like that would be something, you know, to consider.

- [Ed] Mm-hm. I mean, analogous to that, we do get gender as part of our inputs. So, you know, are there male/female gender biases? It'll be interesting to see on that. Some of those biases should occur, right? I should be biased that pregnancy is more likely in females, but-

- [Steve] Yeah, and sometimes our EMR isn't even accurate with that, you know? I mean, in this transgender world, I mean, it's-

- [Ed] Or there's a typo on the input or something.

- [Steve] Yeah, like male to female transgenders that are, you know, halfway through their process. And, you know, their biologic sex versus their gender, and, yeah, it's a lot.

- [Emily] I'm cognizant of the time. We're closing in on the hour. Let's see, I'm trying to look here. Oh, Kathy asked, well, this is the last question, and then we're gonna have to close for the day. But can you account for bouncebacks or final admission diagnoses, since we don't always get it right on the first ED visit? Yes. Ed, do you wanna respond to bouncebacks or that?

- [Ed] We have an ability, through a code that's added, to identify patients that have more than one visit in the data model. We haven't made that a priority yet, but we've talked about that being something we'll do down the road, is actually use this to try to get patterns of bouncebacks. It's just not part of the initial effort, so.

- All right, Ed.

- It's a hash code that will let us do that over time.
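
A common way to do that kind of de-identified revisit linkage is a salted one-way hash of the patient identifier. A minimal sketch of the idea only; the salt handling and field choices are assumptions, not a description of d2i's actual method.

```python
# Sketch of de-identified revisit linkage: a salted one-way hash replaces the
# patient identifier, so repeat visits by the same patient share a code
# without the extract ever containing the identifier itself. Illustrative only.
import hashlib

SALT = b"site-specific-secret"   # kept by the data holder, never shipped with the extract

def revisit_code(mrn: str) -> str:
    return hashlib.sha256(SALT + mrn.encode("utf-8")).hexdigest()[:16]

# Two visits by the same patient map to the same code:
# revisit_code("MRN-0012345") == revisit_code("MRN-0012345")  -> True
```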

- [Emily] Awesome. Ed, thank you so much for presenting this. I think this was really great for everybody to wrap their heads around this a little bit more than we already may have. And-

- It didn't come across.

- I'm sure people-

- I'm optimistic.

- What did you say?

- If it didn't come across, I'm optimistic about this in the future. A lot of people are dreading the idea that AI is gonna take over the world, but I think it could be really cool.

- [Emily] Well, everybody was worried that we were gonna try to do all of our patient interactions over video, and somehow never be able to take care of a patient with empathy again, so. So thank you for what you're doing. Thank you for presenting it to us, and in a way that, I think, was quite understandable, so thank you. That was helpful. And I'm sure people will be reaching out to you about the beta sites. And really be curious to hear updates later on, when this is further along in the whole process, so. But thank you, everybody, for your questions, too. Sorry, Ed, I'll let you have the last word here before we close out for the day.

- [Ed] Oh, I just wanna say thanks for everybody for coming.

- [Emily] All right, everybody, enjoy the rest of your January. We will see you back here in February. One other thing to keep in mind, start brainstorming if there's things that, as a section, we wanna think about for educating the more general ACEP membership. There might be an opportunity to do some more education from our section, potentially, at ACEP Scientific Assembly. So start brainstorming. This is one of the topics, or others. And stay healthy, everybody. Stay outta the storm's way if you're in the middle of the rain, snow, on this whole like 1/3 of the country here. So take care, everybody.
