Study Casts Doubt on Effectiveness of AI in Predicting Sepsis Risk

A new University of Michigan study found that proprietary artificial intelligence software marketed as an early warning system for sepsis cannot reliably distinguish between high- and low-risk patients before they receive treatment. The study appears in NEJM AI. 

The tool, called the Epic Sepsis Model, is a component of Epic’s electronic medical record software, which, according to a CEO statement reported by the Wisconsin State Journal, covers 2.5 percent of patients worldwide and 54 percent of patients in the United States. The model automatically generates sepsis risk estimates for hospitalized patients every 20 minutes, with the aim of helping clinicians identify patients who may have sepsis before their condition worsens.  

Because sepsis presents with so many nebulous symptoms, it can be difficult to diagnose patients with infections and to determine which ones can be discharged home on antibiotics and which may need a longer stay in the intensive care unit. “We still miss a lot of patients with sepsis,” said Tom Valley, an associate professor of pulmonary and critical care medicine, an ICU clinician, and a co-author of the study.  

Sepsis causes about a third of in-hospital deaths in the United States, and patient survival depends on prompt treatment. AI predictions do not yet appear to extract more insight from patient data than clinicians do, but the hope is that they will eventually play a significant role in doing so.  
 
“We believe that certain medical information used by the Epic Sepsis Model may inadvertently convey a clinician’s suspicion that a patient is suffering from sepsis,” said Jenna Wiens, the study’s corresponding author and an associate professor of computer science and engineering.  

For example, blood cultures are not ordered and antibiotics are not administered until a patient begins to show signs of sepsis. Such information might let an AI flag sepsis risk with high accuracy, but it may enter the medical record too late for the prediction to help doctors start treatment any sooner.  

In assessing the Epic Sepsis Model’s performance for 77,000 hospitalized adults at University of Michigan Health, the clinical branch of Michigan Medicine, the researchers found a mismatch between when data becomes available to the AI and when it is most useful to clinicians.  

Because the AI had already estimated each patient’s risk of developing sepsis as part of the hospital’s routine operations, the researchers only needed to extract the data and carry out their analysis. More than 5% of the patients developed sepsis.  

To assess the AI’s performance, the team calculated how often it assigned a higher risk score to a patient who was ultimately diagnosed with sepsis than to one who was not.  

When predictions from every point of a patient’s hospital stay were included, the AI correctly ranked the patient who developed sepsis higher 87% of the time. When only data collected before patients met sepsis criteria were used, it was correct just 62% of the time. Perhaps most strikingly, when predictions were limited to the period before a blood culture was ordered, the algorithm gave higher risk scores to patients who developed sepsis only 53% of the time, barely better than chance.  
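For readers curious how such a ranking-based evaluation works in practice, the sketch below computes the same kind of statistic on toy data: the probability that a patient who developed sepsis receives a higher risk score than one who did not (the area under the ROC curve), first using all predictions and then only those made before a per-patient cutoff such as a blood-culture order. The column names, cutoff choice, and data are illustrative assumptions, not the study’s actual code or data.

```python
# Minimal sketch of a ranking-based (AUROC-style) evaluation of a sepsis
# risk model, once on all predictions and once restricted to predictions
# made before a per-patient clinical cutoff (e.g., blood-culture order).
# NOTE: the column names, cutoff choice, and data are illustrative
# assumptions, not the study's actual code or data.
from typing import Optional

import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical prediction log: one row per risk score, several per patient.
preds = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3],
    "pred_time": pd.to_datetime([
        "2024-01-01 08:00", "2024-01-01 12:00",
        "2024-01-01 09:00", "2024-01-01 13:00",
        "2024-01-01 10:00", "2024-01-01 14:00"]),
    "risk_score": [0.10, 0.80, 0.10, 0.15, 0.30, 0.70],
})

# Hypothetical per-patient outcomes and blood-culture order times
# (NaT means no culture was ever ordered for that patient).
patients = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "sepsis": [1, 0, 1],
    "culture_time": pd.to_datetime(
        ["2024-01-01 11:00", None, "2024-01-01 13:30"]),
}).set_index("patient_id")


def auroc(cutoff_col: Optional[str] = None) -> float:
    """Probability that a patient who developed sepsis gets a higher
    (maximum) score than one who did not, optionally using only
    predictions made before each patient's cutoff time."""
    df = preds.join(patients, on="patient_id")
    if cutoff_col is not None:
        # Keep predictions made before the cutoff; patients with no
        # cutoff (NaT) keep all of their predictions.
        df = df[df[cutoff_col].isna() | (df["pred_time"] < df[cutoff_col])]
    per_patient = df.groupby("patient_id").agg(
        score=("risk_score", "max"), label=("sepsis", "first"))
    return roc_auc_score(per_patient["label"], per_patient["score"])


print("All predictions:     ", auroc())                # 1.0 on this toy data
print("Before blood culture:", auroc("culture_time"))  # 0.5 on this toy data
```

On the toy data, the score drops from 1.0 to 0.5 once predictions made after the blood-culture order are excluded, mirroring the kind of degradation the study describes when post-suspicion data are removed.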

The results suggest that the model was keying on whether patients had already received treatments or diagnostic tests when it made its predictions. By that point, doctors likely already suspect sepsis in those patients, so the AI’s forecasts add little.  

Donna Tjandra, a doctoral student in computer science and engineering and a co-author of the paper, stated, “When determining if the model is helpful to clinicians, we need to consider when in the clinical workflow the model is being evaluated.” “Evaluating the model with data collected after the clinician has already suspected sepsis onset can make the model’s performance appear strong, but this does not align with what would aid clinicians in practice.” 

Journal Reference  

Fahad Kamran et al., Evaluation of Sepsis Prediction Models before Onset of Treatment, NEJM AI (2024). DOI: 10.1056/AIoa2300032 
