Efforts to harness the potential of predictive models in healthcare have been substantial, with a focus on applying them to improve patient outcomes. These models, designed to forecast patient health and guide clinical decision-making, have been integrated into healthcare systems and electronic health records (EHRs).
Yet, their incorporation into clinical practice can influence patient outcomes, and these changes are recorded in EHRs. This interplay between predictive models and real-world clinical data can, in turn, affect the predictive accuracy of both existing and future models. To gain insight into the dynamics of predictive model performance in healthcare, a recent study conducted simulations across three common scenarios:
- In the first scenario, predictive models were retrained after their initial implementation.
- In the second scenario, one model was implemented after another, sequentially.
- In the third scenario, two models were implemented simultaneously, and the study examined how each affected the other's predictive accuracy.
The simulations were conducted in critical care settings, accounting for varying levels of intervention effectiveness and clinician adherence. The study utilized data from admissions to the intensive care units (ICUs) of Mount Sinai Health System in New York and Beth Israel Deaconess Medical Center in Boston, comprising a dataset of 130,000 critical care admissions across both health systems.
The study evaluated the performance of these predictive models using statistical measures, including threshold-independent metrics such as the area under the curve (AUC) and threshold-dependent measures such as specificity at a fixed sensitivity. The findings of the study shed light on the complex relationship between predictive models and clinical practice:
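As a rough sketch of the two kinds of metrics mentioned above, the snippet below computes a threshold-independent AUC (via the Mann-Whitney rank statistic) and a threshold-dependent specificity at a fixed 90% sensitivity, the operating point reported in the findings. The function names and this small NumPy implementation are illustrative, not taken from the study's code.

```python
import numpy as np

def auc_score(y_true, scores):
    """Threshold-independent AUC via the rank-sum (Mann-Whitney) statistic."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def specificity_at_sensitivity(y_true, scores, target_sens=0.90):
    """Threshold-dependent metric: choose the highest threshold that reaches
    the target sensitivity, then report specificity at that threshold."""
    pos = np.sort(scores[y_true == 1])[::-1]        # positive scores, descending
    k = int(np.ceil(target_sens * len(pos)))        # positives that must be flagged
    threshold = pos[k - 1]
    neg = scores[y_true == 0]
    return (neg < threshold).mean()
```

Evaluating a model this way fixes the clinical operating point (catch 90% of deaths) and asks how many false alarms that costs, which is why the study reports specificity losses at 90% sensitivity.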
- In the first scenario, a mortality prediction model was retrained after its initial use had influenced care. At a fixed sensitivity of 90%, the retrained model exhibited a decrease in specificity ranging from 9% to 39%: retraining on data already shaped by the model's own recommendations diminished its predictive accuracy.
- In the second scenario, a mortality prediction model experienced a loss in specificity ranging from 8% to 15% when it was developed after the implementation of an acute kidney injury (AKI) prediction model. This highlights how the introduction of new models can impact the performance of existing ones.
- In the third scenario, where models for AKI and mortality prediction were simultaneously implemented, each model led to a reduction in the effective accuracy of the other by 1% to 28%. This demonstrates that the coexistence of multiple predictive models can diminish their individual predictive abilities.
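The feedback loop behind these results can be sketched with a toy simulation (not the study's actual model): a risk score drives interventions that avert some deaths among flagged patients, the EHR records the post-intervention outcomes, and specificity at a fixed 90% sensitivity drops when the same score is evaluated against those recorded outcomes. The logistic risk function, top-decile flagging, and 50% effectiveness are all assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 50_000
score = rng.normal(size=n)                  # model risk score (1-D toy)
p_death = 1 / (1 + np.exp(-(score - 1)))    # true mortality risk rises with score
died = rng.random(n) < p_death              # outcomes if the model were never used

# Deployment: clinicians act on the top decile of scores and avert half of
# those deaths (assumed 50% intervention effectiveness, full adherence).
flagged = score > np.quantile(score, 0.90)
averted = flagged & died & (rng.random(n) < 0.5)
died_observed = died & ~averted             # what the EHR now records

def spec_at_sens(y, s, target=0.90):
    """Specificity at the threshold that captures `target` sensitivity."""
    pos = np.sort(s[y])[::-1]                       # positive scores, descending
    thr = pos[int(np.ceil(target * len(pos))) - 1]  # flag >= target of positives
    return (s[~y] < thr).mean()

before = spec_at_sens(died, score)           # against intervention-free outcomes
after = spec_at_sens(died_observed, score)   # against post-deployment EHR labels
print(f"specificity at 90% sensitivity: {before:.2f} -> {after:.2f}")
```

High-scoring patients whose deaths were averted now appear in the EHR as high-scoring survivors, so the same model looks less specific; any model retrained or evaluated on those labels inherits the distortion.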
It’s important to note that the study had certain limitations. In real-world clinical practice, the effectiveness of and adherence to model-based recommendations are rarely known in advance, making it challenging to predict their impact accurately. Additionally, the simulations focused on binary classifiers for tabular ICU admissions data, which may not fully capture the complexity of real-world healthcare data.
The study’s findings suggest that there is no one-size-fits-all approach to maintaining predictive model performance in simulated ICU settings. Predictive model use in clinical practice may need to be closely monitored and recorded to ensure the continued viability and accuracy of these models.
As healthcare organizations increasingly adopt predictive models to support clinical decision-making, understanding how these models interact and affect each other’s performance is crucial for delivering high-quality patient care and improving outcomes. This research underscores the importance of ongoing evaluation and adaptation of predictive models in clinical settings to maximize their benefits and ensure the highest standard of patient care.
Reference
Akhil Vaid, Ashwin Sawant, Mayte Suarez-Farinas, et al. Implications of the Use of Artificial Intelligence Predictive Models in Health Care Settings: A Simulation Study. Ann Intern Med. [Epub 10 October 2023]. doi:10.7326/M23-0949.


