Artificial intelligence (AI) is rapidly transforming medicine by supporting tasks such as generating clinical notes, detecting sepsis, answering medical queries, screening for cancer, analyzing clinical images, and predicting mortality or readmission. AI can also expand access to healthcare, particularly in underserved areas. For example, a Food and Drug Administration (FDA)-approved AI system can autonomously diagnose diabetic retinopathy, helping patients who may not have easy access to specialists. However, realizing the potential of AI requires workflow integration, clinical adoption, robust system development, proper governance, and high-quality data. Patient trust is critical to the successful implementation of AI because it shapes shared decision-making, acceptance, and satisfaction. Previous studies attribute patients' low baseline trust in medical AI mainly to concerns about oversight, bias, and performance. Bracic et al. examine how clinician oversight, transparency, and governance of AI training and performance data may affect acceptance of AI and patient trust in healthcare systems.
A preregistered conjoint survey of 3,000 English-speaking US respondents (mean age = 48±16 years; 54.8% female, 44.5% male; 61.9% White) was conducted through Verasight from December 11, 2024, to January 1, 2025. Participants were recruited using both probability and non-probability techniques benchmarked to the American Community Survey. Respondents evaluated hypothetical AI-assisted diagnosis scenarios involving a skin rash, comparing paired visits that varied on six randomized attributes, including governance (local hospital certification, Mayo Clinic certification, or US FDA approval), training data quality, AI performance (relative to specialists or general practitioners), and clinician presence. Participants selected their preferred visit in each pair, briefly explained their choice, and rated their trust in each diagnosis. Each respondent completed six tasks, evaluating 12 visits and contributing 12 observations (36,000 observations in total). Statistical analysis used linear regression to estimate the association of each attribute with patient trust and visit choice.
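In conjoint designs like this one, the linear regression estimates, for each randomized attribute level, the average change in the probability that a visit is chosen, relative to a reference level. The snippet below is a minimal sketch of that idea on simulated data, using a dummy-coded linear probability model; the attribute names, effect sizes, and noise level are invented for illustration and are not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 36_000  # illustrative; matches the study's total observation count

# Randomized attributes, as in a conjoint design (simulated).
clinician_present = rng.integers(0, 2, n)  # 1 = clinician oversees the AI
performance = rng.integers(0, 3, n)        # 0 = GP-level, 1 = specialist-level, 2 = better

# Invented "true" preferences used only to generate simulated choices.
utility = (0.18 * clinician_present
           + 0.12 * (performance == 1)
           + 0.16 * (performance == 2))
chosen = (utility + rng.normal(0.0, 0.5, n) > 0.25).astype(float)

# Dummy-code attribute levels against a reference profile and fit a
# linear probability model; each coefficient approximates that level's
# average effect on the probability of choosing the visit.
X = np.column_stack([
    np.ones(n),            # intercept (reference: GP-level AI, no clinician)
    clinician_present,     # effect of clinician presence
    performance == 1,      # specialist-level vs GP-level performance
    performance == 2,      # better-than-specialist vs GP-level performance
]).astype(float)
coef, *_ = np.linalg.lstsq(X, chosen, rcond=None)
print(dict(zip(["baseline", "clinician", "spec_equal", "spec_better"],
               coef.round(3))))
```

With enough randomized profiles, the estimated coefficients recover the average choice-probability shifts implied by the simulated preferences, which is what the study's reported effects (e.g., 0.184 for clinician oversight) represent.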
Of the 3,000 participants, 33% earned <$50,000/year, 42.4% earned $50,000-$99,999/year, 15.1% earned $100,000-$149,999/year, and 12.7% earned >$150,000/year. Respondents strongly preferred clinician oversight, which increased the probability of a visit being selected by 18.4 percentage points (average marginal component effect [AMCE] = 0.184; 95% confidence interval [CI]: 0.173-0.195; p <0.00025). AI performance had the largest influence: performance equal to a specialist raised preference by 24.8 points (AMCE = 0.248; 95% CI: 0.234-0.262; p <0.00025), and performance better than a specialist raised it by 32.5 points (AMCE = 0.325; 95% CI: 0.310-0.339; p <0.00025), while performance comparable to a general practitioner raised it by 19.1 points (AMCE = 0.191; 95% CI: 0.177-0.205). Respondents also preferred AI trained on US population data (AMCE = 0.119; 95% CI: 0.106-0.131), whereas biased datasets showed no statistically significant effect.
Governance also improved preference: FDA approval and Mayo Clinic certification each increased choice by 11.1 points (AMCE = 0.111; 95% CI: 0.101-0.121), while local hospital certification had a smaller effect (AMCE = 0.078; 95% CI: 0.068-0.088). In open-ended responses, respondents most frequently cited AI performance (25.7%) and clinician presence (22.67%) as reasons for their choices.
This study’s limitations include the assumptions that patients know AI is being used and understand its attributes. The simplified hypothetical scenarios, the focus on a single clinical condition, the online English-speaking sample, and the survey design may not fully reflect real-world clinical settings.
In conclusion, this study highlights that clinician oversight, strong governance, higher AI performance, and representative training data significantly influence patient trust in and preference for AI-assisted medical care. These findings suggest that balanced implementation of these factors is essential for responsible AI integration and improved healthcare access.
Reference: Bracic A, Spector-Bagdady K, Towle S, Zhang R, James CA, Price WN. Factors for Patient Trust and Acceptance of Medical Artificial Intelligence. JAMA Netw Open. 2026;9(3):e260815. doi:10.1001/jamanetworkopen.2026.0815




