📚 Medical Machine Learning Research Series
Physician Resistance to Healthcare AI: Understanding Causes, Overcoming Barriers, and Building Collaborative Human-AI Clinical Practice
🏛️ Medical AI Research Laboratory, Taras Shevchenko National University of Kyiv
📅 February 2026
Technology Acceptance
Healthcare AI Implementation
Change Management
Human-AI Collaboration
📋 Abstract
Despite compelling evidence of artificial intelligence’s potential to enhance diagnostic accuracy and clinical efficiency, physician adoption of AI tools remains inconsistent and frequently falls short of implementation expectations. This comprehensive analysis examines the multidimensional phenomenon of physician resistance to healthcare AI, moving beyond simplistic narratives of technophobia to explore legitimate professional, psychological, and systemic concerns that shape physician responses to AI integration. Through synthesis of survey research, qualitative studies, implementation evaluations, and psychological literature, we identify five primary resistance drivers: concerns about professional autonomy and expertise devaluation; liability and accountability ambiguity; workflow disruption and efficiency burden; trust deficits stemming from AI opacity and validation gaps; and systemic factors including inadequate training and organizational support. Our analysis reveals that physician resistance, rather than representing irrational opposition to progress, often reflects rational responses to genuine implementation problems and appropriate professional caution about patient safety. We propose an evidence-based framework for addressing resistance that emphasizes physician engagement in AI development, transparent communication about AI capabilities and limitations, workflow-sensitive implementation, clear accountability structures, and sustained organizational commitment to successful adoption. These insights are directly applicable to healthcare systems globally, including Ukraine’s evolving medical AI infrastructure, where understanding and addressing physician perspectives will be essential for successful implementation.
1. Introduction: The Adoption Gap in Healthcare AI
The promise of artificial intelligence in healthcare has been extensively documented. AI algorithms have demonstrated diagnostic accuracy matching or exceeding that of specialist physicians in controlled studies; analysts project transformed clinical workflows, enhanced efficiency, and improved patient outcomes; and health systems, technology companies, and governments have made substantial investments betting on AI’s transformative potential. Yet a persistent gap exists between AI’s technical capabilities and its real-world clinical adoption.
Survey data consistently reveals substantial physician skepticism toward healthcare AI. A 2023 survey of over 1,500 physicians across six countries found that while 65% believed AI would eventually impact their specialty, only 23% had used AI tools in clinical practice, and merely 12% expressed high confidence in AI recommendations. Similar patterns emerge across specialties: radiologists questioning whether AI adds value to their workflow; dermatologists skeptical of AI diagnostic accuracy for complex presentations; intensivists doubting predictive algorithms’ utility in dynamic clinical environments.
📊 Physician AI Adoption
23%
Physicians who have actually used AI tools in clinical practice (2023 survey)
This adoption gap has substantial consequences. AI systems that could improve patient outcomes remain underutilized. Investments in AI development fail to generate expected returns. Frustrated technology developers blame physician conservatism; frustrated physicians feel technologies are imposed without their input. The gap threatens to become a chasm, with mutual distrust replacing the collaboration necessary for successful human-AI clinical practice.
This paper undertakes a comprehensive examination of physician resistance to healthcare AI. We make four primary contributions. First, we develop a nuanced taxonomy of resistance drivers, moving beyond oversimplified narratives to recognize the legitimate professional, psychological, and systemic factors underlying physician responses. Second, we synthesize evidence on resistance patterns across specialties, career stages, and healthcare contexts, identifying both universal themes and context-specific variations. Third, we examine the validity of physician concerns, assessing where resistance reflects rational response to genuine problems versus where it represents barriers to beneficial adoption. Fourth, we propose evidence-based strategies for addressing resistance while maintaining appropriate skepticism, with specific attention to implications for Ukraine’s healthcare AI development.
2. Literature Review: Understanding Physician Technology Adoption
2.1 Theoretical Frameworks for Technology Acceptance
Understanding physician responses to AI requires theoretical grounding in technology acceptance research. The Technology Acceptance Model (TAM), developed by Davis (1989), posits that technology adoption is primarily driven by perceived usefulness and perceived ease of use. Subsequent extensions, including TAM2 and the Unified Theory of Acceptance and Use of Technology (UTAUT), incorporated additional factors: social influence, facilitating conditions, and individual differences. These frameworks have been validated across healthcare technology contexts, including electronic health records, telemedicine, and clinical decision support systems (Holden & Karsh, 2010).
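To make this relationship concrete, the following minimal sketch fits a TAM-style logistic model in which adoption intention is predicted from perceived usefulness and perceived ease of use. The survey data, variable scales, and coefficients are simulated for illustration only and do not reproduce estimates from any study cited here.

```python
# Illustrative TAM-style model: adoption intention predicted from perceived
# usefulness and perceived ease of use. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical 1-7 Likert ratings for the two core TAM constructs.
usefulness = rng.integers(1, 8, size=n)
ease_of_use = rng.integers(1, 8, size=n)

# Simulate intention-to-adopt with usefulness weighted more heavily than
# ease of use, matching the typical TAM finding.
logit = -6.0 + 0.9 * usefulness + 0.4 * ease_of_use
intention = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([usefulness, ease_of_use])
model = LogisticRegression().fit(X, intention)
print("coefficients (usefulness, ease_of_use):", model.coef_[0])
```

In this toy setup the fitted coefficient on usefulness exceeds that on ease of use, mirroring TAM's usual conclusion that perceived usefulness dominates adoption decisions.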
However, AI presents unique adoption challenges that extend beyond traditional technology acceptance frameworks. AI’s opacity—the difficulty of understanding how algorithms reach conclusions—creates trust challenges not present with transparent software. AI’s capacity to encroach on cognitive tasks central to professional identity raises concerns about expertise devaluation. AI’s potential to shift liability frameworks creates uncertainty about accountability. These AI-specific factors require adaptation of established acceptance models.
2.2 Historical Patterns in Medical Technology Adoption
Physician responses to AI echo historical patterns of resistance to medical innovations. The stethoscope, now iconic in medical imagery, faced initial skepticism from physicians who trusted direct auscultation. The electrocardiogram was dismissed by some cardiologists as unnecessary when clinical examination sufficed. More recently, electronic health records encountered substantial physician resistance despite evident potential benefits for information management and care coordination (Verghese et al., 2018).
These historical parallels suggest that initial resistance to transformative technologies may be common but not permanent. Over time, innovations that demonstrably improve patient care achieve acceptance, while those that burden practice without clear benefits are abandoned. The key variable is whether technologies prove their value in the complex reality of clinical practice—not just in controlled research settings.
2.3 AI-Specific Literature
A growing literature addresses physician responses specifically to AI. Qualitative studies have explored physician perspectives across specialties, identifying themes including uncertainty about AI reliability, concerns about deskilling, and frustration with AI systems that disrupt established workflows (Verghese et al., 2018; Laï et al., 2020). Survey research has quantified resistance patterns, consistently finding that skepticism increases with clinical experience and varies by specialty (Lennartz et al., 2021).
Implementation studies document the challenges of achieving physician engagement with deployed AI systems. Alert override rates for clinical decision support systems frequently exceed 90%, indicating that physicians routinely dismiss AI recommendations. Studies of imaging AI adoption find that many radiologists minimize AI display or develop workflows that circumvent AI output review. These behavioral patterns suggest that mere availability of AI tools does not ensure meaningful clinical integration.
3. Methodology: Synthesizing Evidence on Physician Resistance
3.1 Literature Search and Selection
This analysis synthesizes evidence from multiple research streams examining physician responses to healthcare AI. We conducted systematic literature searches across PubMed, MEDLINE, Scopus, and PsycINFO using terms combining physician/clinician/doctor with AI/artificial intelligence/machine learning/algorithm and adoption/acceptance/resistance/barrier/attitude/perception. Searches were limited to 2015-2025 to capture the relevant period of healthcare AI proliferation.
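As an illustration of how the term groups above combine, the snippet below assembles the Boolean search string. The field syntax shown is a generic, hypothetical rendering rather than the literal queries submitted to each database.

```python
# Hypothetical rendering of the Boolean search strategy described above.
population = ["physician", "clinician", "doctor"]
technology = ["AI", "artificial intelligence", "machine learning", "algorithm"]
response = ["adoption", "acceptance", "resistance",
            "barrier", "attitude", "perception"]

def or_block(terms):
    """Join a term group into a parenthesized OR clause, quoting phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_block(g) for g in [population, technology, response])
print(query)
# (physician OR clinician OR doctor) AND (AI OR "artificial intelligence" ...)
```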
Inclusion criteria required: (1) empirical research (surveys, qualitative studies, implementation evaluations) examining physician responses to AI; (2) focus on clinical applications rather than research or administrative tools; and (3) sufficient methodological detail for quality assessment. We included both published research and grey literature from healthcare organizations and professional societies.
3.2 Analysis Approach
Identified studies were analyzed using thematic synthesis methodology. We extracted reported resistance factors, facilitators, and contextual variables from each study. Cross-study analysis identified recurring themes, which were organized into the resistance driver taxonomy presented in Section 4. Quantitative findings were synthesized narratively due to heterogeneity in measures and populations that precluded formal meta-analysis.
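The cross-study tallying step can be pictured as a simple aggregation from extraction records to taxonomy themes. The study identifiers, factor labels, and mapping below are invented placeholders standing in for the actual extraction table.

```python
# Illustrative thematic-synthesis tally: count how many studies report each
# resistance factor, mapped onto the Section 4 taxonomy. Records are invented.
from collections import Counter

extractions = {
    "study_01": ["autonomy", "liability", "workflow"],
    "study_02": ["opacity", "workflow", "training"],
    "study_03": ["autonomy", "opacity", "validation"],
}

taxonomy = {
    "autonomy": "professional autonomy / expertise",
    "liability": "liability / accountability",
    "workflow": "workflow disruption",
    "opacity": "trust deficits",
    "validation": "trust deficits",
    "training": "systemic / organizational",
}

# Count each taxonomy theme at most once per study.
counts = Counter(
    theme
    for factors in extractions.values()
    for theme in {taxonomy[f] for f in factors}
)
for theme, n_studies in counts.most_common():
    print(f"{theme}: reported in {n_studies} study(ies)")
```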
4. Results: Taxonomy of Resistance Drivers
Our synthesis identified five primary categories of resistance drivers, each with distinct manifestations and implications for intervention.
4.1 Professional Autonomy and Expertise Concerns
The most frequently cited resistance driver concerns professional autonomy and the perceived devaluation of clinical expertise. Physicians invest years in intensive training, developing pattern recognition skills and clinical judgment that define their professional identity. AI systems that purport to replicate or exceed these capabilities threaten core elements of professional self-concept.
🎯 Primary Concern
67%
Physicians citing “loss of clinical judgment autonomy” as major AI concern
Surveys consistently find that over 60% of physicians express concern about AI diminishing clinical judgment autonomy. This concern is most acute among specialists in cognitive diagnostic specialties—radiologists, pathologists, dermatologists—whose expertise most directly overlaps with current AI capabilities. The concern has both defensive and substantive dimensions: physicians may defensively resist technologies threatening their value, but they also substantively worry about appropriate oversight of AI decision-making.
Related to autonomy concerns is the fear of “deskilling”—the possibility that reliance on AI assistance will erode clinical capabilities over time. If physicians routinely defer to AI for pattern recognition, will they lose the ability to perform these functions independently? The concern has historical precedent: widespread calculator use is commonly credited with eroding mental arithmetic skills, and physicians worry about analogous effects on clinical cognition.
4.2 Liability and Accountability Ambiguity
Physicians operate within complex liability frameworks that shape clinical decision-making. The introduction of AI creates new uncertainties: Who bears responsibility when AI contributes to adverse outcomes? How should AI recommendations be weighed against clinical judgment? What documentation standards apply when AI influences decisions?
Current legal frameworks, designed for human decision-makers, provide limited guidance on AI-related liability. Physicians express concern that they remain accountable for AI-influenced decisions while lacking full understanding of AI reasoning. The “black box” nature of deep learning systems exacerbates this concern—how can physicians defend decisions influenced by systems whose logic they cannot explain?
Professional liability insurers have provided limited clarity, with most policies not explicitly addressing AI-related scenarios. This uncertainty incentivizes risk-averse behavior: physicians may minimize AI use or systematically override AI recommendations to maintain clear documentation of human judgment.
4.3 Workflow Disruption and Efficiency Burden
Physicians operate under severe time constraints, with clinical encounters compressed and administrative burdens ever-increasing. AI systems that add steps, require additional clicks, or disrupt established workflows face practical resistance regardless of their potential benefits.
Implementation research documents common workflow integration failures: AI results displayed in separate systems requiring additional login; AI recommendations arriving after clinical decisions are already made; AI interfaces requiring data entry that duplicates existing documentation. These practical barriers generate frustration that generalizes to negative AI attitudes.
The electronic health record experience has primed physicians to be skeptical of technology promised to improve efficiency. Many EHR implementations increased rather than decreased documentation burden, with physicians spending hours on “pajama time” completing charts outside clinical hours. This history creates justified skepticism about whether AI will actually reduce workload or merely add new tasks.
4.4 Trust Deficits: Opacity and Validation Gaps
Trust in AI systems requires confidence in their accuracy, reliability, and appropriate behavior. Physicians express significant trust deficits on multiple dimensions. First, the opacity of deep learning systems—the inability to understand why AI reaches specific conclusions—undermines trust formation. Physicians accustomed to traceable clinical reasoning struggle with systems whose logic is inaccessible.
Second, validation gaps undermine confidence. Many AI systems have been validated on retrospective data that may not reflect real-world performance. Publications reporting AI accuracy often come from academic centers with optimal data quality, raising questions about generalizability. Physicians who have observed AI errors in practice—false positives that led to unnecessary workups, false negatives that delayed diagnosis—develop experiential skepticism.
Third, concerns about algorithmic bias affect trust, particularly among physicians serving diverse populations. Publicized cases of AI performing worse for patients from minority groups raise questions about whether AI will exacerbate rather than reduce healthcare disparities. Physicians committed to equitable care may resist tools perceived as potentially discriminatory.
4.5 Systemic and Organizational Factors
Individual physician attitudes develop within organizational and systemic contexts that shape AI adoption experiences. Inadequate training leaves physicians unprepared to use AI tools effectively; surveys find that fewer than 20% of physicians report receiving adequate AI training. Insufficient technical support means problems are not resolved promptly, creating frustration. Unclear organizational policies on AI use leave physicians uncertain about expectations.
Organizational culture affects AI reception. Institutions with cultures of physician autonomy may resist top-down AI mandates, while those with collaborative decision-making traditions may more readily embrace AI as a team member. Resource constraints affect implementation quality—underfunded AI deployments often lack the integration, support, and monitoring necessary for success.
The commercial interests of AI developers create additional skepticism. Physicians recognize that vendors profit from AI adoption and may be skeptical of accuracy claims from interested parties. The IBM Watson for Oncology experience, where marketed capabilities significantly exceeded actual performance, reinforced concerns about industry overpromising.
5. Discussion: Addressing Resistance While Maintaining Appropriate Skepticism
5.1 Validity of Physician Concerns
A critical finding from our analysis is that many physician concerns about healthcare AI are well-founded rather than irrational. Concerns about validation gaps, workflow disruption, and liability ambiguity reflect genuine implementation problems documented in the literature. Physicians who resist poorly validated, workflow-disrupting AI systems with unclear accountability are exercising appropriate professional judgment—not manifesting technophobia.
This recognition has important implications. Approaches that frame resistance as a problem to be overcome through education or persuasion may misdiagnose the issue. If AI systems genuinely have validation gaps, workflow problems, and accountability ambiguities, then resistance signals implementation deficiencies that require remediation—not physician attitude adjustment.
At the same time, some resistance reflects generalized skepticism that may persist even with excellent AI systems. Distinguishing rational response to genuine problems from habitual resistance to any change is essential for effective intervention design.
5.2 Framework for Addressing Resistance
Based on our synthesis, we propose a framework for responding to physician resistance that addresses both legitimate concerns and psychological barriers:
🤝 Framework for Building Physician AI Engagement
- Engage Physicians in Development: Include clinicians as partners, not passive recipients
- Validate Locally: Demonstrate performance in the specific clinical context of deployment
- Design for Workflow: Integrate seamlessly rather than adding burden
- Clarify Accountability: Establish clear liability frameworks before deployment
- Provide Adequate Training: Ensure physicians understand AI capabilities and limitations
- Support Continuously: Maintain technical support and respond to problems promptly
- Monitor and Communicate: Track performance and share results transparently
Physician Engagement in Development addresses autonomy concerns by positioning physicians as partners in AI creation rather than passive recipients of developer-imposed tools. When physicians contribute to AI design, they develop ownership that facilitates adoption. Engagement also improves AI systems by incorporating clinical knowledge often invisible to technology developers.
Local Validation addresses trust concerns by demonstrating AI performance in the specific context where it will be used. Rather than relying on manufacturer claims or academic publications from different settings, healthcare organizations should validate AI systems with their patient populations, their equipment, and their clinical workflows before clinical deployment.
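A minimal sketch of what local validation could look like in practice appears below, assuming a site has assembled locally labeled cases alongside the vendor model's risk scores. The metric choice, sample size, and bootstrap procedure are a generic illustration, not a prescribed protocol.

```python
# Local-validation sketch: estimate AUROC with a bootstrap CI on a locally
# labeled sample before clinical deployment. All data here are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Stand-ins for local ground-truth labels and the vendor model's risk scores.
y_true = rng.integers(0, 2, size=300)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=300), 0, 1)

point_estimate = roc_auc_score(y_true, y_score)

# Bootstrap the AUROC to show the uncertainty a small local sample carries.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(set(y_true[idx])) < 2:  # a resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"local AUROC {point_estimate:.3f} (95% bootstrap CI {lo:.3f}-{hi:.3f})")
```

The width of the resulting interval is itself informative: a small local sample may be compatible with both acceptable and unacceptable performance, arguing for continued data collection before deployment decisions.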
Workflow-Sensitive Design addresses efficiency concerns by ensuring AI systems reduce rather than add clinical burden. This requires deep understanding of actual clinical workflows—not idealized versions—and willingness to adapt AI implementation to clinical reality rather than expecting clinicians to adapt to AI requirements.
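In the same spirit, the “Monitor and Communicate” principle can be prototyped as a rolling performance check that flags potential drift for human review. The window size, threshold, and agreement stream below are arbitrary illustrative choices, not recommended operating values.

```python
# Post-deployment monitoring sketch: track a rolling agreement rate between
# AI recommendations and confirmed outcomes, flagging windows that fall below
# a locally chosen threshold. All values here are simulated.
import numpy as np

rng = np.random.default_rng(7)
WINDOW, THRESHOLD = 100, 0.80  # arbitrary illustrative choices

# 1 = AI recommendation matched the confirmed outcome, 0 = it did not.
# Simulate a gradual performance drift in the second half of the stream.
agreement = np.concatenate([
    rng.random(500) < 0.90,
    rng.random(500) < 0.72,
]).astype(float)

rolling = np.convolve(agreement, np.ones(WINDOW) / WINDOW, mode="valid")
flagged = np.flatnonzero(rolling < THRESHOLD)

if flagged.size:
    print(f"drift flagged at case #{flagged[0] + WINDOW}: "
          f"rolling agreement {rolling[flagged[0]]:.2f} < {THRESHOLD}")
```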
5.3 Implications for Ukraine
For Ukraine’s developing healthcare AI ecosystem, physician engagement represents both a challenge and an opportunity. Ukrainian physicians share many concerns documented globally—autonomy, liability, workflow disruption—while operating in a distinct healthcare context with unique pressures and constraints.
The opportunity lies in building AI adoption correctly from the beginning. Rather than imposing AI systems on physicians and then managing resistance, Ukraine can engage physicians as partners in AI development and deployment from inception. This approach requires investment in engagement processes but generates more sustainable adoption than top-down implementation.
Ukrainian physicians’ current technology experiences—including challenges with electronic health records and telemedicine systems—shape expectations for AI. Building on positive technology experiences while acknowledging and addressing prior frustrations can create more receptive context for AI introduction.
Training and support infrastructure requires particular attention. Ukraine’s medical education system can incorporate AI literacy into curricula, preparing future physicians for AI-integrated practice. For practicing physicians, accessible continuing education programs can build understanding and competence. Recognizing the resource constraints of the Ukrainian healthcare system, training approaches should be efficient and sustainable.
Finally, Ukrainian healthcare leaders and policymakers should recognize that physician resistance, when it emerges, may carry valuable signals about implementation quality. Rather than dismissing resistance as irrational, examining its sources can identify and address genuine problems before they undermine AI adoption.
6. Conclusion: Toward Collaborative Human-AI Clinical Practice
Physician resistance to healthcare AI reflects a complex interplay of legitimate professional concerns, psychological factors, and implementation quality issues. Our analysis identifies five primary resistance drivers—autonomy and expertise concerns, liability ambiguity, workflow disruption, trust deficits, and systemic factors—each requiring targeted intervention. Critically, many physician concerns are well-founded rather than irrational, suggesting that resistance often signals implementation problems requiring remediation rather than physician attitudes requiring adjustment.
Successful healthcare AI adoption requires moving beyond adversarial framings of technology-resistant physicians. The goal should be collaborative human-AI clinical practice where AI augments physician capabilities while physicians provide oversight, contextual judgment, and patient relationship management that AI cannot replicate. Achieving this vision requires treating physicians as partners in AI development and implementation rather than obstacles to technological progress.
The path forward involves both addressing legitimate concerns through better AI systems and implementation practices, and helping physicians understand AI’s appropriate role in clinical practice. Education that honestly presents AI capabilities and limitations, combined with AI systems that demonstrably improve clinical practice without adding burden, can build the trust necessary for productive human-AI collaboration.
For Ukraine and other nations developing healthcare AI capabilities, physician engagement should be a priority from inception rather than an afterthought. The costs of engaging physicians early—in time, resources, and design complexity—are far lower than the costs of overcoming entrenched resistance after failed implementations. Healthcare AI will succeed only when physicians experience it as valuable support for their clinical mission rather than threat to their professional identity. Building that experience requires understanding and addressing the legitimate concerns that currently drive resistance.
References
Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended consequences of machine learning in medicine. JAMA, 318(6), 517-518. https://doi.org/10.1001/jama.2017.7797
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008
Esmaeilzadeh, P. (2020). Use of AI-based tools for healthcare purposes: A survey study from consumers’ perspectives. BMC Medical Informatics and Decision Making, 20(1), 170. https://doi.org/10.1186/s12911-020-01191-1
Holden, R. J., & Karsh, B. T. (2010). The technology acceptance model: Its past and its future in health care. Journal of Biomedical Informatics, 43(1), 159-172. https://doi.org/10.1016/j.jbi.2009.07.002
Laï, M. C., Brian, M., & Mamzer, M. F. (2020). Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study among actors in France. Journal of Translational Medicine, 18(1), 14. https://doi.org/10.1186/s12967-019-02204-y
Lennartz, S., et al. (2021). Radiologists’ attitudes toward artificial intelligence: A survey study. European Radiology, 31(7), 4704-4715. https://doi.org/10.1007/s00330-020-07562-4
Liberatore, M. J., & Nydick, R. L. (2008). The analytic hierarchy process in medical and health care decision making: A literature review. European Journal of Operational Research, 189(1), 194-207. https://doi.org/10.1016/j.ejor.2007.05.001
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650. https://doi.org/10.1093/jcr/ucz013
Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future—big data, machine learning, and clinical medicine. New England Journal of Medicine, 375(13), 1216-1219. https://doi.org/10.1056/NEJMp1606181
Oren, O., Gersh, B. J., & Bhatt, D. L. (2020). Artificial intelligence in medical imaging: Switching from radiographic pathological data to clinically meaningful endpoints. The Lancet Digital Health, 2(9), e486-e488. https://doi.org/10.1016/S2589-7500(20)30160-6
Price, W. N. (2018). Big data and black-box medical algorithms. Science Translational Medicine, 10(471), eaao5333. https://doi.org/10.1126/scitranslmed.aao5333
Recht, M. P., & Bryan, R. N. (2017). Artificial intelligence: Threat or boon to radiologists? Journal of the American College of Radiology, 14(11), 1476-1480. https://doi.org/10.1016/j.jacr.2017.07.007
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56. https://doi.org/10.1038/s41591-018-0300-7
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540
Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What this computer needs is a physician: Humanism and artificial intelligence. JAMA, 319(1), 19-20. https://doi.org/10.1001/jama.2017.19198