Physician Resistance: Causes and Solutions
Article #12 in Medical ML for Ukrainian Doctors Series
DOI: 10.5281/zenodo.14822441
Abstract
The integration of artificial intelligence into clinical practice faces a critical bottleneck: physician resistance. Despite over $66 billion invested globally in healthcare AI, adoption remains stubbornly low. This article examines the multifaceted causes of physician resistance—spanning professional identity threats, liability concerns, and workflow disruption—and presents evidence-based strategies for transforming skepticism into engagement. Drawing on recent research including a landmark 2025 JMIR study of 498 physicians, we demonstrate that familiarity with AI, not demographics, determines acceptance, with experienced users showing 91% higher enthusiasm. For Ukrainian healthcare modernization and ScanLab deployment, understanding and addressing these resistance factors is essential for successful implementation.
Context: Why This Matters for Ukrainian Healthcare
Understanding physician resistance isn’t optional—it’s essential. Globally, despite over $66 billion invested in healthcare AI, adoption remains stubbornly low. For ScanLab and Ukrainian healthcare modernization, converting physician skepticism into informed engagement will determine success.
The Ukrainian healthcare system stands at a critical juncture. With ongoing infrastructure challenges, physician shortages in rural areas, and the need for rapid diagnostic capabilities, AI-assisted medicine offers transformative potential. However, this potential can only be realized if the physicians who must use these tools embrace them. Resistance at the clinical level has derailed countless AI implementations worldwide, regardless of technical excellence.
```mermaid
pie title Factors Driving Physician Resistance
    "Autonomy Threat" : 67
    "Liability Concerns" : 63
    "Deskilling Fears" : 54
    "Patient Relationship" : 52
    "Job Displacement" : 48
    "Black Box Opacity" : 47
```
The Resistance Spectrum: From Skepticism to Fear
Physician attitudes toward AI exist on a spectrum, and understanding this progression is crucial for targeted intervention. Each stage requires different approaches, and unaddressed concerns at earlier stages tend to escalate into more severe forms of resistance.
```mermaid
flowchart LR
    A["🤔 Skepticism<br/>(Mild)"] --> B["😟 Reluctance<br/>(Moderate)"]
    B --> C["😰 Anxiety<br/>(Elevated)"]
    C --> D["✋ Resistance<br/>(High)"]
    D --> E["😱 Fear<br/>(Severe)"]
    style A fill:#28a745,color:#fff
    style B fill:#ffc107,color:#000
    style C fill:#fd7e14,color:#fff
    style D fill:#dc3545,color:#fff
    style E fill:#721c24,color:#fff
```
The Root Causes: Intrinsic and Extrinsic Factors
Research has identified two broad categories of factors driving physician resistance: intrinsic factors related to professional identity and self-perception, and extrinsic factors related to patient care, systems, and external pressures. Understanding this distinction is crucial for designing effective interventions.
```mermaid
flowchart TB
    subgraph INT["Intrinsic Factors (Professional Identity)"]
        A1["Professional Autonomy<br/>67% prevalence"]
        A2["Deskilling Concerns<br/>54% prevalence"]
        A3["Job Displacement Fear<br/>48% prevalence"]
        A4["Competence Questions<br/>41% prevalence"]
    end
    subgraph EXT["Extrinsic Factors (Patient Care & Systems)"]
        B1["Liability Uncertainty<br/>63% prevalence"]
        B2["Patient Relationship<br/>52% prevalence"]
        B3["Black Box Opacity<br/>47% prevalence"]
        B4["Workflow Disruption<br/>31% prevalence"]
    end
    INT --> R["Physician<br/>Resistance"]
    EXT --> R
    style INT fill:#dc3545,color:#fff
    style EXT fill:#ffc107,color:#000
    style R fill:#6f42c1,color:#fff
```
The Liability Paradox: Damned If You Do, Damned If You Don’t
One of the most significant barriers to AI adoption is the unresolved question of medical-legal liability. Physicians find themselves in a paradoxical situation where any choice—following AI, overriding AI, or not using AI—carries potential legal risk.
```mermaid
flowchart TB
    D["Physician Decision Point"]
    D --> F["Follow AI Recommendation"]
    D --> O["Override AI"]
    D --> N["Don't Use AI"]
    F --> FR["AI Wrong → Liability for<br/>'blind algorithmic following'"]
    O --> OR["Physician Wrong → Liability for<br/>'ignoring decision support'"]
    N --> NR["Future Standard → Liability for<br/>'failing to use available tools'"]
    style D fill:#6f42c1,color:#fff
    style FR fill:#dc3545,color:#fff
    style OR fill:#dc3545,color:#fff
    style NR fill:#ffc107,color:#000
```
⚖️ The Dilemma

| Scenario | Consequence |
| --- | --- |
| Follow AI (AI wrong) | Potential liability for blind algorithmic following |
| Override AI (physician wrong) | Potential liability for ignoring decision support |
| Fail to use AI | Future liability as AI becomes the standard of care |
“IT staff reported being asked by worried physicians about what would happen if they diverged from the CDSS recommendation (and struggled to answer, as the legal framework is unclear).”
— Oxford Medical Law Review, 2023
This legal ambiguity creates a chilling effect on adoption. Even physicians who intellectually appreciate AI’s potential benefits may hesitate to use it when the liability implications remain undefined. Regulatory clarity is urgently needed, and healthcare institutions should work proactively to establish internal guidelines that protect physicians while encouraging appropriate AI use.
The Familiarity Factor: Experience Transforms Attitudes
A landmark 2025 JMIR study surveying 498 physicians revealed what may be the most actionable finding in the field: familiarity with AI is the strongest predictor of acceptance, far outweighing demographic factors like age or specialty.
- 91% higher enthusiasm (familiar vs. unfamiliar physicians)
- 59% lower skepticism (familiar vs. unfamiliar physicians)
This finding has profound implications for implementation strategy. Rather than targeting “younger, tech-savvy” physicians—a common but misguided approach—successful AI deployment should focus on creating positive first experiences with AI tools across all demographic groups. The data suggests that even resistant senior physicians can become advocates once they gain hands-on familiarity with well-designed systems.
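To make explicit what a relative figure like "91% higher enthusiasm" means, the sketch below computes it from two group proportions. The proportions are invented for illustration; they are not the JMIR study's response data.

```python
# Hypothetical illustration: how a "+91% higher enthusiasm" figure can be
# derived as a relative difference between two survey groups. The
# proportions below are made up; they are NOT the JMIR study's data.

def relative_difference(group_a: float, group_b: float) -> float:
    """Relative difference of group_a over group_b, e.g. 0.91 == +91%."""
    return (group_a - group_b) / group_b

# Hypothetical share of physicians rating themselves enthusiastic about AI
enthusiasm_familiar = 0.63    # familiar-with-AI group
enthusiasm_unfamiliar = 0.33  # unfamiliar group

print(f"{relative_difference(enthusiasm_familiar, enthusiasm_unfamiliar):+.0%}")
```

The same calculation applies to the skepticism figure, with the group order reversed so the result comes out negative.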
What Works: Evidence-Based Solutions
1. Early Physician Engagement
The most successful AI implementations involve physicians from the earliest stages, not as passive recipients but as active participants in the selection, design, and evaluation process.
```mermaid
gantt
    title Physician Engagement Timeline
    dateFormat YYYY-MM-DD
    section Planning
    Needs Assessment      :a1, 2026-01-01, 30d
    Vendor Evaluation     :a2, after a1, 45d
    section Implementation
    Pilot Design          :b1, after a2, 30d
    Pilot Testing         :b2, after b1, 60d
    section Deployment
    Staged Rollout        :c1, after b2, 90d
    Monitoring & Feedback :c2, after c1, 180d
```
2. Prioritize Explainable AI
✅ Explainable AI
- Physician can see why AI flagged finding
- Can challenge basis for decisions
- Higher liability comfort
- Enables learning, not just following
- Facilitates patient communication
❌ Black Box AI
- Cannot review reasoning
- Blind acceptance or rejection
- Lower trust and higher anxiety
- Harder to explain to patients
- Creates liability concerns
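A minimal sketch of the kind of transparency meant by "physician can see why AI flagged a finding": a linear risk score whose per-feature contributions are listed next to the total. The feature names, weights, and threshold below are invented for illustration, not a validated clinical model.

```python
# Sketch of "explainable" output for a linear risk score: each feature's
# contribution (weight * value) is shown alongside the total, so a
# clinician can see which inputs drove the flag. All names, weights, and
# the threshold are hypothetical.

FEATURES = {
    "lesion_diameter_mm":  (0.08, 14.0),   # (weight, patient value)
    "irregular_border":    (0.90, 1.0),
    "patient_age_decades": (0.05, 6.2),
}
THRESHOLD = 2.0  # hypothetical decision threshold

def explain(features, threshold):
    """Print per-feature contributions; return (total score, flagged?)."""
    total = 0.0
    for name, (weight, value) in features.items():
        contribution = weight * value
        total += contribution
        print(f"{name:>20}: {contribution:+.2f}")
    flagged = total >= threshold
    print(f"{'total':>20}: {total:+.2f} -> {'FLAG' if flagged else 'no flag'}")
    return total, flagged

explain(FEATURES, THRESHOLD)
```

The design point is that the explanation is a byproduct of the model's own arithmetic, so a physician can challenge any single contribution rather than accepting or rejecting an opaque score wholesale.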
3. Address the Psychological Progression
Different stages of resistance require tailored intervention strategies. A one-size-fits-all approach will fail because the underlying concerns vary significantly.
The Chief Physician Effect: Leadership Matters
📊 Unexpected Finding
Chief physicians showed significantly lower skepticism than residents (p=.01)
Strategic Implication: Engage chief physicians as AI champions. Their endorsement carries weight with junior staff and can accelerate department-wide adoption.
This counterintuitive finding—that senior physicians may be more receptive to AI than junior staff—challenges common assumptions about technology adoption. Chief physicians often have a broader perspective on clinical challenges, greater confidence in their own judgment (making AI feel less threatening), and more experience with previous technology transitions. Leveraging their influence is crucial for successful implementation.
Implementation Framework for Ukrainian Healthcare
Based on the evidence reviewed, we propose a structured framework for addressing physician resistance in Ukrainian healthcare settings:
```mermaid
flowchart TB
    subgraph P1["Phase 1: Foundation"]
        A1["Identify Clinical Champions"]
        A2["Establish Liability Guidelines"]
        A3["Select Explainable AI Tools"]
    end
    subgraph P2["Phase 2: Pilot"]
        B1["Small-Scale Testing"]
        B2["Collect Physician Feedback"]
        B3["Iterate on Workflow Integration"]
    end
    subgraph P3["Phase 3: Expansion"]
        C1["Train Physician Trainers"]
        C2["Staged Department Rollout"]
        C3["Continuous Monitoring"]
    end
    P1 --> P2 --> P3
    style P1 fill:#28a745,color:#fff
    style P2 fill:#ffc107,color:#000
    style P3 fill:#17a2b8,color:#fff
```
Key Success Factors:
- Start with willing departments—radiology and pathology often have higher AI familiarity
- Provide protected time for learning—rushed training breeds resistance
- Celebrate early wins—publicize cases where AI assisted diagnosis
- Create feedback channels—physicians must feel heard
- Address liability proactively—institutional guidelines reduce anxiety
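One concrete way to combine the "continuous monitoring" and "feedback channel" factors is to track how often physicians override AI recommendations per period: a rising override rate can signal model drift or unresolved distrust and prompts a conversation rather than a mandate. The record format and data below are invented for illustration.

```python
# Sketch of an override-rate monitor for AI-assisted reads. Each record
# notes whether the physician overrode the AI recommendation; the
# per-period override rate serves as a simple trust/quality signal.
# The log entries below are hypothetical.

from collections import defaultdict

def override_rates(records):
    """records: iterable of (period, overrode: bool) -> {period: rate}."""
    totals = defaultdict(int)
    overrides = defaultdict(int)
    for period, overrode in records:
        totals[period] += 1
        overrides[period] += int(overrode)
    return {p: overrides[p] / totals[p] for p in totals}

log = [
    ("2026-03", False), ("2026-03", True), ("2026-03", False), ("2026-03", False),
    ("2026-04", True), ("2026-04", True), ("2026-04", False), ("2026-04", False),
]
for period, rate in sorted(override_rates(log).items()):
    print(f"{period}: {rate:.0%} overridden")
```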
Conclusions
- ✅ Experience > Demographics: familiarity with AI predicts acceptance; age and specialty do not
- 🎓 Early Engagement: involve physicians from selection through monitoring
- ⚖️ Clarify Liability: undefined liability creates a chilling effect on adoption
- 🔍 Explainability Matters: prioritize tools where physicians can see the reasoning
Questions Answered
✅ What drives physician resistance? Professional autonomy threat (67%), liability uncertainty (63%), deskilling concerns (54%), and patient relationship impacts (52%).
✅ How does familiarity influence attitudes? Physicians familiar with AI show 91% higher enthusiasm and 59% lower skepticism. Age and specialty have no significant effect.
✅ What approaches work? Early engagement, explainable AI tools, hands-on training, addressing the psychological progression, and engaging senior physicians as champions.
Next in Series: Article #13 – The 2007-2012 Golden Age (Ancient IT)
Series: Medical ML for Ukrainian Doctors | Stabilarity Hub Research Initiative
Author: Oleh Ivchenko | ONPU Researcher | Stabilarity Hub
