
Medical ML: Training Programs for Physicians — Building AI Competency in Medical Imaging


Article #23 | Medical ML Research Series

Author: Oleh Ivchenko, PhD Candidate

Affiliations: Odesa National Polytechnic University (ONPU) | Stabilarity Hub

Research Focus: Machine Learning Applications in Healthcare Decision Systems

Publication Date: February 10, 2026

Series: ML for Medical Diagnosis in Ukrainian Healthcare

Abstract

The successful integration of artificial intelligence into clinical radiology practice hinges upon physicians’ comprehensive understanding of AI principles, capabilities, and limitations. This research article examines the current landscape of physician training programs for AI in medical imaging, analyzing curriculum frameworks, competency standards, and pedagogical approaches across international contexts. We synthesize evidence from 47 peer-reviewed studies and programmatic evaluations to identify best practices in AI education for radiologists and referring physicians. Our analysis reveals significant gaps in current medical education: only 24% of healthcare professionals report receiving formal AI training from their employers, while 98.7% of radiology trainees believe AI education should be mandatory. We present a comprehensive five-step framework for integrating AI curricula into residency programs, encompassing foundational knowledge, hands-on implementation, clinical workflow integration, ethical considerations, and continuous professional development. The framework addresses three learner populations—medical students (undergraduate medical education), residents (postgraduate medical education), and practicing physicians (continuing medical education)—with tailored competency milestones for each stage. For Ukrainian healthcare systems, we propose specific adaptations accounting for infrastructure constraints, language localization requirements, and integration with national healthcare reform initiatives. Key findings indicate that structured AI training programs increase physician confidence scores by 47% and improve appropriate AI tool utilization by 62%. We conclude that standardized, competency-based AI training programs are essential prerequisites for responsible AI deployment in medical imaging, with particular urgency for emerging healthcare systems seeking to leverage AI for addressing physician shortages and improving diagnostic access.

Keywords: artificial intelligence education, physician training, medical imaging AI, radiology curriculum, competency framework, continuing medical education, AI literacy, healthcare workforce development, Ukrainian healthcare

1. Introduction

The exponential growth of artificial intelligence applications in medical imaging represents one of the most significant transformations in healthcare delivery since the advent of digital radiology. With more than 1,000 FDA-cleared AI algorithms now available for clinical use—approximately 80% of which are designed for radiologic tasks—the imperative to prepare physicians for AI-integrated practice has never been more urgent. Yet a profound paradox characterizes the current state of medical education: while AI tools proliferate in clinical settings, structured training programs that equip physicians to evaluate, implement, and collaborate with these technologies remain conspicuously absent from most curricula.

1,000+
FDA-cleared AI algorithms for medical imaging, with ~200 added annually

This educational deficit creates substantial risks for patient safety, clinical efficiency, and healthcare system sustainability. Physicians lacking foundational AI knowledge may inappropriately trust algorithmic outputs, fail to recognize AI system limitations, or reject beneficial technologies due to misunderstanding. The Stanford Medicine Health Trends Report revealed that 44% of practicing physicians and 23% of medical students and residents felt their education had not prepared them adequately for new technologies in healthcare—a concerning finding given AI’s accelerating clinical penetration.

The training challenge encompasses three distinct but interconnected populations: medical students who must develop AI literacy as part of foundational medical education; radiology residents who require specialized competencies for AI-integrated diagnostic practice; and practicing physicians who need continuing education to maintain currency with rapidly evolving technologies. Each population presents unique pedagogical challenges and requires tailored curricular approaches.

```mermaid
graph TD
  A[Medical Education Continuum] --> B[Undergraduate Medical Education]
  A --> C[Graduate Medical Education]
  A --> D[Continuing Medical Education]
  B --> E[AI Literacy Foundation]
  C --> F[Specialized AI Competencies]
  D --> G[Practice-Based AI Updates]
```

The urgency of addressing this educational gap is amplified by workforce dynamics across healthcare systems globally. Radiologist shortages—documented in the United Kingdom, European Union, and particularly acute in emerging healthcare systems like Ukraine—create pressure to deploy AI as a force multiplier for limited human expertise. However, AI tools deployed without properly trained physician oversight risk undermining rather than enhancing diagnostic quality. The symbiotic relationship between AI capability and physician competency demands parallel development of both technical systems and human expertise.

International professional organizations have begun responding to this imperative. The Radiological Society of North America (RSNA) launched the Imaging AI Certificate Program, offering foundational and advanced curricula for AI literacy. The American Association of Physicists in Medicine (AAPM), American College of Radiology (ACR), RSNA, and Society for Imaging Informatics in Medicine (SIIM) collaboratively published a multi-society syllabus defining recommended competencies for medical imaging professionals. The European Society of Radiology (ESR) and UK Royal College of Radiologists have incorporated AI expectations into training curricula, though assessment mechanisms remain underdeveloped.

This article synthesizes current evidence on physician AI training programs, presenting a comprehensive analysis of curriculum frameworks, pedagogical approaches, competency standards, and outcome assessments. We examine international experiences to extract best practices applicable across healthcare contexts, with particular attention to implications for Ukrainian healthcare systems navigating the intersection of healthcare reform, resource constraints, and technological opportunity. Our goal is to provide healthcare educators, institutional leaders, and policymakers with an evidence-based foundation for developing effective AI training programs that prepare physicians to practice safely and effectively in an AI-augmented clinical environment.

2. Literature Review

2.1 The Evolution of AI in Medical Education

The integration of AI into medical education has evolved through distinct phases, beginning with early computer-assisted instruction in the 1970s and accelerating dramatically following the deep learning revolution of the 2010s. A scoping review by Tolentino et al. (2024) examining AI curriculum frameworks across medical education identified 30 current or previously offered educational programs, yet found only two papers describing formal curriculum frameworks—revealing a significant gap between programmatic offerings and structured pedagogical foundations.

24%
of healthcare professionals report receiving formal AI training from employers (2024)

The literature reveals a fundamental tension in AI medical education between technical depth and clinical relevance. Early approaches focused on teaching physicians the mathematical foundations of machine learning—gradient descent, backpropagation, loss functions—mirroring computer science curricula. However, evidence suggests such technical depth may be neither necessary nor efficient for clinical practitioners. Van Kooten et al. (2024) demonstrated that a clinically-focused “fast-track” AI curriculum significantly increased radiology residents’ perception of AI knowledge and skills without requiring extensive mathematical prerequisites.

2.2 Competency Frameworks and Learning Objectives

Multiple competency frameworks have emerged to define what physicians should know about AI. The multi-society syllabus from AAPM, ACR, RSNA, and SIIM published in 2025 represents the most comprehensive consensus document, organizing competencies across six domains:

| Competency Domain | Core Learning Objectives | Target Proficiency Level |
|---|---|---|
| AI Fundamentals | Machine learning basics, neural network architectures, training concepts | Conceptual understanding |
| Data Science for Imaging | Dataset curation, image preprocessing, annotation standards, bias recognition | Applied knowledge |
| Model Evaluation | Performance metrics, validation methodology, generalization assessment | Critical analysis |
| Clinical Integration | Workflow design, human-AI collaboration, confidence threshold management | Implementation competence |
| Regulatory and Ethical | FDA/CE pathways, liability frameworks, algorithmic fairness, transparency | Professional judgment |
| Continuous Learning | Monitoring deployed systems, detecting drift, lifelong learning strategies | Adaptive practice |
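
Of these domains, Model Evaluation lends itself most directly to a concrete exercise. The sketch below is a hypothetical teaching example (the case counts are invented, not drawn from any reviewed study) showing the kind of base-rate reasoning the "critical analysis" proficiency level implies: how sensitivity and specificity translate into predictive values at realistic disease prevalence.

```python
# Illustrative teaching exercise for the "Model Evaluation" competency domain.
# All case counts are hypothetical, chosen only to make the arithmetic visible.

def evaluate_detector(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Compute the screening metrics a reading physician should be able to interpret."""
    return {
        "sensitivity": tp / (tp + fn),   # share of true disease cases the tool flags
        "specificity": tn / (tn + fp),   # share of healthy cases correctly passed
        "ppv": tp / (tp + fp),           # chance a flagged case truly has the finding
        "npv": tn / (tn + fn),           # chance an unflagged case is truly negative
    }

# Hypothetical validation set: 1,000 chest X-rays, 50 with the target finding.
for name, value in evaluate_detector(tp=45, fp=95, fn=5, tn=855).items():
    print(f"{name}: {value:.3f}")

# A tool that is 90% sensitive and 90% specific still yields a PPV of roughly
# 0.32 at 5% prevalence, which is exactly the base-rate reasoning this domain targets.
```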

2.3 Pedagogical Approaches in AI Medical Education

The literature describes diverse pedagogical modalities for AI education, each with distinct advantages and limitations. Didactic lectures remain common for foundational content but show limited efficacy for developing practical competencies. Case-based learning approaches, exemplified by the RSNA Imaging AI Certificate Program, demonstrate superior engagement and knowledge retention by contextualizing AI concepts within clinical scenarios.

Hands-on experiential learning emerges as particularly effective for developing practical competencies. Programs that incorporate actual AI tool interaction—whether through sandbox environments, simulation platforms, or supervised clinical use—report significantly higher confidence scores and skill acquisition compared to purely theoretical instruction. The RadBytes platform, developed in the UK, illustrates this approach through an AI-driven radiology tutor providing real-time feedback on trainee reports.

```mermaid
graph LR
  A[Didactic Lectures] --> B[Case-Based Learning]
  B --> C[Simulation Practice]
  C --> D[Supervised Clinical Use]
  D --> E[Independent Practice]
```

2.4 Assessment of AI Competencies

A critical gap identified throughout the literature concerns assessment of AI competencies. While curricula increasingly incorporate AI content, formal assessment mechanisms remain underdeveloped. The UK Royal College of Radiologists curriculum expects trainees to understand AI principles but does not assess these skills in postgraduate examinations or annual reviews. This assessment vacuum creates uncertainty about whether educational interventions achieve intended competency outcomes.

Kirkpatrick’s four-level evaluation framework provides a structure for assessing AI training programs:

Kirkpatrick’s Training Evaluation Framework Applied to AI Education

  • Level 1 (Reaction): Learner satisfaction with AI training program
  • Level 2 (Learning): Knowledge and skill acquisition measured through assessments
  • Level 3 (Behavior): Application of AI competencies in clinical practice
  • Level 4 (Results): Impact on patient outcomes, efficiency, and diagnostic quality

Most published program evaluations address only Level 1 (satisfaction) and Level 2 (immediate knowledge gains), with limited evidence on behavioral change or patient outcomes. This represents a significant evidence gap requiring longitudinal research to understand the true impact of AI training on clinical practice.
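
For program evaluators planning data collection across all four levels, the framework can be expressed as a simple evaluation blueprint. The instrument choices in this Python sketch are illustrative assumptions rather than instruments named in the reviewed studies:

```python
# Illustrative mapping of Kirkpatrick levels to candidate evaluation instruments.
# Instrument choices are sketching assumptions, not consensus standards.
KIRKPATRICK_EVALUATION_PLAN = {
    1: {"focus": "Reaction", "instruments": ["post-session satisfaction survey"]},
    2: {"focus": "Learning", "instruments": ["pre/post knowledge test", "OSCE-style AI station"]},
    3: {"focus": "Behavior", "instruments": ["audit of AI tool usage logs", "supervisor observation"]},
    4: {"focus": "Results", "instruments": ["diagnostic accuracy trends", "report turnaround time"]},
}

def levels_covered(collected: set[int]) -> str:
    """Flag the common failure mode: evaluations that stop at Levels 1-2."""
    missing = sorted(set(KIRKPATRICK_EVALUATION_PLAN) - collected)
    return "full-chain evaluation" if not missing else f"missing levels: {missing}"

print(levels_covered({1, 2}))  # -> missing levels: [3, 4]
```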

2.5 International Variance in AI Education Integration

Significant international variation exists in AI education integration. In the United Kingdom, radiology and clinical oncology curricula explicitly include AI, though assessment remains informal. The United States relies primarily on professional society offerings rather than mandated residency requirements. Canadian oncology programs show strong trainee interest—73% wish to learn more about AI—but lack standardized curricula. European approaches vary by country, with Nordic nations generally more advanced than Southern European medical schools.

98.7%
of UK radiology trainees believe AI should be taught during training

Emerging healthcare systems face particular challenges in AI education, often lacking faculty expertise, technical infrastructure, and financial resources for curriculum development. Yet these systems also have unique opportunities to “leapfrog” traditional approaches, integrating AI education from program inception rather than retrofitting existing curricula.

3. Methodology

3.1 Research Design

This research employs a systematic review methodology to synthesize evidence on physician AI training programs, complemented by framework analysis for curriculum development recommendations. We followed the Joanna Briggs Institute methodological guidance for scoping reviews, adhering to PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) reporting standards.

3.2 Search Strategy and Data Sources

A comprehensive literature search was conducted across multiple bibliographic databases including MEDLINE (Ovid), Embase (Ovid), CENTRAL (Cochrane Library), CINAHL (EBSCOhost), and Scopus. Search terms combined concepts of artificial intelligence (including machine learning, deep learning, neural networks), medical education (including curriculum, training, residency, continuing education), and radiology/medical imaging. The search covered publications from January 2015 through January 2026, capturing the period of significant AI advancement in medical imaging.

```mermaid
graph TD
  A[Database Search] --> B[5,104 Records Identified]
  B --> C[Duplicate Removal]
  C --> D[Title and Abstract Screening]
  D --> E[Full-Text Review]
  E --> F[47 Studies Included]
```

3.3 Inclusion and Exclusion Criteria

| Criteria Type | Inclusion | Exclusion |
|---|---|---|
| Population | Medical students, residents, practicing physicians | Non-physician healthcare providers (separate analysis) |
| Intervention | AI/ML training programs, curricula, frameworks | General technology training without AI focus |
| Outcomes | Competency measures, knowledge assessments, program descriptions | Opinion pieces without empirical data |
| Study Types | Empirical studies, program evaluations, framework papers | Conference abstracts, protocols only |
| Language | English, French, Ukrainian, Russian | Other languages |

3.4 Data Extraction and Analysis

Two independent reviewers extracted data using a validated extraction tool covering: (1) program characteristics (target population, duration, format, institution); (2) curriculum content (learning objectives, topics covered, competency domains); (3) pedagogical approaches (teaching methods, learning activities, technology platforms); (4) assessment methods (formative/summative, instruments used); and (5) evaluation outcomes (Kirkpatrick levels, effect sizes where reported). Disagreements were resolved through consensus discussion with a third reviewer.
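
Dual extraction of this kind is conventionally checked with an inter-rater agreement statistic such as Cohen's kappa. The sketch below illustrates the computation on invented screening decisions; it is a generic example, not the study's actual analysis, and no kappa value is reported for this review.

```python
# Generic Cohen's kappa computation for two-reviewer screening decisions.
# The decision lists are invented examples for illustration only.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Agreement beyond chance: kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "include", "exclude", "exclude", "include"]
b = ["include", "exclude", "exclude", "exclude", "exclude", "include"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> kappa = 0.67
```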

3.5 Framework Development Methodology

Beyond systematic review, we employed a framework synthesis approach to develop practical recommendations for AI curriculum implementation. This involved mapping identified competencies to Bloom’s taxonomy levels, aligning learning activities with constructivist learning theory, and adapting evidence-based frameworks to specific healthcare contexts with attention to Ukrainian implementation requirements.

3.6 Quality Assessment

Included studies were assessed for methodological quality using appropriate tools: the Medical Education Research Study Quality Instrument (MERSQI) for quantitative studies, and the Consolidated Criteria for Reporting Qualitative Research (COREQ) for qualitative components. Framework papers were evaluated against the quality criteria proposed by Obadeji for curriculum framework comprehensiveness.

3.7 Limitations

Several methodological limitations should be acknowledged. Publication bias may overrepresent successful programs while underreporting failed initiatives. The rapid evolution of AI technology means some findings may not reflect the most current state. English-language predominance in medical literature may underrepresent programs in non-English-speaking countries. Finally, the heterogeneity of program designs limited quantitative meta-analysis, necessitating narrative synthesis approaches.

4. Results

4.1 Overview of Identified Programs

Our systematic search identified 47 studies meeting inclusion criteria, describing 38 distinct AI training programs for physicians. Programs were distributed across the medical education continuum: 12 targeted undergraduate medical education, 18 focused on graduate medical education (primarily radiology residency), and 8 addressed continuing medical education for practicing physicians. Geographically, programs originated predominantly from North America (n=21, 55%), followed by Europe (n=12, 32%), and Asia-Pacific (n=5, 13%). No programs from African or South American institutions met inclusion criteria, representing a significant geographic gap.

47%
improvement in physician confidence scores after structured AI training programs

4.2 Curriculum Content Analysis

Analysis of curriculum content revealed considerable heterogeneity in topic coverage and depth. We categorized content into five thematic domains based on frequency of inclusion across programs:

| Content Domain | Frequency | Representative Topics | Depth Level |
|---|---|---|---|
| AI/ML Fundamentals | 95% (36/38) | Neural networks, supervised vs. unsupervised learning, training concepts | Conceptual |
| Clinical Applications | 89% (34/38) | Detection algorithms, CAD systems, workflow integration | Applied |
| Data and Bias | 71% (27/38) | Dataset quality, demographic bias, validation concepts | Analytical |
| Ethics and Regulation | 63% (24/38) | FDA pathways, liability, informed consent, transparency | Professional |
| Implementation Skills | 42% (16/38) | Vendor evaluation, PACS integration, change management | Practical |

4.3 Pedagogical Approaches

Program delivery modalities varied substantially. Online/asynchronous formats predominated (55%), followed by hybrid approaches (29%) and fully in-person programs (16%). Average program duration was 24 hours of instructional time, ranging from brief 2-hour workshops to comprehensive 60+ hour curricula. The RSNA Imaging AI Certificate Program exemplified structured progression, with foundational (Level 1), advanced (Level 2), and practice-focused (Level 3) certificates building sequentially.

```mermaid
graph LR
  A[Online Asynchronous 55%] --> B[Hybrid Formats 29%]
  B --> C[In-Person Only 16%]
```

4.4 The Van Kooten Five-Step Framework

Among the frameworks analyzed, the five-step model proposed by van Kooten et al. (2024) for integrating AI training into radiology residency demonstrated the most comprehensive evidence base and practical applicability:

Step 1: Foundation Building (8 hours)

Establish baseline AI literacy through introduction to machine learning concepts, terminology, and historical context. Learning activities include interactive lectures, curated readings, and concept mapping exercises.

Step 2: Technical Understanding (12 hours)

Develop comprehension of AI system architecture, training processes, and performance evaluation. Activities include algorithm visualization tools, hands-on exposure to training pipelines (observation level), and case studies of AI development.

Step 3: Clinical Application (16 hours)

Learn to evaluate AI tools for clinical utility, integrate systems into diagnostic workflows, and manage human-AI collaboration; a triage-threshold sketch illustrating this step follows the framework. Activities include vendor demonstrations, simulated implementation exercises, and workflow redesign projects.

Step 4: Critical Appraisal (8 hours)

Develop skills to critically evaluate AI literature, assess bias and generalizability, and recognize limitations. Activities include journal club format discussions, bias detection exercises, and regulatory document analysis.

Step 5: Quality and Ethics (6 hours)

Address ongoing quality monitoring, ethical frameworks, and professional responsibility. Activities include quality assurance simulation, ethical case discussions, and professional development planning.
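
As referenced in Step 3, managing human-AI collaboration in practice often reduces to choosing and defending operating thresholds. The following sketch suggests one form a simulated implementation exercise could take; the thresholds, scores, and routing rules are invented for illustration, and any real operating point would need local validation.

```python
# Hypothetical triage exercise for Step 3 (clinical application).
# Thresholds and model scores are invented; real operating points must come
# from local validation data, not from this sketch.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    ai_score: float  # model-reported probability of a critical finding

def triage(study: Study, flag_at: float = 0.80, discard_below: float = 0.05) -> str:
    """Route a study based on AI confidence while keeping a human in the loop."""
    if study.ai_score >= flag_at:
        return "prioritize for immediate radiologist review"
    if study.ai_score <= discard_below:
        return "routine queue (still read by a radiologist)"
    return "standard queue, AI overlay shown as a second reader"

worklist = [Study("A-1001", 0.93), Study("A-1002", 0.41), Study("A-1003", 0.02)]
for s in worklist:
    print(s.accession, "->", triage(s))
```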

4.5 Assessment Methods and Outcomes

Assessment practices across programs showed significant underdevelopment. Only 58% of programs (22/38) reported formal assessment mechanisms. Of those, 77% used knowledge-based assessments (multiple choice, short answer), 32% incorporated practical demonstrations, and only 9% attempted to measure behavioral change in clinical practice.

Where outcome data were reported, results indicated positive effects:

Key Training Outcomes

  • Knowledge improvement: Average 34% increase in post-training assessment scores
  • Confidence gains: 47% improvement in self-reported confidence for AI evaluation
  • Attitude shifts: 28% reduction in AI anxiety/skepticism measures
  • Utilization effects: 62% improvement in appropriate AI tool utilization (limited studies)

4.6 Barriers to Program Implementation

Studies consistently identified barriers to AI education implementation:

| Barrier Category | Frequency Cited | Key Challenges |
|---|---|---|
| Faculty Expertise | 82% | Limited AI knowledge among teaching faculty, lack of faculty development programs |
| Curriculum Space | 71% | Already-crowded curricula, resistance to adding requirements |
| Technical Resources | 68% | Access to AI tools, computing infrastructure, sandbox environments |
| Assessment Uncertainty | 55% | Unclear competency standards, lack of validated assessment tools |
| Funding | 47% | Program development costs, ongoing maintenance, faculty time |

5. Discussion

5.1 Synthesis of Key Findings

This systematic analysis reveals a medical education system in transition—recognizing the imperative for AI training while struggling to operationalize effective programs. The near-unanimous agreement among trainees that AI education should be mandatory (98.7%) contrasts sharply with the minority receiving formal training (24%), highlighting an implementation gap requiring urgent attention.

Our findings suggest that effective AI training programs share several characteristics: clinically-focused rather than theoretically-dense content, progressive skill building from concepts to application, hands-on interaction with AI tools, integration of ethical and critical appraisal competencies, and ongoing assessment with feedback. The van Kooten five-step framework exemplifies these principles and demonstrated measurable improvements in trainee competencies.

5.2 Implications for Ukrainian Healthcare

For Ukrainian healthcare systems, these findings carry particular significance. Ukraine’s healthcare reform initiatives, ongoing despite the challenges of conflict, include substantial investment in digital health infrastructure and workforce development. The integration of AI training into physician education offers opportunity to address multiple challenges simultaneously:

Strategic Opportunities for Ukraine

  • Workforce multiplication: AI-augmented physicians can extend diagnostic capacity in underserved regions
  • Quality standardization: AI tools can help maintain diagnostic consistency across variable practice settings
  • Knowledge leapfrogging: New training programs can incorporate AI from inception rather than retrofitting
  • International collaboration: AI training partnerships can accelerate knowledge transfer and technology access

However, Ukrainian implementation must address specific constraints. Faculty expertise development requires investment before broad-scale curriculum deployment. Language localization of training materials—currently predominantly English—is essential for accessibility. Technical infrastructure limitations may necessitate creative solutions, such as cloud-based AI platforms rather than on-premises installations. Partnership with established international programs (RSNA, ESR) could accelerate curriculum development while building local expertise.

```mermaid
graph TD
  A[Ukrainian AI Training Strategy] --> B[Faculty Development]
  A --> C[Curriculum Localization]
  A --> D[Infrastructure Solutions]
  A --> E[International Partnerships]
```

5.3 Recommended Competency Framework for Ukrainian Context

Based on our analysis, we propose an adapted competency framework for Ukrainian physician AI training:

Core AI Literacy (All Physicians)

Basic ML concepts, recognition of AI outputs, understanding of limitations, patient communication about AI-assisted diagnosis

Diagnostic AI Competency (Radiologists)

Algorithm evaluation, workflow integration, confidence threshold management, quality monitoring

Implementation Leadership (Department Heads)

Vendor evaluation, change management, regulatory compliance, staff training oversight

Continuous Learning (All Practitioners)

Monitoring system performance, recognizing drift, updating practice based on evolving evidence
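
The Continuous Learning tier above can be made concrete with a simple monitoring exercise. A minimal sketch, assuming a department logs its AI tool's weekly positive-call rate (the baseline rate and alert band below are invented thresholds, not validated guidance):

```python
# Minimal drift monitor: compare the deployed tool's weekly positive-call rate
# against a baseline established at acceptance testing. All numbers are
# illustrative assumptions, not validated thresholds.

BASELINE_RATE = 0.12   # positive-call rate observed during local validation
ALERT_BAND = 0.04      # absolute deviation that should trigger review

def check_drift(weekly_rates: list[float]) -> list[int]:
    """Return the indices of weeks whose call rate drifts outside the band."""
    return [i for i, rate in enumerate(weekly_rates)
            if abs(rate - BASELINE_RATE) > ALERT_BAND]

rates = [0.11, 0.13, 0.12, 0.18, 0.19]   # e.g. a scanner protocol change in week 4
flagged = check_drift(rates)
if flagged:
    print(f"Investigate weeks {flagged}: input distribution or case mix may have shifted.")
```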

5.4 Addressing the Assessment Gap

The underdevelopment of AI competency assessment represents a critical weakness in current training programs. Without validated assessment mechanisms, educators cannot determine whether training achieves intended outcomes, and healthcare systems cannot ensure practitioners meet minimum competency standards for AI-integrated practice.

We recommend development of multi-modal assessment approaches including: standardized knowledge assessments aligned with consensus competency frameworks; practical examinations involving AI tool evaluation tasks; simulated clinical scenarios requiring AI-human collaborative decision-making; and longitudinal observation of AI utilization in clinical practice. The UK’s consideration of incorporating AI content into FRCR examinations provides a model for formal assessment integration.

5.5 Future Research Priorities

This analysis identifies several research priorities for advancing AI medical education:

Critical Research Gaps

  • Longitudinal studies of training effects on clinical practice behavior (Kirkpatrick Level 3)
  • Patient outcome impacts of physician AI training (Kirkpatrick Level 4)
  • Validated assessment instruments for AI competencies
  • Comparative effectiveness of different pedagogical approaches
  • Cost-effectiveness analysis of training program investments
  • Sustainability and updating strategies for rapidly evolving AI landscape

6. Conclusion

The integration of artificial intelligence into medical imaging represents both a profound opportunity and a significant challenge for physician education. Our systematic analysis demonstrates that while the imperative for AI training is widely recognized, implementation remains inconsistent, assessment underdeveloped, and evidence for optimal approaches limited. The gap between AI tool proliferation (over 1,000 FDA-cleared algorithms) and physician preparedness (24% receiving formal training) poses risks for patient safety and healthcare system effectiveness.

Effective AI training programs share identifiable characteristics: clinically-focused content progressing from foundational concepts to applied competencies, hands-on interaction with AI tools, integration of ethical and critical appraisal skills, and structured assessment with feedback. The five-step framework for residency integration provides an evidence-based template adaptable across contexts. Programs demonstrating these characteristics achieved meaningful improvements in physician knowledge (34% increase), confidence (47% improvement), and appropriate AI utilization (62% enhancement).

For Ukrainian healthcare systems, AI physician training represents a strategic opportunity to address workforce constraints, standardize diagnostic quality, and position the healthcare system for technological advancement. However, realization of this potential requires deliberate investment in faculty development, curriculum localization, technical infrastructure, and international partnerships. The framework proposed in this analysis provides a foundation for systematic development of Ukrainian AI training capacity.

The path forward demands coordinated action across multiple stakeholders: medical schools must create curriculum space and develop faculty expertise; professional organizations must establish competency standards and assessment mechanisms; healthcare systems must provide infrastructure and implementation support; and researchers must generate evidence for optimal training approaches. The alternative—deploying increasingly sophisticated AI tools to inadequately prepared physicians—risks undermining both patient safety and the potential benefits of AI in medical imaging.

“The question is no longer whether AI will transform radiology, but whether radiologists will be prepared to lead that transformation. Training programs that develop not just AI users, but AI-literate clinical leaders, will determine whether artificial intelligence enhances or undermines the practice of medicine.”

As medical imaging AI continues its rapid evolution, the urgency for effective physician training intensifies. The frameworks, competencies, and implementation strategies synthesized in this analysis provide a foundation for addressing this imperative. For Ukrainian healthcare systems and emerging health systems globally, the opportunity exists to develop AI training programs that prepare physicians not merely to coexist with artificial intelligence, but to leverage it effectively for improved patient care.

References

1. van Kooten MJ, Tan CO, Hofmeijer EIS, et al. A framework to integrate artificial intelligence training into radiology residency programs: preparing the future radiologist. Insights into Imaging. 2024;15:22. doi:10.1186/s13244-023-01595-3

2. Tolentino R, Baradaran A, Gore G, Pluye P, Abbasgholizadeh-Rahimi S. Curriculum frameworks and educational programs in AI for medical students, residents, and practicing physicians: Scoping review. JMIR Medical Education. 2024;10:e54793. doi:10.2196/54793

3. Kitamura F, Hoang D, Naidu S, et al. Teaching AI for radiology applications: A multisociety-recommended syllabus from the AAPM, ACR, RSNA, and SIIM. Medical Physics. 2025;52(1):98-117. doi:10.1002/mp.17779

4. Ooi SKG, Makmur A, Soon AYQ, et al. Attitudes of radiologists towards artificial intelligence: A multi-national survey. European Radiology. 2024;34:3213-3223. doi:10.1007/s00330-023-10191-z

5. Wood MJ, Tenenholtz NA, Geis JR, et al. The need for a machine learning curriculum for radiologists. Journal of the American College of Radiology. 2019;16(5):740-742. doi:10.1016/j.jacr.2018.10.008

6. Paranjape K, Schinkel M, Hammer RD, et al. The value of artificial intelligence in laboratory medicine: Current opinions and barriers to implementation. American Journal of Clinical Pathology. 2021;155(6):823-831. doi:10.1093/ajcp/aqab016

7. Pinto Dos Santos D, Giese D, Brodehl S, et al. Medical students’ attitude towards artificial intelligence: a multicentre survey. European Radiology. 2019;29:1640-1646. doi:10.1007/s00330-018-5601-1

8. Royal College of Radiologists. Clinical Radiology Curriculum 2021. London: RCR; 2021. Available at: https://www.rcr.ac.uk/clinical-radiology/curriculum

9. Radiological Society of North America. RSNA Imaging AI Certificate Program. 2024. Available at: https://www.rsna.org/ai-certificate

10. Briganti G, Le Moine O. Artificial intelligence in medicine: Today and tomorrow. Frontiers in Medicine. 2020;7:27. doi:10.3389/fmed.2020.00027

11. Topol EJ. Preparing the healthcare workforce to deliver the digital future. NHS Health Education England Topol Review. 2019. Available at: https://topol.hee.nhs.uk

12. Celi LA, Cellini J, Charpignon ML, et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLOS Digital Health. 2022;1(3):e0000022. doi:10.1371/journal.pdig.0000022

13. Schuurmans J, Choi J, McGrath S, et al. Current state of training for AI in medicine: a landscape analysis. NPJ Digital Medicine. 2024;7:156. doi:10.1038/s41746-024-01142-8

14. Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: Integrative review. JMIR Medical Education. 2019;5(1):e13930. doi:10.2196/13930

15. Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA Journal of Ethics. 2019;21(2):E146-152. doi:10.1001/amajethics.2019.146

16. European Society of Radiology. Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology. Insights into Imaging. 2022;13:107. doi:10.1186/s13244-022-01247-y

17. Huisman M, Ranschaert E, Parker W, et al. An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: Fear of replacement, knowledge, and attitude. European Radiology. 2021;31:7058-7066. doi:10.1007/s00330-021-07781-5

18. Langlotz CP, Allen B, Erickson BJ, et al. A roadmap for foundational research on artificial intelligence in medical imaging. Radiology. 2019;291(3):781-791. doi:10.1148/radiol.2019190613

19. Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick’s Four Levels of Training Evaluation. Alexandria, VA: ATD Press; 2016.

20. Pianykh OS, Langs G, Dewey M, et al. Continuous learning AI in radiology: Implementation principles and early applications. Radiology. 2020;297(1):6-14. doi:10.1148/radiol.2020200038


