US Experience: FDA-Approved AI Devices
Article #7 in Medical ML for Ukrainian Doctors Series
By Oleh Ivchenko | Researcher, ONPU | Stabilarity Hub | February 8, 2026
📋 Key Questions Addressed
- How has the US regulatory landscape shaped AI medical device development, and what does the current FDA approval landscape look like?
- What evidence exists for clinical effectiveness of FDA-approved AI devices, and where are the validation gaps?
- What lessons can Ukraine learn from the US experience implementing medical AI?
Context: Why This Matters for Ukrainian Healthcare
As Ukraine develops its regulatory framework for medical AI (aligned with EU MDR through recent reforms), understanding the world’s largest medical AI market provides invaluable lessons. The US FDA has authorized over 1,200 AI/ML-enabled medical devices—more than any other regulatory body—making it the de facto testing ground for medical AI deployment.
The FDA AI Approval Landscape: 2025 in Numbers
Explosive Growth in Authorizations
```mermaid
xychart-beta
    title "FDA AI/ML Device Authorizations Over Time"
    x-axis [2015, 2019, 2022, 2023, 2024, 2025]
    y-axis "Devices" 0 --> 1300
    bar [40, 180, 520, 700, 950, 1200]
    line [40, 180, 520, 700, 950, 1200]
```
📈 Growth Rate
The authorization rate has grown at approximately 49% annually since 2016—reflecting both technological maturity and streamlined regulatory pathways.
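As a quick worked example, the compound annual growth rate implied by two endpoint counts can be computed directly. Note that the chart above uses rounded approximations; the article's 49% figure is presumably based on the underlying year-by-year FDA counts, so the rounded chart values yield a somewhat lower rate:

```python
# Sanity-check the compound annual growth rate (CAGR) implied by
# the rounded authorization counts plotted in the chart above.
start, end = 40, 1200    # approximate cumulative device counts
years = 2025 - 2015      # ten-year span

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints "Implied CAGR: 40.5%"
```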
Specialty Distribution
```mermaid
pie showData
    title FDA AI Approvals by Specialty (2024)
    "Radiology" : 77
    "Cardiology" : 9
    "Neurology" : 3
    "Hematology" : 2
    "Other" : 9
```
Radiology’s dominance reflects early digitization, abundant training data, and established PACS infrastructure—factors Ukrainian hospitals should consider when prioritizing AI adoption.
Functional Categories
The Regulatory Reality: How Devices Get Approved
The 510(k) Pathway Dominance
```mermaid
graph LR
    A[AI Medical Device] --> B{Pathway Selection}
    B -->|97%| C["510(k) Pathway"]
    B -->|2%| D[De Novo]
    C --> F[No Independent Clinical Data]
    D --> G[Some Clinical Data]
```
The Evidence Gap: A Systematic Review
A landmark 2025 JAMA Network Open systematic review of 723 FDA-authorized radiology AI devices revealed concerning gaps:
“Most AI/ML devices are used in conjunction with a human, yet only 56 were tested with any human operator. Most have not been validated against defined clinical or performance endpoints.”
— JAMA Network Open systematic review, 2025
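The review's headline numbers translate into stark proportions. A quick calculation using the figures quoted here makes the gap concrete:

```python
# Evidence-gap proportion from the 2025 JAMA Network Open review
# of 723 FDA-authorized radiology AI devices.
total_devices = 723
human_in_loop = 56   # devices tested with any human operator

share = human_in_loop / total_devices
print(f"Tested with a human operator: {share:.0%}")  # prints "Tested with a human operator: 8%"
```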
Real-World Implementation: The Mayo Clinic Model
Mayo Clinic represents the gold standard for AI integration, currently using over 250 AI tools in clinical workflows:
🎯 Image Prioritization
Identifies highest-probability abnormal images
🔍 Incidental Detection
Flags incidental findings, such as blood clots, outside an exam's primary focus
⚙️ PACS Integration
Embedded in existing systems
“A.I. is everywhere in our workflow now.”
— Dr. Felix Baffour, Mayo Clinic Radiologist (NYT, May 2025)
The Performance Heterogeneity Problem
A pivotal 2024 Nature Medicine study examined AI effects on 140 radiologists across 15 pathologies, finding that AI assistance improved some radiologists' accuracy while degrading others'—the benefit was far from uniform.
Market Leaders and Notable Devices
Top Companies by FDA Authorizations (2023)
Notable FDA-Approved Devices
🚨 High-Impact Triage Tools
- ContaCT (Viz.ai) – Stroke detection
- Aidoc BriefCase – Multi-pathology triage
- Caption AI – Echo guidance for non-specialists
🖼️ Image Enhancement
- SmartSpeed Precise (Philips) – MRI 50% faster
- TrueFidelity (GE) – CT reconstruction
- Allix5 (Clairity) – General image analysis
Challenges and Lessons Learned
Key Challenges Identified
```mermaid
mindmap
  root((FDA AI Challenges))
    Validation Gap
      Less than 2% RCT support
      Limited prospective testing
      510k lacks clinical data
    Generalizability
      Training data bias
      Single-site limitations
      Equipment variations
    Integration
      PACS complexity
      Workflow redesign
      Change management
    Monitoring
      Weak post-market surveillance
      Limited adverse event reporting
      Algorithm drift concerns
```
⚠️ Mayo Clinic’s Assessment
“Very few randomized, controlled trials have shown the safety and effectiveness of existing AI algorithms in radiology, and the lack of real-world evaluation of AI systems can pose a substantial risk to patients and clinicians.”
— Mayo Clinic Platform, April 2025
Practical Implications for Ukrainian Healthcare
What Works in the US Experience
- Start with workflow augmentation, not replacement: The most successful AI tools assist rather than decide
- Focus on high-volume, high-stakes use cases: Triage for stroke, pulmonary embolism (PE), and trauma shows clear value
- Integrate into existing PACS systems: Standalone AI tools see lower adoption
- Validate locally before deployment: FDA clearance does not guarantee local effectiveness
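Local validation need not be elaborate to be useful. Below is a minimal sketch of comparing a device's performance on a local test set against its vendor-claimed figures; all counts, claimed metrics, and the acceptance margin are hypothetical:

```python
# Minimal local-validation check: compare sensitivity/specificity
# measured on a local test set against vendor-claimed figures.
# All numbers here are hypothetical, for illustration only.

def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical local results for a triage tool
local_sens, local_spec = sens_spec(tp=88, fn=12, tn=180, fp=20)

# Hypothetical vendor-claimed performance and an acceptance margin
claimed_sens, claimed_spec, margin = 0.95, 0.93, 0.05

ok = (local_sens >= claimed_sens - margin and
      local_spec >= claimed_spec - margin)
print(f"sensitivity={local_sens:.2f}, specificity={local_spec:.2f}, pass={ok}")
# pass is False here: the local results fall short of the claim,
# illustrating why regulatory clearance alone is not enough.
```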
US vs Ukraine Comparison
ScanLab Integration Notes
🔬 For ScanLab Development
- Prioritize quantification features: 58% of FDA approvals are quantification tools (lower regulatory barrier)
- Build physician-in-the-loop from day one: Only 8% of FDA devices were tested with human operators—we can do better
- Plan for local validation: FDA clearance means little for Ukrainian patient populations
- Design for PCCP-style updates: Algorithm improvement should be architecturally supported
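Architectural support for PCCP-style updates can start with something as simple as versioned model records that carry their own validated metrics, gated against pre-specified performance bounds. A hypothetical sketch (all class names, thresholds, and values are illustrative, not part of any FDA guidance):

```python
# Hypothetical sketch of PCCP-style update gating: each candidate
# model version carries its validated metrics, and deployment is
# allowed only if it stays within pre-specified performance bounds.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVersion:
    version: str
    sensitivity: float
    specificity: float

# Pre-specified bounds (the "change protocol" part of a PCCP)
MIN_SENSITIVITY = 0.90
MIN_SPECIFICITY = 0.85

def may_deploy(candidate: ModelVersion) -> bool:
    """Gate an algorithm update against the pre-specified bounds."""
    return (candidate.sensitivity >= MIN_SENSITIVITY and
            candidate.specificity >= MIN_SPECIFICITY)

v2 = ModelVersion("2.1.0", sensitivity=0.93, specificity=0.88)
print(may_deploy(v2))  # True: within the pre-specified envelope
```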
Conclusions: Original Insights
📊 The Paradox of Scale
The US has authorized 1,200+ AI devices but less than 2% have rigorous clinical evidence—quantity has outpaced quality assurance
⚠️ The 510(k) Loophole
Substantial equivalence to predecessors cannot ensure AI performs as claimed in real clinical settings
🎭 Performance Heterogeneity
AI doesn’t uniformly help all radiologists—it may widen the gap between high and low performers
✅ Integration > Algorithms
Mayo Clinic’s success with 250+ AI tools stems from disciplined implementation, not just FDA clearance
Questions Answered
✅ How has the US regulatory landscape shaped AI medical device development?
The 510(k) pathway’s dominance (97% of approvals) has enabled rapid market entry but created an evidence gap—most devices lack rigorous clinical validation.
✅ What evidence exists for clinical effectiveness?
Limited: only 5% underwent prospective testing, 8% included human-in-the-loop evaluation, and <2% have RCT support.
✅ What lessons can Ukraine learn?
Start with workflow augmentation, prioritize high-volume use cases, integrate into existing systems, and build local validation programs from the start.
Open Questions for Future Research
- How do AI devices approved under stricter pathways (De Novo, PMA) compare in real-world performance?
- What governance frameworks best support successful AI integration in resource-constrained settings?
- How should Ukraine’s emerging regulatory framework balance innovation incentives with clinical evidence requirements?
Next in Series: Article #8 – EU Experience: CE-Marked Diagnostic AI
Series: Medical ML for Ukrainian Doctors | Stabilarity Hub Research Initiative
Author: Oleh Ivchenko | ONPU Researcher | Stabilarity Hub
