[Medical ML] EU Experience: CE-Marked Diagnostic AI

Posted on February 9, 2026 (updated February 10, 2026) by Yoman

📚 Medical Machine Learning Research Series

EU Experience: CE-Marked Diagnostic AI — A Comprehensive Analysis of Regulatory Frameworks and Clinical Implementation

👤 Oleh Ivchenko, PhD Candidate
🏛️ Medical AI Research Laboratory, Taras Shevchenko National University of Kyiv
📅 February 2026

Tags: CE Marking, Medical Device Regulation, AI Diagnostics, European Union, Healthcare AI

📋 Abstract

The European Union has emerged as a global leader in establishing comprehensive regulatory frameworks for artificial intelligence in medical diagnostics, with the CE marking process serving as the cornerstone of quality assurance and patient safety. This paper presents an extensive analysis of the EU’s experience with CE-marked diagnostic AI systems, examining the regulatory journey from the Medical Device Directive (93/42/EEC) through the transformative Medical Device Regulation (EU 2017/745) and its intersection with the groundbreaking AI Act. Through systematic evaluation of 127 CE-marked AI medical devices across 23 European member states, we identify critical success factors, implementation barriers, and lessons learned that are directly applicable to emerging healthcare AI ecosystems, including Ukraine’s developing medical technology sector. Our analysis reveals that successful CE certification correlates strongly with early regulatory engagement, robust clinical evidence generation, and comprehensive post-market surveillance systems. The findings demonstrate that while the regulatory pathway presents significant challenges, particularly for smaller innovators, it ultimately ensures higher quality deployments with measurable improvements in diagnostic accuracy and patient outcomes. These insights provide a roadmap for nations seeking to develop evidence-based regulatory frameworks for medical AI while balancing innovation incentives with patient safety imperatives.

1. Introduction: The Regulatory Imperative in Medical AI

The integration of artificial intelligence into medical diagnostics represents one of the most profound technological transformations in healthcare history. As machine learning algorithms demonstrate capabilities that match or exceed human performance in specific diagnostic tasks—from detecting diabetic retinopathy with 94.5% sensitivity to identifying malignant melanomas with 91.3% accuracy—the imperative for robust regulatory oversight has never been more critical. The European Union, home to over 447 million citizens and one of the world’s most sophisticated healthcare ecosystems, has positioned itself at the forefront of this regulatory evolution through its comprehensive CE marking framework for medical devices incorporating artificial intelligence.

The challenge of regulating medical AI transcends traditional device oversight paradigms. Unlike conventional medical devices with static functionalities, AI systems exhibit characteristics that fundamentally challenge regulatory assumptions: they learn and adapt from data, their decision-making processes can be opaque, and their performance may vary across different patient populations and clinical contexts. These characteristics demand novel regulatory approaches that balance the facilitation of beneficial innovation with unwavering commitment to patient safety.

📊 EU AI Medical Device Landscape: 127+ CE-marked AI medical devices approved across 23 member states by 2025

The stakes of regulatory design are immense. Overly stringent requirements risk stifling innovation and denying patients access to potentially life-saving diagnostic tools. Conversely, insufficient oversight could expose millions to unvalidated algorithms whose failures might go undetected until significant harm has occurred. The EU’s approach, refined through decades of medical device regulation and now enhanced with AI-specific considerations, offers valuable lessons for the global community.

This paper makes four primary contributions to the literature on medical AI regulation. First, we provide the most comprehensive mapping of CE-marked diagnostic AI systems to date, cataloguing 127 devices across therapeutic areas including radiology, pathology, cardiology, dermatology, and ophthalmology. Second, we present empirical analysis of regulatory timelines, certification pathways, and post-market surveillance outcomes based on data from EU notified bodies and national competent authorities. Third, we identify systematic patterns in successful versus unsuccessful certification attempts, providing practical guidance for medical AI developers. Fourth, we examine the specific implications for Ukraine’s healthcare AI development, considering the country’s aspirations toward EU integration and the immediate needs of its healthcare system under extraordinary circumstances.

2. Literature Review: Evolution of EU Medical Device Regulation

The European regulatory framework for medical devices has evolved substantially over five decades, responding to technological advances, safety incidents, and harmonization imperatives. Understanding this evolution is essential for contextualizing current AI-specific requirements and anticipating future regulatory directions.

2.1 Historical Foundation: The Medical Device Directives

The foundational architecture of EU medical device regulation emerged through three directives adopted between 1990 and 1998: the Active Implantable Medical Devices Directive (90/385/EEC), the Medical Devices Directive (93/42/EEC), and the In Vitro Diagnostic Medical Devices Directive (98/79/EC). These directives established the essential requirements approach, whereby devices must meet defined safety and performance standards but manufacturers have flexibility in how they demonstrate compliance. The CE marking, indicating conformity with European requirements, became the passport for market access across the European Economic Area (Altenstetter, 2003; Kramer et al., 2012).

The classification system introduced under MDD 93/42/EEC categorized devices into four risk classes (I, IIa, IIb, and III), with conformity assessment procedures calibrated to risk level. Class I devices could be self-certified by manufacturers, while higher-risk classes required third-party assessment by designated notified bodies. This risk-based approach remains fundamental to current regulation, though specific classification rules have been substantially updated for software and AI.

2.2 Software as a Medical Device: Emerging Challenges

The proliferation of software-based medical devices in the 2000s and 2010s exposed significant gaps in the directive framework. The original texts, drafted with physical devices in mind, provided limited guidance on software lifecycle management, update procedures, and performance validation. The 2007 amendment (2007/47/EC) formally recognized standalone software as a medical device when meeting the intended purpose definition, but practical implementation remained inconsistent across member states (Pesapane et al., 2018).

Early AI-based medical devices—predominantly computer-aided detection (CAD) systems for mammography and other imaging modalities—navigated this ambiguous landscape with varying approaches. Notified bodies developed ad hoc assessment practices, resulting in inconsistent requirements and timelines. Research by Muehlematter et al. (2021) documented substantial variation in certification outcomes for functionally equivalent AI systems depending on the notified body and clinical evidence package.

[Figure: Regulatory evolution: MDD 93/42/EEC → MDR (EU) 2017/745 → software classification rules → AI Act integration → complete framework]

2.3 The Medical Device Regulation: A New Paradigm

The Medical Device Regulation (EU 2017/745), which entered full application in May 2021 after COVID-related delays, represents the most significant reform of European medical device oversight in three decades. The regulation was motivated by several high-profile safety incidents, including the PIP breast implant scandal and issues with metal-on-metal hip replacements, but its provisions have profound implications for AI-based devices.

Key changes relevant to AI diagnostics include: enhanced clinical evidence requirements with emphasis on clinical investigations rather than equivalence claims; strengthened post-market surveillance and vigilance obligations; new classification rules specifically addressing software (Rule 11); increased scrutiny of high-risk devices through the scrutiny procedure involving expert panels; and the EUDAMED database for enhanced transparency and traceability (European Commission, 2017). The regulation explicitly recognizes the unique characteristics of software-based devices, requiring consideration of the state of the art in development and design processes.

| Regulatory Aspect        | MDD (93/42/EEC)              | MDR (2017/745)                    | Impact on AI                       |
|--------------------------|------------------------------|-----------------------------------|------------------------------------|
| Clinical Evidence        | Equivalence accepted broadly | Clinical investigations preferred | Higher evidence burden             |
| Software Classification  | General rules applied        | Rule 11 specific to software      | Most AI devices Class IIa+         |
| Post-Market Surveillance | Basic requirements           | Comprehensive PMS system          | Continuous performance monitoring  |
| Transparency             | Limited public access        | EUDAMED database                  | Public algorithm information       |
| Notified Body Oversight  | National designation         | Joint assessment, re-designation  | Fewer, better qualified NBs        |

2.4 The AI Act and Medical Device Intersection

The European AI Act, adopted in 2024, introduces horizontal requirements for AI systems across sectors, including healthcare. Medical device AI falls under the “high-risk” category, subjecting it to stringent requirements regarding risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity. Crucially, the AI Act establishes that compliance with MDR/IVDR is presumed to satisfy corresponding AI Act requirements, avoiding duplicative assessment (Floridi, 2024).

However, the AI Act introduces additional considerations not fully addressed by medical device regulation, particularly regarding foundation models, general-purpose AI, and algorithmic transparency. For diagnostic AI systems built on large language models or foundation models, compliance requires attention to both regulatory streams. The Medical Device Coordination Group (MDCG) has issued guidance on the intersection, but practical implementation experience remains limited as of 2026 (MDCG, 2024).

3. Methodology: Mapping the CE-Marked AI Landscape

Our research employs a mixed-methods approach combining systematic database analysis, regulatory document review, and semi-structured interviews with stakeholders across the AI medical device ecosystem. This methodology enables both quantitative characterization of the CE-marked landscape and qualitative understanding of regulatory experiences.

3.1 Database Construction and Analysis

We constructed a comprehensive database of AI-based medical devices with CE marking through triangulation of multiple sources. Primary sources included the EUDAMED database (where available), notified body public registries, and manufacturer declarations of conformity. Secondary sources encompassed peer-reviewed publications, regulatory announcements, and industry databases such as those maintained by the Digital Health Society and AI4Health initiatives.

Devices were classified as AI-based according to the operational definition established by the MDCG: medical devices whose core functionality relies on algorithms that learn from data to generate outputs without explicit programming of the decision logic. This definition encompasses machine learning, deep learning, and adaptive algorithms while excluding rule-based expert systems and traditional statistical classifiers. Each device entry captured manufacturer details, therapeutic area, intended purpose, classification, notified body, certification date, and clinical evidence basis.
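To make the database structure concrete, the sketch below shows one way a device entry and the AI inclusion criterion could be represented in Python. The field names and types are our own illustration of the attributes listed above, not the actual format used in this study.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskClass(Enum):
    """MDR risk classes relevant to software devices."""
    I = "I"
    IIA = "IIa"
    IIB = "IIb"
    III = "III"

@dataclass
class DeviceEntry:
    """One row of the CE-marked AI device database (illustrative schema).

    Fields mirror the attributes captured per device in Section 3.1.
    """
    manufacturer: str
    therapeutic_area: str       # e.g. "radiology", "cardiology"
    intended_purpose: str
    risk_class: RiskClass
    notified_body: str          # NB identification number
    certification_date: date
    clinical_evidence: str      # "investigation", "equivalence", "literature", ...
    learns_from_data: bool      # core of the MDCG-style operational definition

def is_ai_based(entry: DeviceEntry) -> bool:
    # Inclusion criterion: functionality relies on algorithms that learn
    # decision logic from data (excludes rule-based expert systems and
    # traditional statistical classifiers).
    return entry.learns_from_data
```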

[Figure: Database construction pipeline: data sources → data extraction → AI classification → validation]

3.2 Regulatory Pathway Analysis

For a subset of 45 devices with accessible regulatory documentation, we conducted detailed pathway analysis examining: time from initial notified body engagement to certification; conformity assessment route selected; clinical evidence submitted (clinical investigation, equivalence, literature review, or combination); major objections raised during assessment; and post-market surveillance measures implemented. This analysis enables identification of factors associated with efficient certification and potential barriers.
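A minimal sketch of the timeline computation behind this pathway analysis, assuming each record carries an evidence route plus submission and certification dates; the three records shown are hypothetical placeholders, not data from the 45-device subset.

```python
from collections import defaultdict
from datetime import date

def months_between(start: date, end: date) -> float:
    """Approximate elapsed calendar months between two dates."""
    return (end - start).days / 30.44

# Hypothetical records: (evidence route, NB submission date, certification date).
pathways = [
    ("investigation", date(2022, 3, 1), date(2023, 1, 15)),
    ("equivalence", date(2022, 6, 1), date(2023, 9, 30)),
    ("literature", date(2021, 11, 1), date(2022, 8, 20)),
]

by_route = defaultdict(list)
for route, submitted, certified in pathways:
    by_route[route].append(months_between(submitted, certified))

for route, durations in sorted(by_route.items()):
    mean = sum(durations) / len(durations)
    print(f"{route}: n={len(durations)}, mean {mean:.1f} months to certification")
```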

3.3 Stakeholder Interviews

Semi-structured interviews were conducted with 32 stakeholders across five categories: notified body assessors (n=8), regulatory affairs professionals from medical AI companies (n=11), clinical investigators involved in device validation (n=6), competent authority officials (n=4), and healthcare technology assessment experts (n=3). Interviews explored regulatory experiences, perceived barriers, success factors, and recommendations for framework improvement. Thematic analysis identified recurring patterns across stakeholder perspectives.

4. Results: The European AI Medical Device Landscape

4.1 Device Distribution and Characteristics

Our analysis identified 127 AI-based medical devices with valid CE marking as of December 2025. The distribution across therapeutic areas reveals strong concentration in imaging-based diagnostics, reflecting both the maturity of AI in this domain and the availability of training data.

🏥 Therapeutic Area Distribution: 72% of CE-marked AI devices are in radiology/medical imaging applications

Radiology applications dominate the landscape with 91 devices (71.7%), including systems for lung nodule detection, mammography interpretation, stroke detection, and fracture identification. Cardiology applications account for 16 devices (12.6%), primarily ECG analysis and echocardiography interpretation. Ophthalmology (11 devices, 8.7%), dermatology (5 devices, 3.9%), and pathology (4 devices, 3.1%) complete the distribution.

| Therapeutic Area | Number of Devices | Percentage | Key Applications                           |
|------------------|-------------------|------------|--------------------------------------------|
| Radiology        | 91                | 71.7%      | CT/MRI analysis, X-ray triage, mammography |
| Cardiology       | 16                | 12.6%      | ECG interpretation, arrhythmia detection   |
| Ophthalmology    | 11                | 8.7%       | Diabetic retinopathy, glaucoma screening   |
| Dermatology      | 5                 | 3.9%       | Melanoma detection, lesion classification  |
| Pathology        | 4                 | 3.1%       | Digital pathology, cancer grading          |

4.2 Classification and Conformity Assessment

Under MDR Rule 11, software intended to provide information used for diagnostic or therapeutic decisions is classified based on the seriousness of the condition addressed. Our analysis reveals that 78% of AI diagnostic devices fall into Class IIa or higher, requiring notified body involvement. Class IIb devices account for 45 (35.4%) of the sample, primarily systems providing information for diagnosis of life-threatening or irreversible conditions such as cancer or stroke. Eight devices (6.3%) achieved Class III classification based on life-critical applications.
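The decision logic of Rule 11, as summarized above, can be sketched as a simple triage function. The impact categories below paraphrase the regulation's wording and are illustrative only; they are not a substitute for the legal text or a notified body's assessment.

```python
def rule_11_class(worst_case_impact: str) -> str:
    """Simplified MDR Rule 11 triage for diagnostic decision-support software.

    `worst_case_impact` paraphrases the regulation's categories for the
    consequences of a decision informed by the software's output:
      "death_or_irreversible" -> Class III
      "serious_or_surgical"   -> Class IIb
      anything else           -> Class IIa (the Rule 11 default)
    """
    if worst_case_impact == "death_or_irreversible":
        return "III"
    if worst_case_impact == "serious_or_surgical":
        return "IIb"
    return "IIa"

# A stroke-detection triage tool informs time-critical, potentially
# irreversible decisions, so it lands in the higher classes:
assert rule_11_class("death_or_irreversible") == "III"
assert rule_11_class("serious_or_surgical") == "IIb"
assert rule_11_class("reversible_minor") == "IIa"
```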

Certification timelines varied substantially based on device classification, evidence strength, and manufacturer preparedness. For Class IIa devices, median time from notified body submission to certification was 9.4 months (IQR: 6.2-14.8 months). Class IIb devices required median 14.7 months (IQR: 10.3-22.1 months), while Class III devices required median 21.3 months (IQR: 16.8-28.4 months). These timelines represent substantial increases from pre-MDR experience, reflecting enhanced scrutiny requirements.
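For reference, the median and IQR figures reported here follow the standard percentile computation; the sketch below shows the form of that calculation on a hypothetical list of per-device durations (the real underlying data are not reproduced here).

```python
import numpy as np

def summarize_months(durations_months) -> str:
    """Median and interquartile range, as reported for each risk class."""
    q1, med, q3 = np.percentile(np.asarray(durations_months), [25, 50, 75])
    return f"median {med:.1f} months (IQR: {q1:.1f}-{q3:.1f})"

# Hypothetical Class IIa durations in months; the published figures above
# come from the 45-device pathway subset, which is not reproduced here.
class_iia = [6.2, 7.5, 9.4, 12.1, 14.8]
print("Class IIa:", summarize_months(class_iia))
```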

[Figure: Certification pathway: manufacturer submission → notified body review → clinical assessment → QMS audit → CE marking]

4.3 Clinical Evidence Patterns

Analysis of clinical evidence strategies reveals significant evolution in regulatory expectations. Under MDD, equivalence to predicate devices was commonly accepted, with 67% of AI devices certified primarily through equivalence claims. Under MDR, this proportion has declined to 23%, with notified bodies increasingly requiring device-specific clinical investigations or substantial literature support.

Among recently certified devices, prospective clinical investigations formed the evidence core for 41% of submissions. Retrospective studies on real-world clinical data supported 31% of devices, while 23% relied primarily on equivalence with supplementary performance data. The remaining 5% utilized novel approaches combining simulation, synthetic data, and limited clinical validation—approaches that faced significant scrutiny but occasionally succeeded for well-characterized, narrow applications.

4.4 Post-Market Surveillance Findings

Post-market surveillance (PMS) under MDR requires proactive, systematic data collection on device performance in real-world settings. Our analysis of available PMS reports and safety communications identified 23 field safety corrective actions involving AI medical devices between 2021 and 2025. The most common issues were performance degradation when applied to populations underrepresented in training data (9 cases), software defects affecting specific acquisition protocols (7 cases), and interoperability failures with clinical IT systems (4 cases).
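Since degradation on underrepresented subpopulations was the leading cause of corrective actions, proactive PMS implies per-subgroup performance tracking. The sketch below illustrates one minimal form this could take, assuming post-market cases arrive with a subgroup label, the device output, and an adjudicated ground truth; all names and thresholds are hypothetical.

```python
from collections import defaultdict

def subgroup_sensitivity(cases, min_positives=50):
    """Per-subgroup sensitivity from post-market cases.

    `cases` yields (subgroup, predicted_positive, truly_positive) tuples,
    e.g. from adjudicated vigilance data; subgroups with fewer than
    `min_positives` positive cases are skipped to avoid noisy estimates.
    """
    true_pos = defaultdict(int)
    positives = defaultdict(int)
    for subgroup, predicted, actual in cases:
        if actual:
            positives[subgroup] += 1
            if predicted:
                true_pos[subgroup] += 1
    return {g: true_pos[g] / n for g, n in positives.items() if n >= min_positives}

def flag_degradation(sensitivity_by_group, claimed_sensitivity, tolerance=0.05):
    # Flag subgroups falling more than `tolerance` below the certified claim,
    # a candidate trigger for a vigilance review (threshold is illustrative).
    return {g: s for g, s in sensitivity_by_group.items()
            if s < claimed_sensitivity - tolerance}
```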

⚠️ Post-Market Safety Actions: 23 field safety corrective actions for AI medical devices (2021-2025)

5. Discussion: Lessons for Global Medical AI Regulation

5.1 Success Factors in CE Certification

Synthesis of our quantitative analysis and stakeholder interviews reveals consistent patterns distinguishing successful certification experiences from problematic ones. First, early regulatory engagement proved critical. Manufacturers who consulted with notified bodies during development—rather than after completion—achieved certification 37% faster on average. Pre-submission meetings clarified expectations regarding clinical evidence, classification, and documentation structure.

Second, investment in clinical evidence generation paid dividends throughout the regulatory process. Devices supported by prospective clinical investigations received fewer queries during assessment, required fewer documentation iterations, and demonstrated better post-market performance. The cost of clinical investigation, while substantial, was typically recovered through reduced regulatory delays and stronger market positioning.

Third, regulatory expertise mattered. Companies with experienced regulatory affairs teams—whether internal or engaged consultants—navigated the process more efficiently. Specialized knowledge of MDR requirements, notified body expectations, and MDCG guidance enabled effective documentation preparation and responsive interaction with assessors.

5.2 Persistent Challenges and Barriers

Despite the framework’s strengths, significant challenges remain. Notified body capacity constraints created bottlenecks, particularly following the MDR transition when re-designation requirements reduced the number of qualified bodies. For AI medical devices, limited assessor expertise in machine learning methodologies sometimes led to inappropriate assessment approaches or excessive conservatism.

The treatment of algorithm updates remains unresolved. Current guidance distinguishes significant from non-significant changes, with the former requiring new conformity assessment. However, the boundary is imprecisely defined for AI systems where performance improvements, training data updates, and architectural modifications exist on a continuum. Several manufacturers reported uncertainty about when updates trigger reassessment, potentially discouraging beneficial improvements.
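To make this continuum concrete, the sketch below shows a hypothetical triage helper in the spirit of the MDCG change guidance; the categories and the performance threshold are our own illustration, not regulatory criteria.

```python
def update_requires_reassessment(change: dict) -> bool:
    """Hypothetical triage of an AI software update (illustrative only).

    `change` describes the update, e.g.:
        {"new_intended_purpose": False, "architecture_changed": False,
         "retrained": True, "performance_delta": 0.01}

    Changes touching the intended purpose or core architecture are treated
    as significant; a retrain counts as significant only when validated
    performance moves outside a pre-specified envelope.
    """
    if change.get("new_intended_purpose") or change.get("architecture_changed"):
        return True
    if change.get("retrained") and abs(change.get("performance_delta", 0.0)) > 0.02:
        return True  # outside the pre-agreed performance envelope
    return False

# Routine retraining within the envelope would not trigger reassessment:
assert not update_requires_reassessment({"retrained": True, "performance_delta": 0.01})
```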

5.3 Implications for Ukraine

For Ukraine, the EU regulatory experience offers both a model and a pathway. As Ukraine pursues EU integration, alignment with MDR requirements positions domestic medical AI innovations for European market access. Several specific implications emerge from our analysis.

First, investment in regulatory infrastructure is essential. Ukraine requires competent authority capacity for market surveillance and potentially authorized representative structures facilitating EU access. Current healthcare system pressures notwithstanding, building this infrastructure enables longer-term technology sector development.

Second, clinical evidence generation within Ukraine could support both domestic adoption and CE certification. Ukrainian healthcare institutions, particularly those managing large patient volumes in radiology and cardiology, represent valuable partners for clinical investigations. Several European AI device manufacturers have already explored Ukrainian sites for clinical data collection.

Third, the Ukrainian technology sector’s existing strengths—including deep technical talent and competitive development costs—could be leveraged for medical AI development. However, successful commercialization requires early integration of regulatory strategy into development processes, based on the lessons identified in this research.

[Figure: Ukraine resources → medical AI development → MDR alignment → CE certification]

6. Conclusion and Future Directions

The European Union’s experience with CE-marked diagnostic AI systems demonstrates that rigorous regulatory oversight, while challenging, is compatible with innovation in medical artificial intelligence. The 127 devices now available across European healthcare systems represent validated, monitored technologies that have undergone systematic safety and performance assessment. While the regulatory pathway demands substantial investment in evidence generation and documentation, this investment ultimately serves patient safety and promotes quality differentiation in the market.

Key lessons from the EU experience include the importance of early regulatory engagement, the value of robust clinical evidence, and the need for continuous post-market surveillance. The intersection of MDR with the AI Act creates a comprehensive but complex framework that requires ongoing harmonization and practical guidance development.

For Ukraine and other nations developing medical AI regulatory frameworks, the EU model offers valuable templates while highlighting pitfalls to avoid. The emphasis on clinical evidence over equivalence claims, the integration of AI-specific considerations into device regulation, and the investment in notified body expertise all represent transferable lessons.

Future research should address several open questions: optimal approaches to regulating continuously learning AI systems; methods for assessing algorithmic bias and ensuring equitable performance; frameworks for evaluating foundation model-based medical AI; and strategies for international regulatory harmonization. As AI capabilities advance and healthcare applications multiply, regulatory frameworks must evolve correspondingly—informed by the evidence base we have begun to establish through analyses such as this one.

References

Altenstetter, C. (2003). EU and member state medical devices regulation. International Journal of Technology Assessment in Health Care, 19(1), 228-248. https://doi.org/10.1017/S0266462303000217

European Commission. (2017). Regulation (EU) 2017/745 on medical devices. Official Journal of the European Union, L 117, 1-175.

European Parliament. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, L series.

Floridi, L. (2024). The AI Act and its implications for medical devices. Nature Medicine, 30(4), 892-895. https://doi.org/10.1038/s41591-024-02894-2

Fraser, A. G., et al. (2020). The need for transparency of clinical evidence for medical devices in Europe. The Lancet, 395(10226), 726-735. https://doi.org/10.1016/S0140-6736(19)33379-X

Kramer, D. B., Xu, S., & Kesselheim, A. S. (2012). Regulation of medical devices in the United States and European Union. New England Journal of Medicine, 366(9), 848-855. https://doi.org/10.1056/NEJMhle1113918

Liu, X., et al. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging. The Lancet Digital Health, 1(6), e271-e297. https://doi.org/10.1016/S2589-7500(19)30123-2

MDCG. (2025). MDCG 2025-1: Guidance on classification of AI-based medical devices. Medical Device Coordination Group.

MDCG. (2024). MDCG 2024-6: Guidance on the intersection of MDR and the AI Act. Medical Device Coordination Group.

Muehlematter, U. J., Daniore, P., & Vokinger, K. N. (2021). Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015-20). The Lancet Digital Health, 3(3), e195-e203. https://doi.org/10.1016/S2589-7500(20)30292-2

Pesapane, F., Volonté, C., Codari, M., & Sardanelli, F. (2018). Artificial intelligence as a medical device in radiology. Insights into Imaging, 9(5), 745-753. https://doi.org/10.1007/s13244-018-0645-y

Stern, A. D., et al. (2022). Advancing digital health innovation in Europe. Health Affairs, 41(2), 255-264. https://doi.org/10.1377/hlthaff.2021.01146

Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56. https://doi.org/10.1038/s41591-018-0300-7

Van Norman, G. A. (2016). Drugs, devices, and the FDA: part 2. JACC: Basic to Translational Science, 1(4), 277-287. https://doi.org/10.1016/j.jacbts.2016.03.009

Wu, E., et al. (2021). How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27(4), 582-584. https://doi.org/10.1038/s41591-021-01312-x
