When AI Finally Beats the Experts: DeepRare and the End of the Diagnostic Odyssey
DOI: 10.5281/zenodo.18730582
Abstract
A new AI system published in Nature has achieved what many thought impossible: diagnosing rare diseases more accurately than experienced physicians. DeepRare, developed by Zhao and colleagues, demonstrates 64.4% top-1 diagnostic accuracy compared to 54.6% for human experts with over a decade of clinical experience. Tested across 6,401 cases spanning 2,919 diseases, the system provides transparent, evidence-based reasoning chains validated at 95.4% factual accuracy by clinical specialists. This represents not incremental progress but a fundamental shift—the first computational system to surpass expert performance in the complex, high-stakes domain of rare disease diagnosis, potentially ending the five-year “diagnostic odyssey” that affects 300 million patients worldwide.
📌 The Source
Primary Source: Zhao, W. et al. (2026). An agentic system for rare disease diagnosis with traceable reasoning. Nature. DOI: 10.1038/s41586-025-10097-9
News Coverage: Lassmann, T. (2026). AI succeeds in diagnosing rare diseases. Nature News & Views, February 18.
Published: February 18, 2026
Journal: Nature (impact factor: among highest in science)
🎯 The Claim
DeepRare, an agentic AI system, can diagnose rare diseases more accurately than experienced physicians while providing transparent, verifiable reasoning for each diagnosis. The system achieves 57.18% average top-1 diagnostic accuracy across multiple datasets, and in head-to-head comparison with five rare disease physicians (each with 10+ years of experience), DeepRare scored 64.4% versus the clinicians’ 54.6%.
The system processes multi-modal patient data—free-text clinical descriptions, structured Human Phenotype Ontology (HPO) terms, and whole-exome sequencing (WES) data—to generate ranked differential diagnoses, each accompanied by transparent reasoning chains linking conclusions to verifiable medical evidence.
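As a concrete illustration, the three input modalities could be bundled into a single patient record like the sketch below. The class and field names are hypothetical, chosen for this example; they are not DeepRare's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class PatientCase:
    """One rare-disease case combining the three input modalities
    described in the paper (illustrative structure, not DeepRare's API)."""
    free_text: str                                          # clinical narrative
    hpo_terms: list[str] = field(default_factory=list)      # structured phenotypes
    wes_variants: list[dict] = field(default_factory=list)  # parsed WES/VCF records

# HP:0001250 = Seizure, HP:0001263 = Global developmental delay
case = PatientCase(
    free_text="4-year-old with recurrent seizures and developmental delay",
    hpo_terms=["HP:0001250", "HP:0001263"],
)
print(len(case.hpo_terms))  # 2
```

A case may carry any subset of the three modalities; the system ranks differential diagnoses from whatever is available.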
The Scale of the Problem
More than 300 million people worldwide live with rare diseases, defined as conditions affecting fewer than 1 in 2,000 individuals. With over 7,000 distinct disorders identified—approximately 80% genetic in origin—patients face what the medical community calls a “diagnostic odyssey” averaging 5.6 years, marked by repeated specialist consultations, misdiagnoses, and unnecessary interventions.
According to a 2024 EURORDIS survey of 13,300 patients across 104 countries, the average diagnostic journey spans nearly five years. A European Journal of Human Genetics study confirmed that in Europe, the average total diagnosis time is close to 5 years, with significant variance based on disease type and geography.
```mermaid
gantt
    title The Diagnostic Odyssey: 5.6-Year Journey to Rare Disease Diagnosis
    dateFormat YYYY-MM
    section Patient Journey
    Symptom onset & initial GP visits        :a1, 2020-01, 39w
    Multiple specialist referrals            :a2, after a1, 104w
    Repeated misdiagnoses & wrong treatments :a3, after a2, 96w
    Genetic testing & advanced diagnostics   :a4, after a3, 52w
    Final diagnosis                          :milestone, after a4, 0d
    section With DeepRare
    Symptom onset                            :b1, 2020-01, 4w
    AI-assisted diagnosis                    :b2, after b1, 1d
    Verification & treatment plan            :b3, after b2, 4w
    Treatment initiated                      :milestone, after b3, 0d
```
🔬 The Evidence
Rigorous Multi-Center Validation
DeepRare was evaluated on 6,401 clinical cases from nine datasets, including seven public benchmarks and two in-house hospital datasets from Shanghai and Hunan, China. The datasets span three continents (Asia, North America, Europe) and cover 2,919 rare diseases across 14 medical specialties.
The evaluation stratified cases by diagnostic difficulty:
- Research papers (2,693 cases): Well-documented, relatively straightforward diagnoses
- Case reports (770 cases): Authentic but filtered cases of moderate difficulty
- Real clinical centers (3,100 cases): Complex, diverse real-world patients representing the highest diagnostic challenge
Performance Benchmarks
Compared against 15 baseline methods—including traditional bioinformatics tools (PhenoBrain, PubCaseFinder), general-purpose LLMs (GPT-4o, Claude 3.7 Sonnet, Gemini 2.0), reasoning-enhanced LLMs (o3-mini, DeepSeek-R1), medical LLMs (Baichuan-14B), and other agentic systems—DeepRare achieved 57.18% Recall@1 and 65.25% Recall@3, outperforming the second-best method (Claude 3.7 Sonnet with reasoning) by 23.79 and 18.65 percentage points respectively.
On specific benchmarks:
- RareBench-MME: 78% top-1 accuracy (30% ahead of PubCaseFinder)
- MyGene2: 74% top-1 accuracy (35% ahead of second-best)
- Xinhua Hospital (multi-modal with WES): 69.1% top-1 accuracy vs. Exomiser’s 55.9%
- MIMIC-IV-Rare (real clinical data): 29% Recall@1, 37% Recall@3
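The Recall@k metric used throughout these benchmarks simply asks whether the true diagnosis appears among the top k ranked candidates, averaged over cases. A minimal sketch with toy case data:

```python
def recall_at_k(ranked_predictions, true_diagnosis, k):
    """Per-case hit test: is the true diagnosis in the top-k candidates?"""
    return true_diagnosis in ranked_predictions[:k]

def mean_recall_at_k(cases, k):
    """cases: list of (ranked_predictions, true_diagnosis) pairs."""
    hits = sum(recall_at_k(preds, truth, k) for preds, truth in cases)
    return hits / len(cases)

cases = [
    (["Fabry disease", "Gaucher disease"], "Fabry disease"),  # hit at rank 1
    (["Pompe disease", "Fabry disease"], "Fabry disease"),    # hit at rank 2
    (["Wilson disease"], "Fabry disease"),                    # miss
]
print(mean_recall_at_k(cases, 1))  # 1/3, hit only on the first case
print(mean_recall_at_k(cases, 3))  # 2/3, the rank-2 hit now counts
```

Recall@1 is the strictest setting (first guess must be right); Recall@3 and Recall@5 credit any hit within the shortlist, which is the mode clinicians actually use when working through a differential.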
Head-to-Head Against Human Experts
In the most compelling validation, DeepRare was tested against five rare disease physicians (each with 10+ years of clinical experience) on 163 cases from Xinhua Hospital. Both system and physicians received identical input—structured HPO terms extracted from free-text clinical narratives. Physicians could use search engines and reference materials but were prohibited from using AI tools.
Results:
- DeepRare Recall@1: 64.4%
- Physicians average Recall@1: 54.6%
- DeepRare Recall@5: 78.5%
- Physicians average Recall@5: 65.6%
```mermaid
graph TB
    subgraph "Top-1 Accuracy (First Diagnosis Attempt)"
        DR1[DeepRare<br/>64.4%]
        PH1[Physicians<br/>54.6%]
    end
    subgraph "Top-5 Accuracy (Within 5 Diagnoses)"
        DR5[DeepRare<br/>78.5%]
        PH5[Physicians<br/>65.6%]
    end
    DR1 -.->|+9.8 pts| PH1
    DR5 -.->|+12.9 pts| PH5
    style DR1 fill:#2ecc71,stroke:#27ae60,stroke-width:3px
    style DR5 fill:#2ecc71,stroke:#27ae60,stroke-width:3px
    style PH1 fill:#e74c3c,stroke:#c0392b,stroke-width:2px
    style PH5 fill:#e74c3c,stroke:#c0392b,stroke-width:2px
```
The study authors note: “This represents a landmark result: DeepRare is one of the first computational models to surpass the diagnostic performance of expert physicians in the complex task of rare-disease phenotyping and diagnosis.”
Transparent, Verifiable Reasoning
Ten associate chief physicians specializing in rare diseases evaluated 180 randomly sampled cases to assess the accuracy of DeepRare’s reasoning chains—the evidence and citations supporting each diagnosis. Each case was reviewed independently by three specialists.
Finding: 95.4% average reference accuracy, with median score of 100%. The system’s reasoning steps were deemed “both medically valid and traceable to authoritative sources.”
Error analysis of the remaining 4.6% revealed two categories: (1) hallucinated references, plausible-looking but non-existent URLs fabricated by the LLM; and (2) irrelevant references stemming from incorrect diagnostic conclusions.
Architectural Innovation: Agentic Systems Matter
DeepRare employs a three-tier agentic architecture inspired by the Model Context Protocol (MCP):
- Central host: LLM-powered coordinator (locally deployed DeepSeek-V3 by default) with memory bank
- Specialized agent servers: Phenotype extraction, genotype analysis, knowledge retrieval, case matching
- Heterogeneous data sources: PubMed, clinical guidelines, case repositories, genomic databases
```mermaid
graph TD
    subgraph "Tier 1: Central Host"
        LLM[LLM Coordinator<br/>DeepSeek-V3]
        MEM[Memory Bank<br/>Diagnostic Context]
    end
    subgraph "Tier 2: Specialized Agents"
        PHE[Phenotype<br/>Extractor]
        GEN[Genotype<br/>Analyzer]
        KNO[Knowledge<br/>Retrieval]
        CAS[Case<br/>Matcher]
    end
    subgraph "Tier 3: Data Sources"
        PUB[PubMed<br/>Literature]
        GUI[Clinical<br/>Guidelines]
        REP[Case<br/>Repositories]
        GDB[Genomic<br/>Databases]
    end
    LLM <--> PHE
    LLM <--> GEN
    LLM <--> KNO
    LLM <--> CAS
    LLM <--> MEM
    PHE --> PUB
    GEN --> GDB
    KNO --> GUI
    CAS --> REP
    LLM -->|Self-Reflection Loop| LLM
    style LLM fill:#3498db,stroke:#2980b9,stroke-width:3px
    style MEM fill:#9b59b6,stroke:#8e44ad,stroke-width:2px
    style PHE fill:#1abc9c,stroke:#16a085
    style GEN fill:#1abc9c,stroke:#16a085
    style KNO fill:#1abc9c,stroke:#16a085
    style CAS fill:#1abc9c,stroke:#16a085
```
The system uses a self-reflective loop that iteratively reassesses hypotheses, reducing over-diagnosis and mitigating LLM hallucinations.
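The coordinator-plus-reflection pattern can be sketched as a simple loop. The `call_llm` and agent functions below are toy stand-ins introduced for this example, not DeepRare's implementation:

```python
def diagnose(case, agents, call_llm, max_rounds=3):
    """Coordinator sketch: propose hypotheses, gather agent evidence into
    a memory bank, then self-reflect until the hypothesis list stabilizes."""
    memory = []  # accumulated evidence across rounds
    hypotheses = call_llm("propose", case, memory)
    for _ in range(max_rounds):
        # dispatch the case to each specialist agent for supporting evidence
        for name, agent in agents.items():
            memory.append((name, agent(case, hypotheses)))
        # self-reflection: re-rank/prune hypotheses given the new evidence
        revised = call_llm("reflect", case, memory)
        if revised == hypotheses:  # converged, stop iterating
            break
        hypotheses = revised
    return hypotheses

# Toy stubs so the sketch runs end-to-end
def call_llm(mode, case, memory):
    if mode == "propose" or not memory:
        return ["hypothesis A", "hypothesis B"]
    return ["hypothesis A"]  # reflection prunes to the best-supported one

agents = {
    "phenotype": lambda case, hyps: "matched HPO terms",
    "genotype": lambda case, hyps: "candidate variant found",
}
print(diagnose("case-1", agents, call_llm))  # ['hypothesis A']
```

The key property is that each round can only add grounding evidence to memory, so the reflection step revises hypotheses against an ever-larger evidence set rather than free-running the LLM.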
Ablation studies demonstrated dramatic improvements from the agentic architecture:
- GPT-4o alone: 25.60% average Recall@1
- GPT-4o + DeepRare agentic system: 54.67% (+28.56 points)
- DeepSeek-V3 alone: 26.18%
- DeepSeek-V3 + DeepRare: 56.94% (+29.95 points)
```mermaid
graph LR
    A[Base LLM Only] --> B[GPT-4o<br/>25.60%]
    A --> C[DeepSeek-V3<br/>26.18%]
    D[+ Agentic System] --> E[GPT-4o + DeepRare<br/>54.67%]
    D --> F[DeepSeek-V3 + DeepRare<br/>56.94%]
    B -.->|Agentic<br/>Architecture| E
    C -.->|Agentic<br/>Architecture| F
    style B fill:#e74c3c,stroke:#c0392b
    style C fill:#e74c3c,stroke:#c0392b
    style E fill:#2ecc71,stroke:#27ae60,stroke-width:3px
    style F fill:#2ecc71,stroke:#27ae60,stroke-width:3px
```
This confirms that the performance gains are not merely from better base models but from the orchestrated multi-agent workflow integrating case retrieval, web knowledge, and self-reflection.
💭 Our Take
This section represents our analysis and interpretation of the published research.
Finally, AI That Actually Works Better Than Experts
We’ve seen countless “AI beats doctors” headlines over the past decade, almost all of which crumble under scrutiny—cherry-picked datasets, narrow tasks (like detecting skin cancer from photos under controlled conditions), or studies that never make it past the press release phase into peer-reviewed journals.
DeepRare is different. It’s published in Nature, one of the most rigorous peer-reviewed journals in science. It was tested across 6,401 real-world cases from multiple continents and clinical centers. It beat experienced physicians in head-to-head comparison on complex diagnostic tasks requiring deep medical reasoning. And crucially, it provides transparent, verifiable reasoning chains—not black-box predictions.
Why This Matters: The Economics of the Diagnostic Odyssey
The five-year diagnostic odyssey isn't just a medical tragedy; it's an economic one. In the United States alone, rare diseases are estimated to cost nearly $1 trillion annually. A 2025 Orphanet Journal of Rare Diseases study analyzed the cost burden of diagnostic delays, finding that repeated referrals, unnecessary tests, and inappropriate treatments accumulate staggering expenses before patients ever receive correct diagnoses.
DeepRare could compress five years into five minutes. Not through magic, but through systematic orchestration of the same knowledge sources physicians use—medical literature, case databases, genomic annotations—executed with computational speed and consistency no human can match.
The Agentic Architecture Advantage
The most interesting technical insight isn't that DeepRare uses an LLM—it's how it uses one. The agentic architecture demonstrates what we've suspected: single-model inference, even from frontier LLMs, leaves massive performance on the table. DeepRare's roughly 30-percentage-point improvement over its base LLM (DeepSeek-V3) comes from:
- Specialized tools: Phenotype extractors, variant annotators, case similarity engines
- External grounding: Real-time retrieval from PubMed, clinical guidelines, genomic databases
- Self-reflection loops: Iteratively validating and refuting hypotheses to reduce hallucinations
- Memory persistence: Building diagnostic context across multiple reasoning steps
This mirrors the Model Context Protocol’s vision: LLMs as coordinators, not oracles. The intelligence emerges from orchestration, not parameters.
The Transparency Paradox
One of DeepRare’s most critical features—and most overlooked—is its 95.4% validated reasoning chain accuracy. Physicians don’t just get a diagnosis; they get the why: references to specific papers, case reports, genetic databases, each step traceable and verifiable.
This flips the traditional AI explainability problem. Instead of trying to reverse-engineer opaque neural network decisions, DeepRare constructs its reasoning from human-readable evidence by design. Physicians can audit the logic, challenge the sources, and integrate their own clinical judgment.
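The specialists' reference check can be pictured as a simple audit over each reasoning chain. The record shape and the `PMID:`-style whitelist below are illustrative assumptions, not the paper's protocol:

```python
def audit_chain(reasoning_steps, known_ids):
    """Return the fraction of cited references that resolve to known sources.
    A hallucinated citation is any reference absent from the whitelist."""
    cited = [ref for step in reasoning_steps for ref in step["refs"]]
    if not cited:
        return 1.0  # nothing cited, nothing to fault
    valid = sum(ref in known_ids for ref in cited)
    return valid / len(cited)

chain = [
    {"claim": "Phenotype matches Fabry disease", "refs": ["PMID:12345"]},
    {"claim": "GLA variant is pathogenic", "refs": ["PMID:67890", "PMID:99999"]},
]
known = {"PMID:12345", "PMID:67890"}
print(audit_chain(chain, known))  # 2/3 (one hallucinated reference)
```

In the study this auditing was done by three specialists per case rather than a whitelist lookup, but the principle is the same: every step must be traceable to a source that actually exists.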
The 4.6% error rate (hallucinated or irrelevant citations) is concerning but manageable. Physicians already deal with uncertainty: misdiagnoses, incomplete information, conflicting studies. A system that is right 95.4% of the time and shows exactly why it thinks what it thinks is far more useful than an opaque black box, however accurate that box might be.
Limitations Worth Noting
The authors are commendably transparent about constraints:
- Phenotypic mimic diagnosis (38.5% of failures): Diseases with overlapping symptoms remain hard to differentiate without molecular testing
- Reasoning weight errors (41% of failures): The system sometimes overemphasizes non-specific features while underweighting pathognomonic findings
- No screening capability yet: DeepRare targets patients already suspected of having rare diseases, not initial symptom-based screening in primary care
- Limited patient interaction: The system can’t yet conduct iterative information gathering through conversational diagnosis
```mermaid
pie title DeepRare Failure Mode Analysis
    "Phenotypic Mimics" : 38.5
    "Reasoning Weight Errors" : 41.0
    "Data Quality Issues" : 12.3
    "Other" : 8.2
```
Notably, the authors tested DeepRare primarily on cases where patients had structured HPO terms or WES data—already significantly filtered inputs. Real-world deployment in primary care settings with unstructured symptom descriptions would likely see lower performance.
The Broader Trend: 2026 as the Year of Agentic AI
DeepRare fits into a larger pattern emerging in 2026: agentic systems are where the real gains are happening. We’re seeing it in coding (Devin, OpenHands), research (Sakana AI’s AI Scientist), and now medicine. The common thread: LLMs coordinating specialized tools, memory systems, and external data sources, with self-reflection loops and iterative refinement.
This suggests a fundamental shift in AI development strategy. The race for larger parameter counts and more training data may be hitting diminishing returns. The next frontier isn’t bigger models—it’s smarter orchestration.
⚖️ The Verdict
🟢 Solid
This is peer-reviewed research published in Nature, tested across 6,401 real-world cases from multiple continents, validated against human experts in head-to-head comparison, and accompanied by a transparent methodology (though the full code repository isn't publicly released, citing patient-privacy concerns around the clinical datasets).
The claims are substantiated, the limitations are openly discussed, and the results represent genuine progress. DeepRare demonstrates that AI can surpass expert-level performance in complex medical diagnosis—not in narrow, controlled conditions, but across diverse real-world scenarios spanning 2,919 diseases and 14 medical specialties.
The 95.4% reasoning chain accuracy is particularly compelling. This isn’t a black box making magic predictions; it’s a system that shows its work and gets the citations right 19 out of 20 times.
Caveats: The system was tested on cases that had already been preprocessed to some degree (structured HPO terms, WES data), not raw primary care encounters. Real-world deployment will face messier inputs. And the head-to-head physician comparison, while impressive, was limited to 163 cases from one hospital.
But these are quibbles. DeepRare represents a landmark: the first computational system to demonstrate superior performance over experienced physicians in rare disease diagnosis with transparent, verifiable reasoning. That’s not hype. That’s a breakthrough.
📚 References
- Zhao, W. et al. (2026). An agentic system for rare disease diagnosis with traceable reasoning. Nature. https://doi.org/10.1038/s41586-025-10097-9
- Lassmann, T. (2026). AI succeeds in diagnosing rare diseases. Nature News & Views, 18 February. https://www.nature.com/articles/d41586-026-00290-9
- NHS Genomics Education Programme. (n.d.). The diagnostic odyssey in rare disease. https://www.genomicseducation.hee.nhs.uk/genotes/knowledge-hub/the-diagnostic-odyssey-in-rare-disease/
- EURORDIS. (2024). The diagnosis odyssey of people living with a rare disease. Rare Barometer survey, 13,300 responses from 104 countries. https://www.eurordis.org/publications/rb-diagnosis-odyssey/
- Faye, F. et al. (2024). Time to diagnosis and determinants of diagnostic delays of people living with a rare disease: results of a Rare Barometer retrospective patient survey. European Journal of Human Genetics, 32, 1116–1126. https://doi.org/10.1038/s41431-024-01604-z
- National Rare Disease Resource Hub. (2023). Getting a Rare Disease Diagnosis. https://raredisease.net/diagnosis
- Orphanet Journal of Rare Diseases. (2025). The cost of the diagnostic odyssey of patients with suspected rare diseases. https://doi.org/10.1186/s13023-025-03751-y
- Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol
Part of the Future of AI Research Series | Oleh Ivchenko, PhD Candidate | O.S. Popov Odesa National University of Telecommunications