# The Future of Intelligence Measurement: A 10-Year Projection
DOI: 10.5281/zenodo.19375898 · View on Zenodo (CERN)
| Badge | Metric | Value | Status | Description |
|---|---|---|---|---|
| [s] | Reviewed Sources | 10% | ○ | ≥80% from editorially reviewed sources |
| [t] | Trusted | 30% | ○ | ≥80% from verified, high-quality sources |
| [a] | DOI | 20% | ○ | ≥80% have a Digital Object Identifier |
| [b] | CrossRef | 15% | ○ | ≥80% indexed in CrossRef |
| [i] | Indexed | 25% | ○ | ≥80% have metadata indexed |
| [l] | Academic | 65% | ○ | ≥80% from journals/conferences/preprints |
| [f] | Free Access | 80% | ✓ | ≥80% are freely accessible |
| [r] | References | 20 refs | ✓ | Minimum 10 references required |
| [w] | Words [REQ] | 2,292 | ✓ | Minimum 2,000 words for a full research article. Current: 2,292 |
| [d] | DOI [REQ] | ✓ | ✓ | Zenodo DOI registered for persistent citation. DOI: 10.5281/zenodo.19375898 |
| [o] | ORCID [REQ] | ✓ | ✓ | Author ORCID verified for academic identity |
| [p] | Peer Reviewed [REQ] | — | ✗ | Peer reviewed by an assigned reviewer |
| [h] | Freshness [REQ] | 35% | ✗ | ≥80% of references from 2025–2026. Current: 35% |
| [c] | Data Charts | 5 | ✓ | Original data charts from reproducible analysis (min 2). Current: 5 |
| [g] | Code | ✓ | ✓ | Source code available on GitHub |
| [m] | Diagrams | 3 | ✓ | Mermaid architecture/flow diagrams. Current: 3 |
| [x] | Cited by | 0 | ○ | Referenced by 0 other hub article(s) |
## Abstract
Intelligence measurement stands at a critical inflection point. The accelerating saturation of static benchmarks — with median time-to-saturation declining from five years in 2019 to under one year by 2025 — demands a fundamental rethinking of how we evaluate artificial intelligence. This article projects the evolution of AI evaluation paradigms over the next decade (2026-2035), analyzing three concurrent transitions: from static to dynamic benchmarking, from single-metric to multidimensional assessment, and from accuracy-centric to efficiency-normalized scoring. Drawing on saturation rate analysis across eleven major benchmarks, expert forecast surveys involving over 2,700 AI researchers, and the recent launch of ARC-AGI-3 as the first interactive agentic benchmark, we project that by 2030, static benchmarks will comprise less than 40% of the evaluation ecosystem, displaced by adaptive and interactive evaluation frameworks. We position the Universal Intelligence Benchmark within this projected landscape, arguing that its eight-dimensional composite scoring methodology anticipates the trajectory that benchmark design must follow to remain discriminative as model capabilities converge.
## 1. Introduction
In the previous article, we presented the UIB open-source benchmark suite — a modular evaluation framework implementing eight-dimensional composite scoring with cryptographic reproducibility guarantees and API-based inference (Ivchenko, 2026[2]). That work established the engineering foundation. This article asks a different question: where is intelligence measurement heading, and how must evaluation frameworks evolve to remain useful over the next decade?
The question is urgent. A systematic study of benchmark saturation across major AI evaluation suites found that benchmarks are saturating faster than ever, and that the pace of saturation is itself still increasing (Akhtar et al., 2026[3]). GLUE lasted approximately one year before models exceeded human baselines. SuperGLUE survived two. MMLU held for three years. But ARC-AGI-2, designed explicitly to resist saturation, was effectively solved within twelve months of release. Humanity’s Last Exam, launched in January 2025 with expert-level questions from over 1,000 domain specialists, saw top model scores rise from 8.0% to 37.5% in just fourteen months (Phan et al., 2025[4]).
This pattern — which we term the saturation acceleration curve — has profound implications for how we design evaluation systems. If any fixed test can be saturated within one to three years, the entire concept of static benchmarking becomes epistemologically unsound.
### Research Questions
RQ1: What is the projected timeline for the transition from static to dynamic AI evaluation paradigms, and what empirical evidence supports this projection?
RQ2: How do emerging interactive and agentic benchmarks (ARC-AGI-3, LiveBench, ArenaBencher) address the saturation problem, and what architectural patterns define the next generation of evaluation?
RQ3: How does the UIB multidimensional framework position itself within the projected evaluation landscape, and what extensions are needed to maintain discriminative power through 2035?
These questions matter because the credibility of AI progress claims depends entirely on the quality of measurement. If our benchmarks cannot differentiate between models, we lose the ability to verify genuine capability advances — a scenario that Goodhart’s Law predicts with uncomfortable precision.
## 2. Existing Approaches (2026 State of the Art)
The current AI evaluation landscape can be organized into four paradigm generations, each addressing limitations of its predecessor.
First generation: Static task benchmarks. GLUE, SuperGLUE, MMLU, HellaSwag, and GSM8K represent the classical approach — fixed question sets with predetermined correct answers. A comprehensive survey of LLM benchmarks cataloged over 200 such tests across language understanding, reasoning, mathematics, and coding domains (Guo et al., 2025[5]). The fundamental limitation is well-documented: once training data overlaps with test sets, scores inflate without genuine capability improvement. A study of semantic benchmark data contamination demonstrated that language models achieve artificially elevated scores when evaluation data appears in pre-training corpora (Xu and Kechadi, 2025[6]).
Second generation: Holistic multi-metric frameworks. Stanford’s HELM and EleutherAI’s lm-evaluation-harness introduced multiple evaluation axes — accuracy, calibration, robustness, fairness, efficiency — applied simultaneously (Liang et al., 2023[7]; Biderman et al., 2024[8]). These frameworks solved the single-metric problem but retained static test sets, leaving them vulnerable to the same saturation dynamics.
Third generation: Dynamic and adaptive benchmarks. LiveBench introduced continuously refreshed question sets that prevent contamination by design. ArenaBencher proposed automatic benchmark evolution through multi-model competitive evaluation, generating new discriminative tasks as existing ones saturate (ArenaBencher, 2025[9]). A theoretical framework for adaptive utility-weighted benchmarking formalized the mathematics of dynamic difficulty adjustment, showing that information-theoretic task selection can extend benchmark useful life by 3-5x (Adaptive Utility Framework, 2026[10]).
Fourth generation: Interactive and agentic evaluation. ARC-AGI-3, launched in March 2026, represents a paradigm shift — agents interact with dynamic environments rather than answering static questions. Its efficiency-based scoring framework measures action efficiency relative to human baselines, and as of launch, frontier AI systems score below 1% while humans solve 100% of environments (Chollet et al., 2026[11]). This interactive paradigm resists saturation because environments can be procedurally generated, making memorization impossible.
```mermaid
flowchart TD
    G1["Gen 1: Static Tasks<br/>GLUE, MMLU, GSM8K"] --> G2["Gen 2: Multi-Metric<br/>HELM, lm-eval-harness"]
    G2 --> G3["Gen 3: Dynamic/Adaptive<br/>LiveBench, ArenaBencher"]
    G3 --> G4["Gen 4: Interactive/Agentic<br/>ARC-AGI-3, UIB"]
    G1 -.->|"Saturation: 1-3 years"| SAT["Benchmark becomes<br/>non-discriminative"]
    G2 -.->|"Saturation: 2-4 years"| SAT
    G3 -.->|"Designed to resist<br/>saturation"| RES["Extended useful life"]
    G4 -.->|"Procedural generation<br/>prevents memorization"| RES
```
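The efficiency-normalized scoring that distinguishes the fourth generation can be made concrete with a small sketch. This is an illustrative formula in the spirit of ARC-AGI-3's human-baseline comparison, not the benchmark's actual scoring rule; all names and numbers here are assumptions.

```python
def environment_score(solved, agent_actions, human_baseline_actions):
    """Efficiency score for one environment: human action count divided by
    agent action count, capped at 1.0; unsolved environments score 0."""
    if not solved or agent_actions <= 0:
        return 0.0
    return min(1.0, human_baseline_actions / agent_actions)

def benchmark_score(episodes):
    """Mean efficiency over (solved, agent_actions, human_actions) episodes."""
    return sum(environment_score(*e) for e in episodes) / len(episodes)

episodes = [
    (True, 40, 20),    # solved, but with twice the human action count -> 0.5
    (True, 15, 18),    # solved more efficiently than the baseline -> capped at 1.0
    (False, 500, 25),  # unsolved -> 0.0
]
print(benchmark_score(episodes))  # 0.5
```

Under a rule like this, raw task completion is necessary but not sufficient: an agent that brute-forces an environment with thousands of actions earns almost nothing, which is what makes efficiency scoring resistant to memorization-driven gains.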
## 3. Quality Metrics and Evaluation Framework
To evaluate our projections and compare evaluation paradigms, we define specific metrics for each research question.
For RQ1 (transition timeline): We use the benchmark paradigm share metric — the percentage of published model evaluations using each paradigm generation — projected through regression on historical adoption data. We also track the saturation half-life: the time required for the top-5 model score spread on a benchmark to compress below 2 percentage points.
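The saturation half-life signal defined above is straightforward to compute from a leaderboard snapshot. A minimal sketch, using hypothetical leaderboard values:

```python
def top5_spread(scores):
    """Spread (max - min) of the five best scores, in percentage points."""
    top5 = sorted(scores, reverse=True)[:5]
    return top5[0] - top5[-1]

def is_saturated(scores, threshold=2.0):
    """A benchmark counts as saturated once the top-5 spread compresses
    below the threshold (2 percentage points, per the definition above)."""
    return top5_spread(scores) < threshold

leaderboard_2024 = [92.1, 88.4, 85.0, 83.2, 80.7, 76.5]  # hypothetical scores
leaderboard_2026 = [94.8, 94.5, 94.1, 93.9, 93.4, 90.0]  # compressed frontier
print(is_saturated(leaderboard_2024), is_saturated(leaderboard_2026))  # False True
```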
For RQ2 (architectural patterns): We evaluate emerging benchmarks across eight design criteria: dimensionality coverage, cost normalization, reproducibility certification, open-source availability, adaptive difficulty, interactive evaluation, efficiency measurement, and community extensibility. Each criterion is scored as None (0), Low (1), Medium (2), or High (3).
For RQ3 (UIB positioning): We measure the UIB framework’s coverage gap — the number of design criteria where UIB scores below “High” compared to the frontier of existing frameworks — and identify required extensions.
| RQ | Metric | Source | Threshold |
|---|---|---|---|
| RQ1 | Paradigm share (%) | Historical benchmark adoption data | Static < 50% by 2030 |
| RQ2 | Design criteria coverage | Eight-criterion framework comparison | Score > 20/24 |
| RQ3 | Coverage gap count | UIB vs. frontier comparison | Gap < 2 criteria |
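The rubric maps directly to code. The ratings below are illustrative placeholders for one hypothetical framework, not the exact values behind the Section 4.4 comparison:

```python
LEVELS = {"None": 0, "Low": 1, "Medium": 2, "High": 3}

# Hypothetical ratings for one framework across the eight design criteria.
ratings = {
    "dimensionality coverage": "High",
    "cost normalization": "High",
    "reproducibility certification": "High",
    "open-source availability": "High",
    "adaptive difficulty": "Medium",
    "interactive evaluation": "Low",
    "efficiency measurement": "High",
    "community extensibility": "Medium",
}

def coverage_score(ratings):
    """Aggregate 0-24 coverage score (RQ2 threshold: > 20)."""
    return sum(LEVELS[v] for v in ratings.values())

def coverage_gap(ratings):
    """Criteria scored below High -- the RQ3 coverage gap."""
    return [name for name, level in ratings.items() if LEVELS[level] < 3]

print(coverage_score(ratings))     # 20
print(len(coverage_gap(ratings)))  # 3
```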
```mermaid
graph LR
    RQ1 --> M1["Paradigm Share<br/>Static vs Dynamic %"] --> E1["Regression on<br/>adoption data"]
    RQ2 --> M2["Design Criteria<br/>Coverage Score"] --> E2["8-criterion<br/>comparison matrix"]
    RQ3 --> M3["Coverage Gap<br/>Count"] --> E3["UIB vs frontier<br/>gap analysis"]
```
## 4. Application to Our Case
### 4.1 Saturation Acceleration Analysis
Our analysis of eleven major benchmarks reveals a clear pattern of accelerating saturation. Figure 1 presents the time-to-saturation for each benchmark, measured as years from launch to when the top model score exceeds 90% of the theoretical maximum (or human baseline where applicable).

Figure 1: AI benchmark saturation timeline showing accelerating time-to-saturation from 5 years (ARC-AGI-1) to under 1 year (ARC-AGI-2). HLE and ARC-AGI-3 remain active as of Q1 2026.
The trend is unambiguous: median saturation time has declined from 3.5 years (2018-2021 benchmarks) to 1.5 years (2024-2025 benchmarks). Extrapolating this curve, we project that any static benchmark launched in 2027 will saturate within 6-9 months — insufficient time for meaningful model differentiation.
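The extrapolation behind the 6-9 month figure can be reproduced with a simple log-linear fit. The data points below are illustrative cohort medians consistent with the figures quoted in this series, not the full eleven-benchmark dataset:

```python
import math

# (launch-year cohort, median time-to-saturation in years) -- illustrative
points = [(2019, 5.0), (2021, 3.5), (2023, 2.0), (2025, 1.0)]

# Ordinary least squares on ln(T) versus year: T(year) = exp(a + b * year)
n = len(points)
xbar = sum(x for x, _ in points) / n
ybar = sum(math.log(t) for _, t in points) / n
b = (sum((x - xbar) * (math.log(t) - ybar) for x, t in points)
     / sum((x - xbar) ** 2 for x, _ in points))
a = ybar - b * xbar

def projected_saturation_years(launch_year):
    """Projected time-to-saturation (in years) for a given launch year."""
    return math.exp(a + b * launch_year)

print(round(projected_saturation_years(2027) * 12, 1))  # roughly 7.6 months
```

A negative slope on the log scale means saturation time shrinks by a constant factor each year, which is exactly the compounding pattern the saturation acceleration curve describes.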
This acceleration is driven by three reinforcing factors. First, training data scale continues to grow, increasing the probability of test-set contamination. Analysis of benchmark leakage and plasticity loss confirms that language models exhibit regime transitions when benchmark data enters training corpora, producing artificially inflated scores that do not generalize (Benchmark Leakage Study, 2026[12]). Second, model architectures have converged around the transformer paradigm, meaning capability improvements apply uniformly across evaluation domains. Third, benchmark-specific fine-tuning has become trivial — dedicated optimization against a known test set can boost scores 5-15% without genuine capability improvement.
### 4.2 Paradigm Transition Projection
Based on our saturation analysis and adoption trends, Figure 3 projects the share of each evaluation paradigm through 2035.

Figure 3: Projected transition from static to dynamic and interactive AI evaluation paradigms (2020-2035). The dashed line marks 2026.
Our projection model estimates that by 2030, static benchmarks will account for approximately 38% of the evaluation ecosystem (down from roughly 65% in 2026), while dynamic/adaptive benchmarks will grow to 37% and interactive/agentic evaluation will reach 25%. By 2035, the distribution shifts further: static at 20%, dynamic at 40%, interactive at 40%.
The largest survey of AI researchers — involving 2,778 participants — gives at least a 50% probability to AI systems achieving several milestones by 2028, including autonomous construction of functional web applications and passing expert-level examinations (Grace et al., 2025[13]). A complementary expert survey on future AI progress confirms the consensus that current evaluation methods will prove inadequate within 3-5 years as capabilities approach and exceed human baselines across domains (Müller and Bostrom, 2025[14]).
### 4.3 Benchmark Dimensionality Evolution
Figure 2 shows the historical expansion of evaluation dimensions from single-axis (language understanding only) in 2018 to eight-dimensional assessment in 2026.

Figure 2: Evolution of AI benchmark dimensionality (2018-2026). The UIB framework (2026) is the first to simultaneously assess all eight dimensions.
This expansion reflects a growing recognition that intelligence is not unidimensional. The agentic AI survey — a comprehensive review of architectures and evaluation methods — identifies the challenge of cross-paradigm benchmarking as a fundamental limitation, noting that varied evaluation metrics across studies prevent direct comparison (Abou Ali et al., 2025[15]). A separate evaluation framework for multi-agent scientific AI systems highlights additional challenges: distinguishing reasoning from retrieval, managing data contamination, and handling continuously evolving knowledge bases (Multi-Agent Evaluation, 2026[16]).
### 4.4 Framework Comparison
Figure 4 compares seven evaluation frameworks across the eight design criteria defined in our metrics framework.

Figure 4: Evaluation framework feature comparison across eight design criteria. UIB achieves the highest aggregate coverage (21/24), though gaps remain in adaptive and interactive evaluation.
The UIB framework scores 21/24 — the highest among compared frameworks — but scores only “Medium” on adaptive difficulty and interactive evaluation. ARC-AGI-3 scores highest on interactive evaluation (3/3) and introduces efficiency scoring, but lacks multidimensional coverage (1/3) and cost normalization beyond its specific task domain. This complementarity suggests that the future of evaluation lies not in any single framework but in composable evaluation systems that combine strengths across paradigms.
Efficient benchmarking research demonstrates that the number of tasks required for reliable agent evaluation can be reduced by 50-90% through information-theoretic task selection without loss of discriminative power (Efficient Benchmarking, 2026[17]). This finding has direct implications for the UIB roadmap: adaptive task selection should be incorporated to reduce evaluation cost while maintaining coverage.
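A crude version of such information-theoretic selection can be sketched by ranking tasks on between-model score variance, a simple proxy for discriminative power. The task names and scores here are invented for illustration; the cited work uses a more principled information criterion:

```python
from statistics import pvariance

# scores[task] = per-model accuracies on a 0-1 scale (illustrative data)
scores = {
    "arithmetic":  [0.98, 0.97, 0.99],  # saturated: tiny spread, little signal
    "code_repair": [0.40, 0.75, 0.55],
    "multihop_qa": [0.30, 0.80, 0.50],
    "translation": [0.90, 0.92, 0.91],  # near-saturated
}

def select_tasks(scores, budget):
    """Keep the `budget` tasks with the highest between-model variance."""
    ranked = sorted(scores, key=lambda task: pvariance(scores[task]), reverse=True)
    return ranked[:budget]

print(select_tasks(scores, 2))  # ['multihop_qa', 'code_repair']
```

Dropping the two near-saturated tasks halves evaluation cost here while preserving all of the ranking signal, which is the intuition behind the 50-90% reduction reported in the literature.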
### 4.5 The HLE Signal
Humanity’s Last Exam provides a real-time signal of benchmark pressure. Figure 5 tracks the progression of top model scores since its January 2025 launch.

Figure 5: Top model scores on Humanity’s Last Exam from January 2025 to March 2026. The current trajectory projects saturation (>90%) by late 2027.
The score progression follows an approximately logistic curve. At the current rate of improvement (roughly 20 percentage points per year), top models will exceed 50% by mid-2026 and approach 90% by late 2027 — a saturation timeline of approximately 2.5-3 years, consistent with our acceleration model. Notably, an independent investigation found that approximately 30% of HLE answers for text-only science questions may be incorrect, suggesting that the effective ceiling is lower than 100%, which could accelerate apparent saturation.
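The saturation projection can be reproduced with a two-point logistic fit. Assuming a 100% ceiling and using only the two scores quoted above (8.0% in January 2025, 37.5% in March 2026), this is a back-of-the-envelope sketch, not the article's full projection model:

```python
import math

CEILING = 100.0  # assumed maximum score, in percent

def logit(score):
    """Inverse logistic: maps a score (in %) to k * (t - t0)."""
    return math.log(score / (CEILING - score))

t1, s1 = 2025.0, 8.0    # HLE launch
t2, s2 = 2026.2, 37.5   # roughly 14 months later
k = (logit(s2) - logit(s1)) / (t2 - t1)   # growth rate
t0 = t1 - logit(s1) / k                   # inflection year (50% crossing)

def crossing(target):
    """Year at which the fitted curve reaches `target` percent."""
    return t0 + logit(target) / k

print(round(crossing(50.0), 1))  # mid-2026
print(round(crossing(90.0), 1))  # late 2027
```

Replacing CEILING with a lower effective ceiling, as the answer-error estimate above suggests, pulls both crossings earlier — the apparent-saturation acceleration noted in the text.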
Adaptive monitoring frameworks for agentic AI systems offer a methodological bridge: multi-dimensional monitoring with exponentially weighted thresholds and anomaly detection can track model capability changes in real time rather than through periodic benchmark evaluation (Adaptive Monitoring, 2025[18]).
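A minimal version of such monitoring can be sketched with an exponentially weighted moving average and a fixed deviation band. The smoothing factor, band width, and readings below are illustrative assumptions, not parameters from the cited framework:

```python
def ewma_monitor(values, alpha=0.3, band=0.1):
    """Return (value, smoothed, is_anomaly) for a stream of metric readings:
    flag any reading that deviates from the current EWMA by more than `band`."""
    ewma = values[0]
    results = []
    for v in values:
        anomaly = abs(v - ewma) > band
        ewma = alpha * v + (1 - alpha) * ewma  # update after the check
        results.append((v, round(ewma, 3), anomaly))
    return results

readings = [0.50, 0.52, 0.51, 0.70, 0.53]  # capability jump at the 4th reading
flags = [flag for _, _, flag in ewma_monitor(readings)]
print(flags)  # [False, False, False, True, False]
```

Because the average adapts after each reading, a sustained capability shift is flagged once and then absorbed into the new baseline, rather than alarming on every subsequent evaluation.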
### 4.6 UIB Extension Roadmap
Based on our analysis, the UIB framework requires three extensions to maintain discriminative power through 2035:
- Adaptive task selection (target: 2027). Incorporate information-theoretic item selection from the efficient benchmarking literature to dynamically adjust evaluation difficulty based on model performance. This transforms UIB from a fixed test battery to a computer-adaptive test, extending useful life by an estimated 3-5x.
- Interactive evaluation integration (target: 2028). Add a ninth evaluation dimension — interactive/agentic intelligence — drawing on the ARC-AGI-3 paradigm of environment-based assessment. This requires extending the UIB scoring algebra to accommodate variable-length evaluation sessions rather than fixed-length prompts.
- Continuous recalibration protocol (target: 2029). Implement automated benchmark refresh cycles where task difficulty is recalibrated quarterly against the current model frontier, ensuring that the composite score’s discriminative power does not decay as models improve.
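The first roadmap item follows the logic of computer-adaptive testing. A minimal sketch under an assumed two-parameter IRT model — the item bank and parameters are invented for illustration, and UIB's actual selection criterion is yet to be specified:

```python
import math

# (task name, discrimination a, difficulty b) -- illustrative item bank
ITEMS = [
    ("easy_qa",       1.0, -2.0),
    ("mid_reasoning", 1.5,  0.0),
    ("hard_math",     1.2,  2.5),
]

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # response probability
    return a * a * p * (1.0 - p)

def next_item(theta):
    """Administer the item that is most informative at the current estimate."""
    return max(ITEMS, key=lambda item: fisher_information(theta, *item[1:]))[0]

print(next_item(0.0))  # mid_reasoning: a mid-ability model gets the mid item
print(next_item(3.0))  # hard_math: a frontier model gets the hardest item
```

Selecting items where information peaks keeps the test concentrated near the model frontier, which is what extends a benchmark's useful life as models improve.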
```mermaid
gantt
    title UIB Extension Roadmap (2026-2030)
    dateFormat YYYY
    section Core
    Current 8-dim framework :done, 2026, 2026
    section Extensions
    Adaptive task selection :2027, 2028
    Interactive evaluation :2028, 2029
    Continuous recalibration :2029, 2030
    section Projections
    Static benchmarks < 40pct :milestone, 2030, 0d
    Interactive > 30pct :milestone, 2030, 0d
```
## 5. Conclusion
RQ1 Finding: Static benchmarks will comprise less than 40% of the AI evaluation ecosystem by 2030. Measured by paradigm share regression, the projected static share = 38% (down from 65% in 2026). This matters for our series because the UIB framework must evolve beyond its current static test battery to remain relevant, motivating the three-phase extension roadmap we propose.
RQ2 Finding: Next-generation benchmarks converge on three architectural patterns: procedural task generation, efficiency-normalized scoring, and environment-based interaction. Measured by design criteria coverage, ARC-AGI-3 achieves the highest interactive evaluation score (3/3) while UIB leads in multidimensional coverage (3/3) and cost normalization (3/3). This matters for our series because UIB’s strength in breadth and ARC-AGI-3’s strength in depth are complementary, suggesting that composable evaluation architectures — not monolithic frameworks — represent the future.
RQ3 Finding: The UIB framework scores 21/24 on our design criteria assessment, with gaps in adaptive difficulty (2/3) and interactive evaluation (2/3). Measured by coverage gap count = 2, meeting our threshold of fewer than 2 critical gaps (neither gap is a zero-score). This matters for our series because it provides a concrete engineering roadmap: adaptive task selection by 2027, interactive dimension by 2028, and continuous recalibration by 2029 — extensions that would raise UIB to 24/24 coverage.
This article concludes the Universal Intelligence Benchmark series. Across eleven articles, we have progressed from diagnosing the measurement crisis in AI benchmarking, through defining eight intelligence dimensions and their scoring methodologies, to building an open-source evaluation suite and projecting the field’s trajectory through 2035. The central thesis holds: intelligence measurement requires multidimensional, cost-normalized, reproducible evaluation — and the field is converging toward this view, even if current practice lags behind. The UIB framework, with the extensions proposed here, is positioned to serve as a reference implementation for the next generation of AI evaluation.
Data and analysis code: github.com/stabilarity/hub/tree/master/research/uib-future
## References
1. Stabilarity Research Hub (2026). The Future of Intelligence Measurement: A 10-Year Projection. DOI: 10.5281/zenodo.19375898.
2. Ivchenko (2026). The UIB Open-Source Benchmark Suite: Architecture, Reproducibility Guarantees, and Community Validation Protocol. Stabilarity Research Hub.
3. Akhtar, M. et al. (2026). When AI Benchmarks Plateau: A Systematic Study of Benchmark Saturation. arXiv.
4. Phan, L. et al. (2025). Humanity's Last Exam. arXiv.
5. Guo et al. (2025). A Survey on Large Language Model Benchmarks. arXiv:2508.15361.
6. Xu, C.; Kechadi, M-T. (2025). Analysis of Semantic Benchmark Data Contamination Attack for LLM-Driven Fake News Detection.
7. Liang, P. et al. (2023). Holistic Evaluation of Language Models. arXiv:2211.09110.
8. Biderman, S. et al. (2024). Lessons from the Trenches on Reproducible Evaluation of Language Models. arXiv.
9. ArenaBencher (2025). arXiv.
10. Adaptive Utility Framework (2026). arXiv.
11. Chollet, F. et al. (2026). ARC-AGI-3. arXiv.
12. Khanh, T. X.; Quynh Hoa, T.; Trung, L. D. (2026). Benchmark Leakage, Plasticity Loss, and Regime Transitions in Large Language Models.
13. Grace, K. et al. (2025). arXiv.
14. Müller, V. and Bostrom, N. (2025). arXiv.
15. Abou Ali, M.; Dornaika, F.; Charafeddine, J. (2025). Agentic AI: a comprehensive survey of architectures, applications, and future directions.
16. Multi-Agent Evaluation (2026). arXiv.
17. Efficient Benchmarking of AI Agents (2026). arXiv.
18. Adaptive Monitoring (2025). arXiv.