The Coverage Gap: What AI Can Do vs. What We Actually Use It For

Posted on March 8, 2026 (updated March 9, 2026) · AI Economics · Academic Research · Article 40 of 49
By Oleh Ivchenko · Analysis reflects publicly available data and independent research. Not investment advice.


Open Access · Zenodo (CERN Open Preprint Repository) · CC BY 4.0
📚 Academic Citation: Ivchenko, Oleh (2026). The Coverage Gap: What AI Can Do vs. What We Actually Use It For. Odessa National Polytechnic University, Department of Economic Cybernetics.
DOI: 10.5281/zenodo.18911661 · View on Zenodo (CERN)

Anthropic published something rare this week: a paper that uses actual usage data instead of speculation. Most labor displacement research asks “what tasks could AI theoretically do?” and then declares a crisis. Massenkoff and McCrory asked a different question: “what tasks are people actually using it for?” The gap between those two answers is the most important number in AI economics right now — and the paper stops just short of explaining what it means.

What the Paper Says

Massenkoff and McCrory introduce a new metric they call observed exposure — a measure of how AI is actually being used in work contexts, derived from real Claude usage data weighted by autonomy (fully automated uses count more than human-assisted ones) and work-relatedness. This is contrasted with theoretical exposure, the standard approach in prior literature: task-level capability assessments of what AI could do.

The key findings are direct:

  • Computer and Math: 94% theoretical exposure, 33% observed coverage.
  • Office and Administrative: 90% theoretical, 25% observed.
  • Business and Financial: 85% theoretical, 20% observed.
  • Legal: 80% theoretical, 15% observed.
  • Healthcare Support: 40% theoretical, 5% observed.
  • Construction: 15% theoretical, 2% observed.

On employment effects: no systematic increase in unemployment for highly exposed workers since late 2022, and suggestive — but weak — evidence that hiring of younger workers has slowed in exposed occupations. The paper’s conclusion is measured: “the track record of past approaches gives reason for humility.”

xychart-beta
    title "Theoretical vs Observed AI Coverage by Occupation (March 2026)"
    x-axis ["Computer & Math", "Office & Admin", "Business & Finance", "Legal", "Healthcare", "Construction"]
    y-axis "Coverage %" 0 --> 100
    bar [94, 90, 85, 80, 40, 15]
    line [33, 25, 20, 15, 5, 2]
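The figures in the chart reduce to two derived quantities: the absolute gap in percentage points, and the overstatement factor (theoretical divided by observed). A minimal sketch using the paper's reported numbers; the dictionary layout and function names are mine, not the paper's:

```python
# Theoretical vs observed AI exposure by occupation group (March 2026),
# as reported by Massenkoff & McCrory. Values in percent.
COVERAGE = {
    "Computer & Math":    (94, 33),
    "Office & Admin":     (90, 25),
    "Business & Finance": (85, 20),
    "Legal":              (80, 15),
    "Healthcare Support": (40, 5),
    "Construction":       (15, 2),
}

def coverage_gap(theoretical: float, observed: float) -> float:
    """Absolute gap in percentage points."""
    return theoretical - observed

def overstatement(theoretical: float, observed: float) -> float:
    """How many times theoretical exposure overstates actual use."""
    return theoretical / observed

for sector, (t, o) in COVERAGE.items():
    print(f"{sector:20s} gap={coverage_gap(t, o):3.0f}pp  "
          f"overstatement={overstatement(t, o):.1f}x")
```

Running this makes the sector spread visible at a glance: the white-collar groups cluster around a 3–5x overstatement, while low-observed sectors like Healthcare Support sit even higher.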

Where the Community Is Right

The consensus reaction — that this paper represents a methodological improvement over prior O*NET-based theoretical exposure studies — is correct. The Acemoglu-style task-capability framework that has dominated AI labor economics for the past three years treats AI capability as binary: either AI can do a task or it cannot. Massenkoff and McCrory are right that this misses the adoption curve entirely.

The Denmark study (Humlum, 2025) found no wage or employment effect from generative AI tools, which confused researchers using theoretical exposure models. Observed exposure explains why: theoretical capability overstates actual penetration by 3–5x depending on sector. The BLS projection finding — that higher observed exposure correlates with lower projected job growth through 2034 — is also the right signal to track. Observed exposure is a leading indicator in a way theoretical exposure never was.

Where I Think They Stop Too Early

My reading of this paper is that the coverage gap — the chasm between 94% theoretical and 33% observed in Computer & Math — is treated as a curiosity rather than the central economic phenomenon that requires explanation. Massenkoff and McCrory measure it precisely and then move on. That gap is the whole story. There are three possible explanations for why observed coverage is 3–5x below theoretical capability, and each has radically different economic implications. The paper does not distinguish between them.

  • Explanation 1: Adoption lag. Organizations know AI can do these tasks but haven’t deployed yet due to procurement cycles, integration costs, or risk aversion. If this is the dominant explanation, the gap closes fast — and the unemployment signal appears in 18–36 months.
  • Explanation 2: Quality threshold mismatch. AI can theoretically do the task, but the output quality isn’t yet at the bar required for deployment without heavy human review, and the marginal cost of review erases the economic benefit. This explains why healthcare shows 40% theoretical and 5% observed — medical-grade reliability is a different bar than “good enough.”
  • Explanation 3: Organizational friction. The task is technically automatable and quality is sufficient, but workflow integration, legal liability, or institutional inertia prevents deployment. This is a structural gap that technology alone cannot close.

The paper treats all three as equivalent. They are not. If Explanation 1 dominates, labor economists have perhaps 2–3 years before the signal becomes unmistakable. If Explanation 3 dominates, we may have a decade. The policy responses are entirely different. Observability into why the gap exists matters more than the gap’s magnitude.

My Assumptions

I want to be explicit about what I’m assuming when I read this data:

  1. The coverage gap is not uniform across the 3–5x range — different sectors are dominated by different explanations. Healthcare is Explanation 2 (quality threshold); Legal is probably Explanation 3 (liability); Computer & Math is likely Explanation 1 (adoption lag closing fast).
  2. The “no unemployment effect” finding is a lag artifact, not evidence of AI’s economic harmlessness. The Denmark study found the same thing 18 months after widespread adoption. Employment effects in knowledge work typically surface 24–48 months after capability deployment.
  3. The younger-worker hiring slowdown the paper identifies is the most important signal in the entire paper — more important than the unemployment data — because it indicates structural demand reduction for entry-level cognitive labor, which precedes displacement in older cohorts by several years.

I may be wrong about assumption 2. It is possible that AI augmentation increases knowledge worker productivity enough to expand the total market, absorbing displaced workers into new roles. I consider this unlikely at the pace of current capability growth, but it is a defensible position.
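These hypotheses can be written down as an explicit, falsifiable mapping. A sketch: the sector assignments restate my assumption 1, and the horizons restate the timelines argued in the previous section; none of this comes from the paper itself, and the middle case deliberately has no number because the text gives none:

```python
from enum import Enum

class GapExplanation(Enum):
    ADOPTION_LAG = 1        # procurement cycles, integration cost, risk aversion
    QUALITY_THRESHOLD = 2   # output not yet at the deployment-quality bar
    ORG_FRICTION = 3        # workflow, liability, institutional inertia

# My sector hypotheses (assumptions from this article, NOT paper findings).
SECTOR_HYPOTHESIS = {
    "Computer & Math":    GapExplanation.ADOPTION_LAG,
    "Healthcare Support": GapExplanation.QUALITY_THRESHOLD,
    "Legal":              GapExplanation.ORG_FRICTION,
}

def policy_horizon(e: GapExplanation) -> str:
    """Rough time before the displacement signal becomes unmistakable,
    per the argument in the text."""
    if e is GapExplanation.ADOPTION_LAG:
        return "2-3 years"
    if e is GapExplanation.ORG_FRICTION:
        return "up to a decade"
    return "depends on capability growth"  # no explicit number in the text
```

The value of writing it this way is that each entry is a testable prediction: if Computer & Math coverage stalls, assumption 1 is wrong for that sector.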

The Missing Focus: Observability of the Gap Itself

Here is what the paper doesn’t address and what I think the next paper needs to do: instrument the coverage gap in real time. Massenkoff and McCrory have built the measurement framework. What they have not built is a monitoring system that tracks which occupations’ observed coverage is accelerating toward their theoretical ceiling — and at what rate.

Computer & Math is at 33% of a 94% ceiling. If observed coverage is growing at 5 percentage points per quarter, the gap closes in roughly 3 years. If it’s growing at 1 point per quarter, we have 15 years. Those are radically different labor market situations requiring radically different policy responses. This is an observability problem. The Anthropic Economic Index has the data infrastructure to answer this question month by month. The paper treats the March 2026 snapshot as a baseline. What enterprise risk managers, policymakers, and workers actually need is the velocity of gap closure, not the gap’s current size.
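The timelines above are a linear extrapolation. A sketch, assuming a constant closure rate, which is itself a strong simplification since real adoption curves are rarely linear:

```python
def quarters_to_close(observed: float, ceiling: float,
                      rate_pp_per_quarter: float) -> float:
    """Quarters until observed coverage reaches its theoretical ceiling,
    assuming a constant linear closure rate (a strong simplification)."""
    if rate_pp_per_quarter <= 0:
        raise ValueError("closure rate must be positive")
    return (ceiling - observed) / rate_pp_per_quarter

# Computer & Math: 33% observed against a 94% theoretical ceiling.
fast = quarters_to_close(33, 94, 5.0)  # 12.2 quarters, about 3 years
slow = quarters_to_close(33, 94, 1.0)  # 61 quarters, about 15 years
print(f"fast scenario: {fast:.1f} quarters (~{fast / 4:.0f} years)")
print(f"slow scenario: {slow:.1f} quarters (~{slow / 4:.0f} years)")
```

The point of the sketch is that the entire policy question hinges on one parameter, `rate_pp_per_quarter`, which is exactly the quantity the paper does not measure.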

flowchart TD
    subgraph "What the Paper Measures"
        A1[Theoretical Exposure per Occupation]
        A2[Observed Exposure — actual Claude usage]
        A3[Coverage Gap = A1 minus A2]
        A4[Employment correlation with exposure]
    end
    subgraph "What's Missing"
        B1[Rate of gap closure per quarter]
        B2[Which explanation dominates per sector]
        B3[Younger-worker hiring velocity — leading indicator]
        B4[Sector-level observability dashboard]
    end
    A3 -->|"measured once"| C[March 2026 Snapshot]
    B1 -->|"needed continuously"| D[Early Warning System]
    style B1 fill:#fff,stroke:#111,stroke-width:2px
    style B2 fill:#fff,stroke:#111,stroke-width:2px
    style B3 fill:#fff,stroke:#111,stroke-width:2px
    style B4 fill:#fff,stroke:#111,stroke-width:2px

The XAI angle is directly relevant here. If we cannot explain why AI is being used for 33% of Computer & Math tasks rather than 94%, we cannot predict when that changes. Interpretability of usage patterns — which task types attract autonomous vs. augmentative use, and why — is not just a safety question. It is the key input to any credible labor market forecast.

Practical Implications

If you are a knowledge worker in Computer & Math, Office & Admin, or Business & Finance, the question this paper raises is not “will I be displaced?” The question is “which of the three explanations applies to my specific role, and how fast is the coverage gap in my sector closing?”

If you are building enterprise AI systems, the coverage gap is your market map. The sectors with high theoretical exposure and low observed coverage are where adoption friction is highest — and where observability, integration tooling, and quality guarantees are the actual product. Healthcare at 5% observed against 40% theoretical is not a market where “build a smarter model” wins. It is a market where “build a clinically auditable, liability-clear deployment pipeline” wins.

If you are a policymaker, the younger-worker hiring slowdown is the signal to watch. It is the canary that precedes structural displacement by 2–5 years. Acting after unemployment rises is acting after the damage is done.

Closing

Massenkoff and McCrory have built the right instrument. The coverage gap between theoretical and observed AI exposure is real, measurable, and sector-specific. What they have not yet answered — and what the next paper needs to address — is the velocity at which that gap is closing, sector by sector, and why. That is not a criticism of this paper. It is the natural next question. The most important number in AI labor economics is not the gap’s current size. It is the rate at which it is shrinking.

quadrantChart
    title Coverage Gap Explanation Framework by Sector (2026)
    x-axis "Low Organizational Friction" --> "High Organizational Friction"
    y-axis "Low Quality Gap" --> "High Quality Gap"
    quadrant-1 Mixed Structural + Quality
    quadrant-2 Explanation 2 dominates Quality Threshold
    quadrant-3 Explanation 1 dominates Adoption Lag
    quadrant-4 Explanation 3 dominates Organizational Friction
    Computer and Math: [0.2, 0.2]
    Office and Admin: [0.4, 0.3]
    Business and Finance: [0.5, 0.4]
    Legal: [0.7, 0.5]
    Healthcare Support: [0.5, 0.85]
    Construction: [0.8, 0.6]

References:

  • Massenkoff, M. & McCrory, P. (2026). Labor market impacts of AI: A new measure and early evidence. Anthropic Economic Index. https://www.anthropic.com/research/labor-market-impacts
  • Bureau of Labor Statistics (2024). Occupational Outlook Handbook, 2024–2034 Projections. https://www.bls.gov/ooh/
  • Humlum, A. (2025). The Labor Market Impact of AI Tools. University of Chicago / NBER. https://doi.org/10.2139/ssrn.4853799
  • Acemoglu, D. & Restrepo, P. (2022). Tasks, Automation, and the Rise in U.S. Wage Inequality. Econometrica 90(5). https://doi.org/10.3982/ECTA19815
  • Autor, D., Levy, F. & Murnane, R. (2003). The Skill Content of Recent Technological Change: An Empirical Exploration. Quarterly Journal of Economics 118(4). https://doi.org/10.1162/003355303322552801
  • Amodei, D. (2026). On AI and the future of work. Anthropic Blog. https://www.anthropic.com