The Computer & Math 33%: Why the Most AI-Capable Occupation Group Still Automates Only a Third of Its Tasks

Posted on March 15, 2026 · AI Economics · Academic Research · Article 49 of 49
By Oleh Ivchenko · Analysis reflects publicly available data and independent research. Not investment advice.


OPEN ACCESS · Zenodo (CERN) Open Preprint Repository · CC BY 4.0
📚 Academic Citation: Ivchenko, Oleh (2026). The Computer & Math 33%: Why the Most AI-Capable Occupation Group Still Automates Only a Third of Its Tasks. Odessa National Polytechnic University, Department of Economic Cybernetics.
DOI: 10.5281/zenodo.19040207 · View on Zenodo (CERN)


Abstract

The Anthropic Economic Index (Massenkoff & McCrory, 2026) identifies computer and mathematical occupations as theoretically the most AI-exposed occupation group in the U.S. economy, with 94% of tasks rated as feasible for LLM acceleration. Yet observed automation covers only 33% of those tasks — producing a 61-percentage-point capability-adoption gap that is the largest absolute gap of any occupation group. This article, the fourth in the Capability-Adoption Gap Mini-Series, investigates the structural, organizational, and economic forces that prevent the occupation group most fluent in AI from fully deploying it. Drawing on Federal Reserve survey data, BLS 2024–2034 projections, SHRM displacement research, and McKinsey skill-demand analysis, we argue that the 33% figure reflects not technological immaturity but a rational equilibrium shaped by verification costs, liability asymmetry, toolchain fragmentation, and the irreducibility of architectural judgment. The gap is not a failure of adoption — it is the market price of trust in non-deterministic systems.

flowchart TD
    A["Theoretical AI Exposure<br/>94% of Tasks"] --> B["Observed Automation<br/>33% of Tasks"]
    A --> C["Capability-Adoption Gap<br/>61 Percentage Points"]
    C --> D["Verification Costs"]
    C --> E["Liability Asymmetry"]
    C --> F["Toolchain Fragmentation"]
    C --> G["Architectural Irreducibility"]
    D --> H["Rational Equilibrium:<br/>33% Coverage"]
    E --> H
    F --> H
    G --> H

The 33% Anomaly: Setting the Scene

When Anthropic published its Economic Index and accompanying labor-market paper in January 2026, the headline finding for computer and mathematical occupations was superficially reassuring: the occupation group most capable of using AI was also the group using it most. Claude covered 33% of all tasks in the category — the highest observed coverage of any occupation group, exceeding management (approximately 28%), business and financial operations (approximately 22%), and education (approximately 18%) (Massenkoff & McCrory, 2026).

But the same data revealed a deeper puzzle. The theoretical exposure score for computer and mathematical occupations, as estimated by Eloundou et al. (2023) and validated by Anthropic’s own task-matching methodology, stood at 94%. No other occupation group had both such high theoretical potential and such a large absolute gap between what AI could do and what it actually does in professional settings.

The Federal Reserve corroborated the adoption picture from a different angle. In their February 2025 FEDS Notes analysis, Board of Governors economists found that generative AI adoption at work was highest for computer and mathematical occupations at 49.6%, compared to 49.0% for management and 32.7% for business and financial operations (Federal Reserve Board, 2025). The St. Louis Fed’s Rapid Adoption survey measured that workers in computer and mathematical occupations spent between 9 and 12 percent of their weekly work hours using generative AI tools — the highest of any occupation group (Bick et al., 2025).

The paradox is clear: the occupation group that uses AI the most, the most often, and for the most hours per week still automates only a third of its feasible task space. This article investigates why.

From the Series: The Gap as a Structural Phenomenon

This is the fourth article in the Capability-Adoption Gap Mini-Series. The series examines why the distance between theoretical AI capability and observed AI usage varies so dramatically across occupation groups — and what those variations reveal about the true economics of AI deployment. Computer and mathematical occupations present a fundamentally different case. Unlike healthcare, where regulatory barriers are external and institutional, and unlike law, where liability is personal and professional, the barriers in computer and math occupations are internal to the work itself. The gap exists not because someone prevents adoption, but because the nature of the remaining 61% of tasks resists the current modality of AI assistance.

What the 33% Actually Covers

Understanding the gap requires understanding what falls inside the 33%. The Anthropic Economic Index methodology maps Claude usage against O*NET task descriptions, weighting by whether usage is augmentative (half weight) or automated (full weight), and whether usage occurs in work-related contexts (Massenkoff & McCrory, 2026). For computer and mathematical occupations, the tasks that fall inside the 33% observed coverage cluster around several well-defined categories:

  • Code generation and completion. GitHub Copilot has achieved 90% adoption among Fortune 100 companies, with a 46% code completion rate and approximately 30% suggestion acceptance rate (GitHub & Accenture, 2026). Approximately 92% of developers now use AI tools in some part of their workflow (index.dev, 2026). This represents the highest-density zone of AI coverage in any occupation.
  • Documentation and technical writing. Summarizing codebases, generating API documentation, drafting README files, and creating technical specifications — tasks where LLMs perform reliably and where verification costs are low.
  • Data transformation and analysis scripting. Writing SQL queries, data-cleaning scripts, statistical analysis code, and visualization generation — tasks with clear input-output specifications and verifiable results.
  • Debugging assistance and error interpretation. Interpreting stack traces, suggesting fixes for common patterns, and explaining error messages — high-frequency, low-risk augmentation tasks.
  • Testing and quality assurance scaffolding. Generating unit tests, writing test fixtures, and producing boilerplate test infrastructure.

pie title Distribution of AI-Covered Tasks in Computer & Math (Estimated)
    "Code Generation & Completion" : 35
    "Documentation & Technical Writing" : 20
    "Data Transformation & Analysis" : 18
    "Debugging & Error Interpretation" : 15
    "Testing & QA Scaffolding" : 12

These tasks share common properties: they have verifiable outputs (code either compiles or doesn’t, tests either pass or don’t), low liability for errors (a wrong code suggestion is caught in review), and established feedback loops (CI/CD pipelines, code review, type checkers). In the language of decision theory, they are tasks where the cost of verification is substantially lower than the cost of production.
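The Index’s weighting scheme (automated usage at full weight, augmentative at half weight) can be sketched in a few lines. The task shares below are hypothetical, chosen only so the arithmetic lands on the published 33% figure; they are not Anthropic’s actual task-level data:

```python
# Illustrative sketch of the Anthropic Economic Index weighting scheme:
# automated task usage counts at full weight, augmentative at half weight.
# Task shares below are invented for illustration, not empirical estimates.

def observed_coverage(tasks):
    """Weighted share of an occupation's task space covered by AI usage."""
    total = sum(t["share"] for t in tasks)
    covered = sum(
        t["share"] * (1.0 if t["mode"] == "automated" else 0.5)
        for t in tasks
        if t["mode"] in ("automated", "augmentative")
    )
    return covered / total

tasks = [
    {"share": 0.20, "mode": "automated"},     # e.g. boilerplate generation
    {"share": 0.26, "mode": "augmentative"},  # e.g. Copilot-assisted coding
    {"share": 0.54, "mode": "none"},          # e.g. architecture, security
]
print(round(observed_coverage(tasks), 2))  # 0.33
```

The halving of augmentative usage matters: a large augmentative footprint contributes far less to measured coverage than the same footprint fully automated.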

What the Remaining 61% Looks Like

The uncovered 61% — the tasks that AI could theoretically accelerate but does not in practice — reveals the true architecture of the gap. These tasks fall into several structural categories:

Architectural Decision-Making

Software architecture involves selecting among design patterns, evaluating trade-offs between competing non-functional requirements (scalability, maintainability, security, cost), and making decisions whose consequences unfold over months or years. These decisions are:

  • Context-dependent in ways that exceed current context windows. A system’s architecture depends on team capabilities, organizational structure, regulatory environment, existing technical debt, vendor relationships, and budget constraints — information that rarely exists in any single document.
  • Consequence-laden across time horizons that make verification impossible at decision time. An architectural choice that seems optimal today may create catastrophic technical debt in eighteen months.
  • Politically embedded. Architecture decisions in enterprises are negotiated outcomes that balance technical merit with organizational power dynamics, budget allocations, and career incentives.

No LLM currently takes responsibility for architectural decisions, and no rational engineering leader delegates them. The 84% of corporate Copilot users who “wouldn’t go back to working without it” are not using it for architecture — they are using it for the verified-output tasks described above (GitHub, 2026).

System Integration and Legacy Interaction

Enterprise software engineering involves connecting systems that were designed independently, often decades apart, with incompatible data models, authentication mechanisms, and failure modes. Integration tasks require:

  • Understanding undocumented system behavior (not just documented APIs)
  • Navigating organizational boundaries between teams that own different systems
  • Managing state across distributed transactions with partial failure modes
  • Operating within compliance frameworks that constrain implementation choices

These tasks are theoretically feasible for LLMs — Eloundou et al. rate them at β=0.5 or β=1.0 — but they require environmental access, institutional knowledge, and consequence management that current tool-augmented LLMs do not possess.

Security-Critical Implementation

SHRM’s October 2025 research found that 19.2% of computer and mathematical jobs utilize AI tools for at least half of their tasks — but this adoption rate drops precipitously for security-sensitive work (SHRM, 2025). Cryptographic implementation, access control logic, input validation, and security audit work remain overwhelmingly manual because:

  • The cost of an AI-introduced vulnerability is asymmetric (one mistake can compromise an entire system)
  • Security code requires adversarial thinking that current LLMs handle poorly
  • Compliance frameworks (SOC 2, ISO 27001, PCI DSS) require auditable human decision chains

Mathematical and Algorithmic Research

The “mathematical” half of “computer and mathematical” occupations includes statisticians, actuaries, operations research analysts, and data scientists performing novel analysis. These roles involve:

  • Formulating new mathematical models (not just applying existing ones)
  • Evaluating whether statistical assumptions hold for specific data
  • Interpreting results in domain context
  • Communicating uncertainty to non-technical stakeholders

While LLMs excel at applying known statistical techniques, they struggle with the creative and evaluative aspects of mathematical research — precisely the aspects that define the occupation.

The Economics of the Gap: Five Forces

The 61-percentage-point gap is not random. It is maintained by five economic forces that create a stable equilibrium:

Force 1: Verification Cost Asymmetry

For the tasks inside the 33%, verification is cheap. Code compiles or it doesn’t. Tests pass or they don’t. Documentation is readable or it isn’t. For the tasks outside the 33%, verification is expensive — often approaching or exceeding the cost of doing the task manually. Reviewing an AI-generated architectural decision requires the same expertise as making one. Auditing AI-generated security code requires more effort than writing it, because the reviewer must also consider adversarial edge cases the AI might have missed.

This creates a natural boundary: AI adoption expands until the marginal verification cost equals the marginal production cost savings. At 33%, the computer and mathematical occupation group has reached that boundary for current tool capabilities.
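The boundary condition can be made concrete with a toy cost comparison. The task names and cost figures below are hypothetical units invented for illustration, not measured values:

```python
# Hedged sketch of the Force-1 boundary: AI is worth adopting on a task only
# while verifying its output costs less than producing the output manually.
# All cost figures are made-up illustrative units.

def adopt_ai(production_cost, verification_cost):
    """Adopt AI when checking its output is cheaper than doing the task."""
    return verification_cost < production_cost

tasks = {
    "unit test scaffolding":  {"production": 10, "verification": 2},
    "SQL query drafting":     {"production": 8,  "verification": 3},
    "architecture decision":  {"production": 40, "verification": 45},  # review ~ redo
    "security-critical code": {"production": 20, "verification": 30},  # audit > write
}
adopted = [name for name, c in tasks.items()
           if adopt_ai(c["production"], c["verification"])]
print(adopted)  # ['unit test scaffolding', 'SQL query drafting']
```

The two rejected tasks mirror the article’s argument: once verification costs rival or exceed production costs, manual execution remains the rational choice.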

Force 2: Liability Without Insurance

Unlike healthcare (malpractice insurance) and law (professional liability insurance), software engineering has no established framework for insuring against AI-generated defects. When a GitHub Copilot suggestion introduces a vulnerability, the liability falls on the developer who accepted it, the team that reviewed it, and the organization that deployed it. No AI vendor currently accepts consequential liability for generated code (GitHub Copilot Terms of Service, 2026). This liability gap creates rational caution. Organizations adopt AI for tasks where errors are easily caught and cheaply fixed, and avoid it for tasks where errors propagate silently and expensively.

Force 3: Toolchain Fragmentation

The 92% developer adoption rate for AI tools masks enormous fragmentation. Developers use GitHub Copilot for code completion, ChatGPT for explanation, Claude for analysis, specialized tools for testing — but no single tool covers the full software development lifecycle. Each tool has different context windows, different training data cutoffs, different strengths, and different failure modes. This fragmentation means that even when a tool could theoretically cover a task, the workflow switching cost makes manual execution more efficient for complex, multi-step tasks. McKinsey’s finding that three-quarters of AI skill demand is concentrated in computer and mathematical roles, management, and business and financial operations (McKinsey, 2026) reflects not just demand for AI users but demand for professionals who can navigate this fragmented toolchain.

Force 4: The Augmentation Trap

Anthropic’s methodology weights automated usage at full value and augmentative usage at half value. For computer and mathematical occupations, the overwhelming majority of AI usage is augmentative — developers use AI suggestions as starting points, then modify, extend, and integrate them. The 30% suggestion acceptance rate for GitHub Copilot means that 70% of AI output in the most AI-adopted tool is rejected or substantially modified (index.dev, 2026). This augmentation pattern creates a ceiling: as long as AI output requires human judgment to evaluate and integrate, the observed exposure (which weights augmentation at 0.5×) will remain substantially below the theoretical exposure (which counts any feasibility as 1.0).
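The ceiling this weighting implies is easy to compute. Using the article’s 94% exposure figure (the automated shares are hypothetical), even a fully augmentative deployment could never register more than half the theoretical exposure:

```python
# Minimal sketch of the ceiling implied by the 0.5x augmentation weight.
# If AI touched every feasible task but only augmentatively, measured
# coverage could not exceed half of theoretical exposure.

def coverage_ceiling(theoretical_exposure, automated_share):
    """Max observed coverage when the rest of the feasible space is augmented."""
    augmentative_share = theoretical_exposure - automated_share
    return automated_share * 1.0 + augmentative_share * 0.5

# Augmentation-only ceiling for computer & math occupations (94% exposure):
print(coverage_ceiling(0.94, 0.0))  # 0.47
```

Seen this way, the observed 33% sits much closer to the augmentation-only ceiling of 47% than to the theoretical 94%, which is the ceiling the paragraph above describes.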

Force 5: The Expertise Paradox

The BLS projects computer and mathematical occupations to grow at 10.1% from 2024 to 2034 — the second-fastest of any occupation group and more than three times the economy-wide average of 3.1% (BLS, 2026). This growth occurs despite — or perhaps because of — high AI exposure. The paradox resolves when we recognize that AI adoption in this occupation group increases demand for the expertise needed to direct, verify, and integrate AI output. Information security analysts, projected to grow 28.5% over the decade, exemplify this dynamic: the more AI generates code, the more security expertise is needed to audit it (BLS, 2026).

flowchart LR
    subgraph "Five Forces Maintaining the 61% Gap"
        V["Verification Cost<br/>Asymmetry"]
        L["Liability Without<br/>Insurance"]
        T["Toolchain<br/>Fragmentation"]
        A["Augmentation<br/>Trap"]
        E["Expertise<br/>Paradox"]
    end
    V --> EQ["Stable Equilibrium<br/>at ~33%"]
    L --> EQ
    T --> EQ
    A --> EQ
    E --> EQ
    EQ --> O1["High Adoption<br/>Low Coverage"]
    EQ --> O2["Employment Growth<br/>Despite Exposure"]
    EQ --> O3["Demand Shift<br/>Not Displacement"]

Comparative Gap Analysis Across the Mini-Series

The four occupation groups examined in this mini-series reveal fundamentally different gap structures:

| Occupation Group | Theoretical Exposure | Observed Coverage | Gap | Primary Barrier |
|---|---|---|---|---|
| Healthcare (Article 2) | ~35% | ~5% | ~30 pp | Regulatory moats (FDA, clinical liability) |
| Legal (Article 3) | ~65% | ~15% | ~50 pp | Liability transfer failure |
| Computer & Math (Article 4) | 94% | 33% | 61 pp | Verification costs + architectural irreducibility |
| All Occupations (Baseline) | ~56% | ~14% | ~42 pp | Composite of all barriers |

The computer and mathematical group has both the highest absolute gap (61 points) and the highest adoption (33% coverage). This combination suggests that the gap is not primarily caused by organizational resistance or regulatory friction — the barriers that dominate in healthcare and law — but by the intrinsic difficulty of verifying AI output on complex, consequence-laden tasks.

Implications for the Coverage Gap Dashboard

Article 5 of this mini-series will construct a Coverage Gap Dashboard that tracks the convergence (or persistence) of these gaps over time. For computer and mathematical occupations, the key metrics to monitor include:

  1. Suggestion acceptance rates for AI coding tools (currently ~30%; approaching 50% would signal a phase transition in trust)
  2. AI-generated code in production as a percentage of total commits (currently estimated at 15–25% across enterprises)
  3. Security incident attribution to AI-generated code (no systematic data exists; its emergence would reshape the liability landscape)
  4. Architectural AI tool adoption (currently near zero for consequential decisions; tools like Anthropic’s extended thinking and OpenAI’s o-series models are beginning to enter this space)
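One way these signals could be operationalized in the dashboard is a simple threshold check. The metric names, current values, and thresholds below are placeholders drawn loosely from the list above, not published data:

```python
# Hypothetical sketch of the dashboard signals listed above: each metric gets
# a threshold whose crossing would mark a regime change in the gap.
# All values are illustrative placeholders.

signals = {
    "suggestion_acceptance_rate": {"current": 0.30, "threshold": 0.50},
    "ai_code_share_of_commits":   {"current": 0.20, "threshold": 0.50},
}

def phase_transitions(signals):
    """Names of metrics that have crossed their regime-change threshold."""
    return [name for name, s in signals.items()
            if s["current"] >= s["threshold"]]

print(phase_transitions(signals))  # [] — no metric has crossed its threshold yet
```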

Conclusions

The 33% coverage figure for computer and mathematical occupations is not evidence of slow adoption. It is the market-clearing price of a rational calculation: adopt AI where verification is cheap and liability is low, and maintain human control where errors are expensive and consequences are long-lived. This interpretation has three implications for AI economics.

First, aggregate AI exposure metrics overstate displacement risk. The 94% theoretical exposure for computer and math occupations has generated headlines about imminent job loss, but the 33% observed coverage — and the structural forces maintaining the 61-point gap — suggest that displacement will be gradual, selective, and concentrated in verifiable-output tasks.

Second, the gap is not primarily a technology problem. Better models will narrow it at the margins, but the core barriers — verification costs, liability asymmetry, and architectural irreducibility — are economic and organizational, not computational. Closing the gap requires institutional innovation (AI liability insurance, automated verification frameworks, integrated toolchains) as much as model improvement.

Third, the expertise paradox means that the occupation group most exposed to AI is also the occupation group most likely to benefit from it. The BLS projects 10.1% growth — not despite AI exposure, but because AI adoption creates demand for the human judgment needed to direct, verify, and integrate AI output.

The 33% is not a threat to computer and mathematical occupations. It is the foundation of their continued relevance.

Series: Capability-Adoption Gap Mini-Series (#4 of 5)
Next: Article 5 — Velocity Matters: Building the Coverage Gap Dashboard
Previous: The Legal 15%: Liability Is Not a Technical Problem


Oleh Ivchenko is a PhD candidate in Economic Cybernetics at Odessa National Polytechnic University and Innovation Tech Lead at Capgemini Engineering. His research focuses on ML-driven decision frameworks for enterprise economics.
