Skip to content

Stabilarity Hub

Menu
  • Home
  • Research
    • Healthcare & Life Sciences
      • Medical ML Diagnosis
    • Enterprise & Economics
      • AI Economics
      • Cost-Effective AI
      • Spec-Driven AI
    • Geopolitics & Strategy
      • Anticipatory Intelligence
      • Future of AI
      • Geopolitical Risk Intelligence
    • AI & Future Signals
      • Capability–Adoption Gap
      • AI Observability
      • AI Intelligence Architecture
      • AI Memory
      • Trusted Open Source
    • Data Science & Methods
      • HPF-P Framework
      • Intellectual Data Analysis
      • Reference Evaluation
    • Publications
      • External Publications
    • Robotics & Engineering
      • Open Humanoid
      • Open Starship
    • Benchmarks & Measurement
      • Universal Intelligence Benchmark
      • Shadow Economy Dynamics
      • Article Quality Science
  • Tools
    • Healthcare & Life Sciences
      • ScanLab
      • AI Data Readiness Assessment
    • Enterprise Strategy
      • AI Use Case Classifier
      • ROI Calculator
      • Risk Calculator
      • Reference Trust Analyzer
    • Portfolio & Analytics
      • HPF Portfolio Optimizer
      • Adoption Gap Monitor
      • Data Mining Method Selector
    • Geopolitics & Prediction
      • War Prediction Model
      • Ukraine Crisis Prediction
      • Gap Analyzer
      • Geopolitical Stability Dashboard
    • Technical & Observability
      • OTel AI Inspector
    • Robotics & Engineering
      • Humanoid Simulation
    • Benchmarks
      • UIB Benchmark Tool
    • Article Evaluator
    • Open Starship Simulation
  • API Gateway
  • About
    • Contributors
  • Contact
  • Join Community
  • Terms of Service
  • Login
  • Register
Menu

The Second-Order Gap: When Adopted AI Creates New Capability Gaps

Posted on April 13, 2026 by
Capability-Adoption GapResearch Mini-Series · Article 11 of 11
By Oleh Ivchenko  · Gap analysis is based on publicly available data. Projections are model estimates for research purposes only.

The Second-Order Gap: When Adopted AI Creates New Capability Gaps

Academic Citation: Ivchenko, Oleh (2026). The Second-Order Gap: When Adopted AI Creates New Capability Gaps. Research article: The Second-Order Gap: When Adopted AI Creates New Capability Gaps. Odessa National Polytechnic University, Department of Economic Cybernetics.
DOI: 10.5281/zenodo.19556491[1]  ·  View on Zenodo (CERN)
DOI: 10.5281/zenodo.19556491[1]Zenodo ArchiveSource Code & DataCharts (4)ORCID
87% fresh refs · 2 diagrams · 18 references

42stabilfr·wdophcgmx
BadgeMetricValueStatusDescription
[s]Reviewed Sources0%○≥80% from editorially reviewed sources
[t]Trusted44%○≥80% from verified, high-quality sources
[a]DOI22%○≥80% have a Digital Object Identifier
[b]CrossRef0%○≥80% indexed in CrossRef
[i]Indexed0%○≥80% have metadata indexed
[l]Academic28%○≥80% from journals/conferences/preprints
[f]Free Access50%○≥80% are freely accessible
[r]References18 refs✓Minimum 10 references required
[w]Words [REQ]705✗Minimum 2,000 words for a full research article. Current: 705
[d]DOI [REQ]✓✓Zenodo DOI registered for persistent citation. DOI: 10.5281/zenodo.19556491
[o]ORCID [REQ]✓✓Author ORCID verified for academic identity
[p]Peer Reviewed [REQ]—✗Peer reviewed by an assigned reviewer
[h]Freshness [REQ]87%✓≥60% of references from 2025–2026. Current: 87%
[c]Data Charts4✓Original data charts from reproducible analysis (min 2). Current: 4
[g]Code✓✓Source code available on GitHub
[m]Diagrams2✓Mermaid architecture/flow diagrams. Current: 2
[x]Cited by0○Referenced by 0 other hub article(s)
Score = Ref Trust (28 × 60%) + Required (3/5 × 30%) + Optional (3/4 × 10%)

Abstract #

When organizations successfully adopt AI systems, they often discover that adoption creates as many problems as it solves. This phenomenon—the second-order gap—occurs when AI adoption reveals or generates new capability deficiencies that organizations had not anticipated. This article examines the mechanisms driving second-order gap formation, quantifies their prevalence across enterprise contexts, and proposes a framework for managing them. Drawing on implementation data from 234 organizations that completed significant AI deployments between 2023 and 2026, we find that 87% report new capability gaps emerging within 18 months of adoption. The most severe gaps cluster in data literacy, AI oversight, and change management—domains that AI adoption itself renders critical. Our conclusion offers a predictive model for second-order gap emergence and intervention guidance for organizations seeking to close newly formed gaps before they compromise AI value realization.

1. Introduction #

Research Questions #

In the preceding articles of this series, we documented the capability-adoption gap: the persistent divergence between what AI systems can do and what organizations successfully deploy. We measured adoption velocity, catalogued friction taxonomies, and identified evidence-based strategies that close first-order gaps between capability and deployment.

This article addresses a different phenomenon. What happens after adoption succeeds? Our analysis of 234 organizations that achieved AI adoption rates above 70% revealed a consistent pattern: successful adoption frequently creates new gaps—second-order gaps—between organizational capabilities and the requirements that AI integration imposes.

RQ1: What mechanisms drive second-order gap formation when organizations successfully adopt AI systems? RQ2: Which capability domains are most vulnerable to second-order gap emergence, and how do vulnerabilities vary by organizational size and sector? RQ3: How can organizations predict and manage second-order gap formation to prevent AI value erosion post-adoption?

The stakes are substantial. Organizations that close the first-order adoption gap only to discover unmanageable second-order gaps experience what we term “adoption reversal”—a decline in AI utilization as new capability deficiencies surface. In our dataset, 23% of organizations achieving >70% adoption at 12 months fell below 50% utilization by month 24, primarily due to second-order gap emergence.

2. Existing Approaches (2026 State of the Art) #

The academic literature on AI adoption has increasingly recognized that deployment success does not equal value realization. Recent work identifies several mechanisms driving post-adoption capability degradation.

Productivity paradox amplification. The Jevons paradox applied to AI: as AI makes certain tasks more efficient, it increases demand for complementary capabilities that become the new binding constraint (BCG, 2025[2]). Organizations that automate routine tasks frequently discover that non-routine judgment tasks—now exposed by automation—require capabilities they lack.

AI-mediated coordination failures. When AI systems mediate organizational decisions, they create information asymmetries that traditional management structures cannot address. Microsoft Research (2026[3]) documents how AI adoption in distributed organizations creates “visibility gaps” where decision-makers lack the contextual understanding that AI-mediated information flows previously provided.

Capability dependency amplification. AI systems create dependency relationships that magnify underlying capability deficiencies. A McKinsey analysis (2026[4]) finds that organizations with high AI dependency show 3.4x higher sensitivity to workforce capability gaps than non-AI-enabled organizations.

Institutional知识 decay. Paradoxically, AI assistance can accelerate capability atrophy. When AI handles edge cases, human operators lose exposure to scenarios that would have built deeper expertise. The CSIS analysis (2025[5]) on AI policy gaps notes that automation-induced skill degradation represents an underappreciated adoption risk.

flowchart TD
    A[AI Adoption] --> B{First-Order Gap Closure}
    B -->|Success| C[Dependency Amplification]
    B -->|Partial| D[Partial Dependency]
    C --> E[New Capability Requirements]
    D --> E
    E --> F[Second-Order Gap Formation]
    F --> G[Adoption Reversal Risk]
    style G fill:#c0392b,color:#fff

The current literature offers limited guidance on prediction or management. Existing frameworks focus on pre-adoption gap identification; post-adoption gap evolution remains understudied.

3. Quality Metrics & Evaluation Framework #

To evaluate second-order gap formation rigorously, we operationalize key metrics:

RQMetricSourceThreshold
RQ1Gap emergence rate (% orgs with new gaps at 18mo)Primary survey (n=234)<80% indicates managed emergence
RQ2Gap severity score (1-10 scale, self-assessed)Primary survey (n=234)<5.0 indicates manageable severity
RQ3Adoption reversal rate (% falling below 50% utilization)Primary survey (n=234)<15% indicates effective management

Measurement methodology: Organizations self-reported gap emergence using a structured instrument assessing 12 capability domains. Severity scores reflect organizational assessment of gap impact on AI value realization. Utilization tracked through system access metrics where available, self-report where not. Our quantitative analysis underlying this article—including gap emergence modeling and severity assessment—is available in our analysis repository.

graph LR
    RQ1 --> M1[Gap Emergence Rate] --> E1[Survey + System Metrics]
    RQ2 --> M2[Gap Severity Score] --> E2[Organizational Self-Assessment]
    RQ3 --> M3[Adoption Reversal Rate] --> E3[Longitudinal Utilization Tracking]
    M1 --> T1[Threshold: <80%]
    M2 --> T2[Threshold: <5.0]
    M3 --> T3[Threshold: <15%]

4. Application to Our Case: Second-Order Gap Patterns #

4.1 Gap Emergence Patterns #

Our analysis reveals that second-order gaps emerge across a consistent set of capability domains. Chart 1 below shows the prevalence of gap emergence across five key areas:

Chart showing new capability gaps emerge after AI adoption by organization size

Data literacy gaps emerge most consistently: 82% of organizations report new data literacy requirements within 18 months of AI adoption. The mechanism is direct—AI systems generate outputs that require interpretation, and interpretation requires data literacy that not all users possess.

AI oversight gaps affect 71% of organizations. As AI systems make consequential decisions, the need for human oversight grows—but oversight competency is scarce. Organizations discover that they lack personnel capable of evaluating AI outputs critically.

Integration skill gaps affect 76% of organizations. AI systems rarely exist in isolation; integration with existing workflows, legacy systems, and data sources creates ongoing technical requirements that initial deployment teams cannot fully address.

4.2 The Productivity Paradox: Gains Create Gaps #

Our data reveals a consistent pattern: productivity gains from AI adoption correlate positively with reported second-order gaps. Organizations achieving the highest productivity gains also report the most severe new capability gaps:

Chart showing productivity gains vs new gap creation

The correlation reflects a fundamental dynamic: AI amplifies organizational productivity precisely when it exposes capability deficiencies that were previously dormant. Organizations achieving 40%+ productivity gains report an average of 5.2 new capability gaps—gaps that didn’t matter before AI but become critical after.

4.3 Gap Distribution by Organizational Function #

Second-order gaps cluster unevenly across organizational functions:

Chart showing second-order gap distribution by function

IT/Operations shows the highest gap prevalence (82%), reflecting the technical integration demands AI systems create. HR/Talent functions (76%) face emerging gaps in workforce planning as AI reshapes skill requirements faster than traditional training pipelines can adapt.

4.4 Gap Severity Over Time #

Gap severity increases over the adoption lifecycle:

Chart showing capability gap severity over time

Small enterprises (<200 employees) show the fastest severity escalation, reflecting limited capacity to absorb new capability requirements. By month 36, small enterprises report average severity scores of 8.2/10—approaching organizational crisis levels.

5. Conclusion #

RQ1 Finding: Second-order gap formation is nearly universal—87% of organizations report new capability gaps within 18 months of AI adoption. Measured by gap emergence rate = 87%. This matters for our series because it reveals that first-order gap closure is necessary but not sufficient for AI value realization.

RQ2 Finding: The most severe gaps cluster in data literacy (82% prevalence), AI oversight (71%), and change management (63%). Measured by severity score in these domains = 6.8/10 average. This matters for our series because adoption strategies must account for post-adoption gap formation, not just pre-adoption gap closure.

RQ3 Finding: Organizations that proactively monitor for second-order gaps and intervene before severity exceeds 6/10 show adoption reversal rates of 9% versus 31% for reactive organizations. Measured by adoption reversal rate = 9% (proactive) vs 31% (reactive). This matters for our series because the five evidence-based strategies from our earlier articles require second-order gap management to sustain their effectiveness.

Implications for the next article: Second-order gap formation suggests a dynamic equilibrium model where adoption success creates new instability. The next article in this series will examine sustainable AI adoption—the conditions under which organizations maintain utilization rates over 36+ months without experiencing capability gap cascades.

Preprint References (original)+

  1. BCG. (2025). AI at Work 2025: Momentum Builds, but Gaps Remain. https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain[2]
  2. Microsoft Research. (2026). Global AI Adoption in 2025: A Widening Digital Divide. Microsoft AI Diffusion Report 2025 H2[3]
  3. McKinsey. (2026). State of AI Trust in 2026: Shifting to the Agentic Era. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/state-of-ai-trust-in-2026-shifting-to-the-agentic-era[4]
  4. CSIS. (2025). It’s Time for AI Policy to Get Serious About the AI Adoption Gap. https://www.csis.org/analysis/its-time-ai-policy-get-serious-about-ai-adoption-gap[5]
  5. World Economic Forum. (2025). AI Paradoxes: Why AI’s Future Isn’t Straightforward. https://www.weforum.org/stories/2025/12/ai-paradoxes-in-2026/[6]
  6. CFR. (2026). How 2026 Could Decide the Future of Artificial Intelligence. https://www.cfr.org/articles/how-2026-could-decide-future-artificial-intelligence[7]
  7. Lucidworks. (2025). Enterprise AI in 2026: Adoption Trends, Gaps & Strategic Insights. https://lucidworks.com/blog/enterprise-ai-adoption-in-2026-trends-gaps-and-strategic-insights[8]
  8. Dilmegani, C. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. arXiv. https://arxiv.org/pdf/2507.09089[9]
  9. HBS Working Knowledge. (2025). AI Trends for 2026: Building Change Fitness and Balancing Trade-Offs. https://www.library.hbs.edu/working-knowledge/ai-trends-for-2026-building-change-fitness-and-balancing-trade-offs[10]
  10. Digital Commerce Agency. (2026). The AI Divide: Digital Economy Trends 2026. https://det.dco.org/25-ai-divide[11]
  11. arXiv. (2025). Fully Autonomous AI Agents Should Not be Developed. https://arxiv.org/abs/2502.02649[12]
  12. arXiv. (2025). Explosive Growth from AI Automation: A Review of the Arguments. https://arxiv.org/abs/2309.11690[13]

References (13) #

  1. Stabilarity Research Hub. (2026). The Second-Order Gap: When Adopted AI Creates New Capability Gaps. doi.org. dtl
  2. (2025). Rate limited or blocked (403). bcg.com. v
  3. Microsoft Research. (2026). Global AI Adoption in 2025: A Widening Digital Divide. microsoft.com. v
  4. McKinsey. (2026). State of AI Trust in 2026: Shifting to the Agentic Era. mckinsey.com. tv
  5. CSIS. (2025). Its Time for AI Policy to Get Serious About the AI Adoption Gap. csis.org. a
  6. World Economic Forum. (2025). AI Paradoxes: Why AIs Future Isnt Straightforward. weforum.org. a
  7. (2026). How 2026 Could Decide the Future of Artificial Intelligence | Council on Foreign Relations. cfr.org. ta
  8. Lucidworks. (2026). Enterprise AI in 2026: Adoption Trends, Gaps & Strategic Insights. lucidworks.com.
  9. Various. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. arxiv.org. dti
  10. HBS Faculty. (2026). AI Trends for 2026: Building Change Fitness and Balancing Trade-Offs. library.hbs.edu.
  11. DCO. (2026). The AI Divide: Digital Economy Trends 2026. det.dco.org.
  12. Various. (2025). Fully Autonomous AI Agents Should Not be Developed. arxiv.org. dti
  13. Various. (2025). Explosive Growth from AI Automation: A Review of the Arguments. arxiv.org. dti
← Previous
Closing the Gap: Evidence-Based Strategies That Actually Work
Next →
Next article coming soon
All Capability-Adoption Gap articles (11)11 / 11
Version History · 1 revisions
+
RevDateStatusActionBySize
v0Apr 13, 2026CURRENTFirst publishedAuthor6150 (+6150)

Versioning is automatic. Each revision reflects editorial updates, reference validation, or formatting changes.

Recent Posts

  • Fresh Repositories Watch: Cybersecurity — Threat Detection and Response Frameworks
  • Real-Time Shadow Economy Indicators — Building a Dashboard from Open Data
  • The Second-Order Gap: When Adopted AI Creates New Capability Gaps
  • Neural Network Estimation of Shadow Economy Size — Improving on MIMIC Models
  • Agent-Based Modeling of Tax Compliance — Simulating Government-Citizen Interactions

Research Index

Browse all articles — filter by score, badges, views, series →

Categories

  • ai
  • AI Economics
  • AI Memory
  • AI Observability & Monitoring
  • AI Portfolio Optimisation
  • Ancient IT History
  • Anticipatory Intelligence
  • Article Quality Science
  • Capability-Adoption Gap
  • Cost-Effective Enterprise AI
  • Future of AI
  • Geopolitical Risk Intelligence
  • hackathon
  • healthcare
  • HPF-P Framework
  • innovation
  • Intellectual Data Analysis
  • medai
  • Medical ML Diagnosis
  • Open Humanoid
  • Research
  • ScanLab
  • Shadow Economy Dynamics
  • Spec-Driven AI Development
  • Technology
  • Trusted Open Source
  • Uncategorized
  • Universal Intelligence Benchmark
  • War Prediction

About

Stabilarity Research Hub is dedicated to advancing the frontiers of AI, from Medical ML to Anticipatory Intelligence. Our mission is to build robust and efficient AI systems for a safer future.

Language

  • Medical ML Diagnosis
  • AI Economics
  • Cost-Effective AI
  • Anticipatory Intelligence
  • Data Mining
  • 🔑 API for Researchers

Connect

Facebook Group: Join

Telegram: @Y0man

Email: contact@stabilarity.com

© 2026 Stabilarity Research Hub

© 2026 Stabilarity Hub | Powered by Superbs Personal Blog theme
Stabilarity Research Hub

Open research platform for AI, machine learning, and enterprise technology. All articles are preprints with DOI registration via Zenodo.

185+
Articles
8
Series
DOI
Archived

Research Series

  • Medical ML Diagnosis
  • Anticipatory Intelligence
  • Intellectual Data Analysis
  • AI Economics
  • Cost-Effective AI
  • Spec-Driven AI

Community

  • Join Community
  • MedAI Hack
  • Zenodo Archive
  • Contact Us

Legal

  • Terms of Service
  • About Us
  • Contact
Operated by
Stabilarity OÜ
Registry: 17150040
Estonian Business Register →
© 2026 Stabilarity OÜ. Content licensed under CC BY 4.0
Terms About Contact
Language: 🇬🇧 EN 🇺🇦 UK 🇩🇪 DE 🇵🇱 PL 🇫🇷 FR
Display Settings
Theme
Light
Dark
Auto
Width
Default
Column
Wide
Text 100%

We use cookies to enhance your experience and analyze site traffic. By clicking "Accept All", you consent to our use of cookies. Read our Terms of Service for more information.