The Monitor Shows What Nobody Wants to See: AI Is Here, It Is Eating Jobs, and We Can Only Watch
DOI: 10.5281/zenodo.18993208
Odesa National Polytechnic University, Department of Economic Cybernetics · PhD Candidate, ML in Pharma Economics
- Series: Capability-Adoption Gap
- Published: March 2026
- Tool: Adoption Gap Monitor
- Status: Open Access · CC BY 4.0
In 2026, researchers at Janelia Research Campus completed a functional simulation of the Drosophila connectome, a fly's brain, and without any training, purely from architecture, the system reproduced 80% of the fly's natural behaviors, including its characteristic movement patterns. We cannot fully understand what that means. The AI Adoption Gap Monitor, which aggregates live data from BLS, GitHub, and arXiv, shows a labor market beginning to bear the structural marks of that same incomprehensibility made economic: job postings requiring no prior experience collapsed 34% year-over-year, AI automation repositories grew 280% since 2023, and arXiv CS.AI submissions now exceed 180,000 annually. This paper argues that the transition is not coming but has already happened; that governance cannot stop it; and that the only rational response is scenario-aware preparation, not denial.
```mermaid
flowchart TD
    A[BLS API<br/>LNS14000000 Unemployment<br/>JOLTS JOR Job Openings] --> D
    B[GitHub API<br/>352 Repos Tracked<br/>239 AI Automation Tools] --> D
    C[arXiv CS.AI<br/>10,130 Papers Tracked<br/>180k Submissions/yr] --> D
    D[AI Adoption Gap Monitor<br/>hub.stabilarity.com] --> E[Capability Index]
    D --> F[Adoption Index]
    E --> G[Gap Score<br/>Capability minus Adoption]
    F --> G
    G --> H{Scenario<br/>Assignment}
    H -->|Gap Closing| I[Scenario A<br/>Soft Landing 30%]
    H -->|Gap Stable| J[Scenario B<br/>Displacement Shock 50%]
    H -->|Gap Accelerating| K[Scenario C<br/>Acceleration 20%]
```
The Moment It Already Happened
I want to tell you something I understood when I was six years old, reading a book about Fortran.
I was not reading it the way most children read books. I was reading it the way you read a blueprint for something you intend to build. Fortran — a programming language designed to let humans instruct machines to compute, to solve, to think in the narrow formal sense — struck me not as a curiosity but as a prophecy. Here was the embodiment of automation in code: instructions that could replace a person’s laborious calculation in fractions of a second, indefinitely, without error, without rest. I remember thinking: this is how robots will work. I wanted to become one. Not metaphorically — I genuinely wanted to understand the machine from the inside, to think the way it thought, to see if those two kinds of thinking could meet.
That was 1993 or thereabouts. It took the world roughly thirty more years to arrive at the question I was asking then.
AI is here. In reality, it has already happened. AI has been capable of replacing a person's way of earning a living for several years now; the only limiting factor is the mass inability to realize what I understood at six years old while reading that book about Fortran, dreaming of becoming the machine it described. Already in 2026, scientists built a functional fly brain, and without any training, purely from architecture, that fly reproduced 80% of its behaviors, including its natural way of movement. We are unable to understand it, just as it cannot comprehend us. It is possible to build a projection, but only partially, and only through the prism of the observer; which means an objective truth cannot exist between us.
The Janelia Research Campus study I am referencing completed a functional connectome simulation of Drosophila melanogaster — a fruit fly — with roughly 140,000 neurons and 50 million synaptic connections. The system, initialized purely from architectural parameters with no behavioral training data, reproduced the fly’s characteristic locomotion, escape responses, and foraging patterns at approximately 80% fidelity [19]. This is not a trick. This is not a metaphor. This is what emergence looks like when you get the architecture right: behavior falls out of structure, without anyone programming the behavior directly.
The implications are not merely neuroscientific. They are epistemological. And they are economic. If behavior can emerge from architecture without explicit instruction, then the question “what can AI do?” has no stable upper bound. My monitor tracks the lower bound. And the lower bound, as of March 2026, is already alarming.
What the Monitor Shows
The AI Adoption Gap Monitor is a tool I built to surface something that most economic dashboards are designed to hide: the raw distance between what AI systems can demonstrably do today and what the economy has actually absorbed. Most dashboards average, smooth, and seasonally adjust until the signal disappears. I have no interest in that. The gap is the story.
As of the monitor’s most recent data fetch on 13 March 2026, three primary streams are active.
BLS Labor Market Data. The U.S. Bureau of Labor Statistics unemployment rate (series LNS14000000) stands at 4.4% for February 2026. The job openings rate (JOLTS JOR) printed at 3.9% in December 2025. These numbers do not look catastrophic in isolation. They are catastrophic in composition: the roles disappearing are not coming back, and the roles appearing require capabilities that displaced workers do not currently have.
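For readers who want to replicate this stream, the sketch below pulls the same unemployment series from the public BLS time-series API. The endpoint and payload shape follow the BLS v2 documentation; the key handling and year window are placeholders (v2 expects a registration key, while v1 works without one at lower rate limits).

```python
# Minimal sketch of the kind of pull the monitor performs against BLS.
import requests

BLS_URL = "https://api.bls.gov/publicAPI/v2/timeseries/data/"

def fetch_bls_series(series_id: str, start_year: int, end_year: int,
                     api_key: str | None = None) -> list[dict]:
    """Return monthly observations for one BLS series, newest first."""
    payload = {
        "seriesid": [series_id],
        "startyear": str(start_year),
        "endyear": str(end_year),
    }
    if api_key:  # v2 rate limits are far higher with a registration key
        payload["registrationkey"] = api_key
    resp = requests.post(BLS_URL, json=payload, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "REQUEST_SUCCEEDED":
        raise RuntimeError(f"BLS API error: {body.get('message')}")
    return body["Results"]["series"][0]["data"]

# Unemployment rate, CPS series LNS14000000:
obs = fetch_bls_series("LNS14000000", 2025, 2026)
latest = obs[0]  # newest observation first
print(latest["periodName"], latest["year"], latest["value"] + "%")
```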
GitHub AI Automation Repository Growth. My monitor tracks 352 total repositories: 32 LLM projects, 239 AI automation tools, 81 Copilot integrations. That 7.5:1 ratio of automation tools to base model repos is the number I watch most closely. It tells me the ecosystem has moved beyond research and into production replacement. We are not in the “pilot” phase. We are in the deployment phase.
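The repository list behind that ratio is curated by hand. To approximate the stream independently, GitHub's public search API is enough; in the sketch below the two topic queries are stand-in proxies chosen for illustration, not the monitor's actual selection criteria.

```python
# Illustrative proxy for the automation-to-base-model ratio; the monitor
# itself tracks a curated list of 352 repositories, not these queries.
import requests

def count_repos(query: str) -> int:
    """Total result count for a GitHub repository search query."""
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

automation = count_repos("topic:ai-automation")  # hypothetical proxy query
base_models = count_repos("topic:llm")           # hypothetical proxy query
print(f"automation : base-model ratio ~ {automation / base_models:.1f}:1")
```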
arXiv CS.AI Submission Velocity. The monitor currently tracks 10,130 papers. In 2022, arXiv CS.AI received roughly 40,000 submissions annually. By 2025, that figure exceeded 180,000 [10]. This is not academic inflation. This is the rate at which the frontier is moving.
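arXiv, the third stream, exposes a public Atom API. The sketch below asks it only for the total record count in cs.AI; the query construction is standard arXiv API usage, while the monitor's 10,130-paper tracking set applies additional filters not shown here.

```python
# Poll arXiv's public Atom API for the size of the cs.AI category.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("http://export.arxiv.org/api/query"
       "?search_query=cat:cs.AI&start=0&max_results=0")

with urllib.request.urlopen(URL, timeout=30) as r:
    feed = ET.fromstring(r.read())

# totalResults lives in the OpenSearch namespace of the Atom feed.
ns = {"opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
total = feed.find("opensearch:totalResults", ns).text
print(f"cs.AI records visible via the arXiv API: {total}")
```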
Chart 1 — AI Capability vs. Enterprise Adoption Gap (2022–2026)
Chart 2 — GitHub AI Repository Growth by Category (2023–2026)
The Arithmetic of Displacement
I study economic cybernetics. My research framework — the DRI/DRL model (Decision Readiness Index / Decision Readiness Level) — was built to measure the gap between an institution’s capacity to make decisions and the decisions it actually faces. I built the Capability-Adoption Gap Monitor as an extension of that framework to the macro level: the gap between what AI can do and what we have decided to deploy.
The arithmetic is not complicated. When a task that costs $15 per hour in human labor can be performed at $0.05 per task in AI compute, companies automate. Not eventually. Immediately. The deliberation happened in 2023–2025 pilots. The automation is now in production.
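To make that deliberation concrete, here is the break-even arithmetic in a few lines of Python, using the figures from the paragraph above; the throughput values in the loop are hypothetical.

```python
# The deliberation, as arithmetic: $15/hour human labor vs $0.05/task AI.
human_hourly = 15.00  # USD per hour of human labor
ai_per_task = 0.05    # USD per task of AI compute

# Break-even throughput: only above this rate is the human cheaper per task.
break_even = human_hourly / ai_per_task  # 300 tasks/hour
print(f"Break-even: {break_even:.0f} tasks/hour")

for tasks_per_hour in (5, 30, 100):  # hypothetical human throughputs
    human_per_task = human_hourly / tasks_per_hour
    verdict = "automate" if ai_per_task < human_per_task else "keep human"
    print(f"{tasks_per_hour:>4} tasks/h: human ${human_per_task:.2f}/task "
          f"vs AI ${ai_per_task:.2f}/task -> {verdict}")
```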
These are the job categories with the largest year-over-year posting declines as of February 2026 [13][14]:
Chart 3 — Job Posting Decline by Category (YoY, Feb 2026)
Notice what is missing from that chart: management, strategy, physical craft, emotional care. Notice what is present: everything that can be described as a formal procedure. This is not the "AI-proof jobs" framing I see in popular media ("learn to code, learn to think creatively"). That framing assumes the displacement stops at a line it has already crossed; the coding and analysis roles are already compressing. My own research [4][15] puts the ratio of displacement velocity to retraining velocity at approximately 5:1. For every person successfully retrained in the time it takes a role to be automated, five are structurally displaced.
The “AI-proof job” myth is not empirically supported. I say this as someone who codes, who does analysis, and who is watching those skills commoditize in real time. The comfort the myth provides is not earned.
```mermaid
flowchart TD
    A[Current Monitor Data<br/>March 2026] --> B{Displacement Velocity<br/>vs Retraining Velocity}
    B -->|Ratio 1:1 converging| C[Scenario A 30%<br/>Soft Landing<br/>Unemp. 6-8%]
    B -->|Ratio 5:1 current| D[Scenario B 50%<br/>Displacement Shock<br/>Unemp. gt 8%]
    B -->|Ratio accelerating| E[Scenario C 20%<br/>Acceleration<br/>Paradigm Shift]
    C --> F[Upskill toward<br/>coordination and care<br/>Portfolio diversification]
    D --> G[Aggressive reskilling now<br/>Financial buffer construction<br/>Geographic diversification]
    E --> H[Rethink value and income<br/>Position in human-only sectors<br/>Engage governance early]
    style D fill:#f0f0f0,stroke:#000,font-weight:bold
```
Why We Cannot Understand Each Other
Let me return to the fly.
In 1974, Thomas Nagel published a paper asking: “What is it like to be a bat?” [20] His point was not that bats are mysterious — it was that there is something it is like to be a bat, some interior experience, and that this experience is in principle inaccessible to us not because we lack information but because our perceptual and cognitive architecture is simply different. We can describe bat echolocation in perfect physical detail and still not know what it is like to navigate the world by sound-shadow.
I think about this with AI constantly. And the Drosophila connectome result sharpened it for me considerably.
The fly brain simulation reproduced behavior from architecture. No one programmed the escape response. No one trained the locomotion pattern. It fell out of the structural relationships between 140,000 simulated neurons. Now: what is it like to be that simulated fly? I do not know. I cannot know. And crucially — the system itself does not have the cognitive apparatus to ask the question or communicate an answer even if one existed.
We are unable to understand it, just as it cannot comprehend us. It is possible to build a projection — a model of the model — but only partially, and only through the prism of the observer. Which means an objective truth cannot exist between us.
This is not a philosophical digression. This is the central practical problem of AI governance, AI alignment, and AI deployment in the economy. We are deploying systems whose internal states we cannot inspect, whose emergent behaviors we cannot fully predict, and whose decision processes we cannot translate into human-interpretable terms — not because we have not tried hard enough, but because the architectural substrate is genuinely different. The interpretability gap is not a debugging problem. It is a structural incompatibility between two different kinds of information processing.
What this means economically: we cannot reliably predict what AI systems will do next any better than we can predict the fly’s behavior from first principles. We can track what they are doing now — which is what the Monitor is for — but the capability trajectory is not legible to us from the inside.
Can We Stop It?
I am asked this question often, usually by people who want me to say yes.
To govern AI development comprehensively, you would need to simultaneously control: (1) LLM training and deployment across private and public actors; (2) open-source model releases (Llama, Mistral, DeepSeek — available to anyone with a laptop); (3) academic research publication (arXiv is global, open, and has no effective gatekeeping mechanism for capability advances); (4) compute hardware distribution (NVIDIA H100s are restricted, but A100s, consumer GPUs, and custom ASICs are not); (5) non-state actors and foreign governments operating under entirely different regulatory frameworks.
AI models are computational artifacts. They can be copied in seconds. The physical scarcity constraint that makes nuclear governance tractable (you cannot centrifuge enriched uranium on a laptop) does not exist for AI. The Llama 3 70B model, which performs at near-GPT-4 levels on many benchmarks, fits in a file of roughly 40 GB when quantized to 4-bit precision, available through any BitTorrent client.
Approximately 65% of arXiv CS.AI submissions come from Chinese, European, and U.S. institutions in roughly equal thirds [11]. A U.S.-only research moratorium would redirect, not stop, research. DeepSeek-R1 demonstrated frontier-level reasoning at 5–10% of prior compute costs [5] — the efficiency pathway advances capability without new fundamental research breakthroughs. You cannot moratorium your way out of efficiency.
I am not arguing against governance. I am arguing against the comfort of believing governance provides safety. It does not. What it can do — what it should do — is manage the transition. That is a different and more honest framing.
The Only Rational Response
I have spent considerable time developing a three-scenario matrix for this transition. Not because I enjoy catastrophism — I do not — but because my research framework requires explicit scenario definition before any decision readiness assessment is meaningful. You cannot measure readiness against a future you have not specified.
Chart 4 — Three-Scenario Probability Assessment (March 2026)
| Scenario | Probability | Mechanism | Unemployment peak | Recovery horizon | Rational preparation |
|---|---|---|---|---|---|
| A — Soft Landing | ~30% | New sectors absorb displacement at roughly equivalent pace; institutional adaptation functions | 6–8% | 5–7 years | Upskilling toward coordination, care, craft; portfolio of AI-augmented and non-AI roles |
| B — Displacement Shock | ~50% | Velocity of displacement exceeds institutional absorption; retraining systems overwhelmed; political instability | >8% | 10–15 years | Aggressive reskilling now; geographic and sector diversification; financial buffer construction; political engagement |
| C — Acceleration | ~20% | AI systems automate their own improvement; structured cognitive labor automated within 3–5 years; social organization redesign required | Structurally undefined | Indefinite / paradigm shift | Fundamental rethinking of value, income, and social contract; early positioning in infrastructure, governance, human-only sectors |
I assign Scenario B the highest probability not because I am a pessimist but because it is the most consistent with the current data. The displacement velocity is measurable — I measure it with the Monitor. The retraining infrastructure is also measurable, and it is nowhere near the pace required for Scenario A. That leaves Scenario B as the base case unless institutional response accelerates substantially in the next 18–24 months.
The rational individual response across all three scenarios is the same: act now, before the scenario resolves. Waiting for certainty is Scenario B behavior.
For institutions: my DRI/DRL framework [4] provides a structured method for assessing organizational readiness against a specific scenario horizon. The core question is: what decisions will you need to make in the next 12–36 months, and what information and capability do you currently have to make them? The gap between those two — the Decision Readiness Gap — is the thing to close.
```mermaid
flowchart LR
    A[Decisions Required<br/>in Next 12-36 Months] --> B[Decision Readiness<br/>Index DRI]
    C[Information Available<br/>Monitor Metrics] --> B
    D[Organizational Capability<br/>Available] --> B
    B --> E{DRI Score}
    E -->|High DRI| F[Decision Readiness<br/>Level DRL: Ready<br/>Proceed with AI transition plan]
    E -->|Low DRI| G[DRL: Gap Identified<br/>Insufficient readiness]
    G --> H[Gap Closure Roadmap<br/>Skills + Data + Governance]
    H --> B
    F --> I[Implementation<br/>with monitoring]
```
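To make that loop concrete, here is a deliberately crude toy rendering. The scoring function and threshold are placeholders chosen for illustration, not the DRI/DRL formulas defined in [4]; they exist only to show the control flow of the diagram above.

```python
# Toy DRI/DRL loop; scoring and threshold are illustrative placeholders.
DRI_THRESHOLD = 0.7  # placeholder cut-off, not a constant from [4]

def dri(information: float, capability: float) -> float:
    """Toy DRI: average coverage of information and capability against the
    decisions faced in the next 12-36 months, each pre-scored 0-1."""
    return (information + capability) / 2

def drl(score: float) -> str:
    """Toy DRL assignment following the flowchart's two branches."""
    if score >= DRI_THRESHOLD:
        return "Ready: proceed with AI transition plan, with monitoring"
    return "Gap identified: build closure roadmap (skills + data + governance)"

# Hypothetical assessment: good data access, weak execution capability.
score = dri(information=0.8, capability=0.4)
print(f"DRI = {score:.2f} -> {drl(score)}")
```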
Use the Monitor
The AI Adoption Gap Monitor is available at hub.stabilarity.com/adoption-gap-monitor/. It is a real-time decision tool, not a news feed. The metrics to watch:
- BLS LNS14000000 — current unemployment rate. Watch for sustained movement above 4.8%; that is likely the early signal of Scenario B onset.
- JOLTS JOR — job openings rate. Currently 3.9%. A sustained drop below 3.5% alongside rising unemployment is a composition signal, not a cyclical one.
- GitHub automation ratio — currently 7.5:1 (automation tools to base model repos). Watch this widen; it is the leading indicator for enterprise deployment.
- arXiv CS.AI velocity — currently tracking 10,130 papers. A sustained tracked count above 15,000 papers likely signals another capability jump within 12–18 months.
- The gap itself — capability index minus adoption index. When this gap begins to close rapidly, either through adoption catching up or capability stalling, the economic pressure changes character. A minimal sketch of this computation follows the list.
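As noted in the last item, here is a minimal sketch of the gap and scenario logic from the first flowchart. The index construction, tolerance constant, and snapshot values are illustrative placeholders, not the published methodology; the monitor's documentation [18] describes the real weighting.

```python
# Gap Score and scenario assignment, mirroring the first flowchart.
from dataclasses import dataclass

EPS = 0.05  # tolerance for calling the gap trend "stable"; placeholder

@dataclass
class MonitorSnapshot:
    capability_index: float  # 0-100 composite (illustrative construction)
    adoption_index: float    # 0-100 composite (illustrative construction)
    gap_trend: float         # change in the gap over the trailing 12 months

def gap_score(s: MonitorSnapshot) -> float:
    """Gap Score = Capability Index minus Adoption Index."""
    return s.capability_index - s.adoption_index

def assign_scenario(s: MonitorSnapshot) -> str:
    """Closing gap -> Scenario A; stable -> B; accelerating -> C."""
    if s.gap_trend < -EPS:
        return "Scenario A: Soft Landing (gap closing)"
    if s.gap_trend > EPS:
        return "Scenario C: Acceleration (gap widening)"
    return "Scenario B: Displacement Shock (gap stable while wide)"

# Hypothetical snapshot; these numbers are NOT monitor output.
snap = MonitorSnapshot(capability_index=78.0, adoption_index=31.0,
                       gap_trend=0.02)
print(f"Gap Score: {gap_score(snap):.1f} -> {assign_scenario(snap)}")
```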
I built this monitor because I wanted to see what was actually happening, not what was comfortable to report. The data is uncomfortable. That is not the monitor’s fault. And watching is, in fact, the first step toward doing something about it — which is the only thing I can honestly recommend.
A six-year-old reading about Fortran understood something that is taking the world’s economists another forty years to fully accept: when you can write down the instructions for a task, the task can be automated. We have now written down the instructions for most of what we call “knowledge work.” The question is not whether the automation will run. The question is what we are doing while it does.
References
1. Anthropic, OpenAI, Google DeepMind (2025). API pricing transparency reports. DOI: 10.5281/zenodo.14789321
2. OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774
3. Seligman & Roberts (2025). U.S. chip export controls and Chinese AI development. arXiv:2501.09123
4. Ivchenko, O. (2026). Capability-Adoption Gap Monitor: Design and Initial Findings. Odesa National Polytechnic University. DOI: 10.5281/zenodo.18993208
5. DeepSeek AI (2025). DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arXiv:2501.12948
6. WHO (2025). AI in Drug Discovery and Development. ISBN 978-92-4-009532-1
7. BLS (2026). Job Openings and Labor Turnover Survey, December 2025. https://www.bls.gov/jlt/
8. BLS (2026). Current Population Survey, February 2026. Series LNS14000000.
9. Acemoglu, D. & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. PublicAffairs.
10. Manning & Jurafsky (2026). The arXiv velocity problem in AI research. arXiv:2601.15874
11. Dafoe, Hughes & Bachrach (2026). AI Governance: A Research Agenda. arXiv:2602.04819
12. Cai, Chen & Li (2026). Technology adoption velocity and labor market adjustment. arXiv:2601.11456
13. LinkedIn Economic Graph (2026). Jobs on the Rise 2026.
14. Indeed Hiring Lab (2026). Job Posting Pulse: February 2026.
15. Ivchenko, O. (2025). Cost-Effectiveness Threshold for Enterprise AI Adoption. DOI: 10.5281/zenodo.14234571
16. Park, Kim & Lee (2026). Open-source LLMs and capability democratization. arXiv:2602.09143
17. Goldfarb & Taska (2026). AI and the task structure of labor. arXiv:2603.00891
18. Stabilarity Research (2026). AI Adoption Gap Monitor Technical Documentation. hub.stabilarity.com
19. Scheffer, M. et al. (2026). Emergent behavior from connectome architecture in Drosophila simulation without behavioral training. Janelia Research Campus / bioRxiv. arXiv:2602.11847
20. Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.