Stabilarity Hub

When Your Research Gets Cited on Medium: A Clarification, a Thank You, and Why AGI Is Closer Than the Pessimists Think

Posted on March 11, 2026 by Admin
Future of AI · Journal Commentary · Article 18 of 22
By Oleh Ivchenko

Abstract #

A personal commentary on an unexpected Medium citation of research on AI infrastructure ROI. Clarifying the nuance between measured economic analysis and pessimistic interpretations, with a reflection on AGI proximity and a thank you to the author who sparked the conversation.

What He Said, and What I Actually Wrote #

It started the way most good surprises do: quietly. I opened my laptop on a Tuesday morning, ran through the usual routine of notifications and RSS, and found a link to a Medium article that had cited my paper on AI infrastructure ROI. Someone I had never met — writing under the handle sergeykleftzovfor — had read my analysis of the capex war in AI infrastructure and built an entire argument around it.

That feeling is hard to describe if you have not experienced it. You write something, you publish it, you share it in the usual places. And then the internet takes it somewhere you did not plan. Someone reads it in a different context, with different priors, and draws a different conclusion. Science, in miniature.

The Medium article[1] — published in the Predict publication under the title “The OpenAI Oracle and Softbank Coalition Is Starting to Crumble” — makes a striking claim. The author argues: “There are no objective grounds for expecting a return on the colossal investments in AI infrastructure. There are only unsubstantiated promises to create AGI that will supposedly change everything.”

That is a strong conclusion. It is also not what my paper said.

In my research on AI infrastructure investment ROI (published March 1, 2026, DOI: 10.5281/zenodo.18821329[2]), I wrote that “current trajectories suggest a multi-year digestion period where infrastructure operators compete intensely for workloads.” Those are different claims. One is about fundamental viability — whether returns are possible at all. The other is about timing and competitive structure — when returns materialize and how the competition for them unfolds.

I want to be direct: neither reading is wrong. The pessimistic interpretation and my measured one are drawing on different signals. The Medium author is looking at announcement-driven hype, coalition instability, and the gap between declared investment and proven revenue. I am looking at utilization rate curves, workload migration patterns, and the historical rhythm of platform-shift economics. We are both watching the same market. We are just standing in different places.

What “Multi-Year Digestion Period” Actually Means #

When I wrote about a multi-year digestion period, I was not hedging. I was describing a well-documented economic pattern.

Every major platform shift produces the same shape: aggressive infrastructure build-out driven by anticipated demand, followed by a period where utilization catches up with capacity. This happened with internet infrastructure in the late 1990s. It happened with cloud computing between 2010 and 2015. It happened with mobile networks. The investment precedes the workload — because you cannot attract the workload without the infrastructure already being in place. That is the structural reality.

What makes the current AI infrastructure wave interesting is the speed and concentration of the capex. Analysts at Futurum Research estimated in early 2026[3] that US hyperscalers alone were committed to a $690 billion infrastructure sprint — a figure that does not include sovereign AI initiatives or second-tier cloud operators. When capital concentrates that rapidly, the digestion period becomes more intense, not less likely. Infrastructure operators compete more aggressively for every workload. Pricing pressure is real. Margin compression is real.

But margin compression is not the same as no return. It is the same economic mechanism that made cloud computing extraordinarily profitable for dominant players while punishing the laggards. The digestion period filters the field. The winners are not the ones who spent the most — they are the ones who converted infrastructure into utilization most efficiently.
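The shape of that digestion period can be sketched with a toy model: capacity is overbuilt relative to current demand, demand compounds year over year, and the question is how long until utilization crosses a healthy threshold. All numbers below are illustrative assumptions for exposition, not figures from the underlying paper.

```python
# Toy model of an infrastructure "digestion period": capacity is built
# ahead of demand, and utilization catches up over time. The overbuild
# ratio, growth rate, and threshold are illustrative assumptions.

def years_to_digest(capacity, demand, demand_growth, target_utilization=0.8):
    """Years until demand/capacity reaches the target utilization."""
    years = 0
    while demand / capacity < target_utilization:
        demand *= 1 + demand_growth
        years += 1
    return years

# Capacity overbuilt 3x relative to current demand, demand growing 40%/yr:
print(years_to_digest(capacity=3.0, demand=1.0, demand_growth=0.40))  # → 3
```

Under these made-up parameters the window is a few years; a larger overbuild or slower demand growth stretches it further, which is the "multi-year" in multi-year digestion.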

A late 2025 analysis of test-time scaling economics (arXiv:2506.04301[4]) framed this precisely: the shift toward reasoning-heavy inference workloads changes the utilization calculus in ways that favor operators who invested early in specialized compute rather than commodity GPU clusters. The infrastructure war is not just about building capacity. It is about building the right capacity for the next generation of workloads — and those workloads are arriving on a different curve than the ones that drove the original investment thesis.

Research published in January 2026[5] on inference-time scaling and System 2 reasoning models makes the point even more sharply: the bottleneck has shifted from acquiring compute to allocating it intelligently. That is not a death knell for infrastructure investment. It is a signal that the competitive moat is moving up the stack — from raw silicon to the software and model architecture layer above it.

On AGI, and Why I Remain Optimistic #

The Medium article’s most provocative claim is that there are only “unsubstantiated promises to create AGI.” I understand the frustration behind that framing. The announcement cycle around AI has been breathless, and the gap between capability claims and demonstrated economic value has been real.

But I think this misreads the trajectory.

Scaling laws have not broken. What has changed is where the scaling is happening. Pretraining compute scaling is running into diminishing returns on certain benchmarks — that much is real. But test-time compute scaling, the ability to extend reasoning chains and use more inference compute to improve output quality, is operating on a different curve entirely. Reasoning models represent a qualitatively different approach to capability development. They are not just bigger versions of the previous generation.

The instability in large coalition investment announcements — the crumbling, as the Medium headline calls it — does not signal that AGI is further away. It may signal the opposite: that we are approaching the phase where the capability gains do not require $500 billion data centers. Research from 2025 on AI scaling laws and efficiency trajectories[6] suggests that the path to higher capability increasingly runs through algorithmic efficiency rather than raw compute multiplication. A world where AGI-class reasoning emerges from more efficient architectures is a world where the infrastructure supercycle was a precondition, not a permanent prerequisite for every future advance.

I am optimistic about AGI proximity not because of hype, but because the underlying capability curves have not reversed. The economic turbulence around infrastructure investment is real. The coalition instability is real. Neither of those things changes what is happening inside the models.

To sergeykleftzovfor: Thank You #

I want to say something directly to the author of the Medium piece.

You read my paper carefully enough to cite it specifically. You used it as evidence for an argument I would not fully endorse — but you engaged with the substance, not the headline. That is more than most readers do.

These cross-platform citations, these moments where a piece of academic research lands in a general publication and gets reinterpreted, are how research actually spreads. Not through the formal citation networks of academic journals alone. Through someone on Medium reading a paper and arguing with it. Through a reader in a different field picking up a reference and following it somewhere the original author did not expect.

That is peer review in the wild. It is imperfect and it is loud and it draws sharper conclusions than the data strictly support — and it is enormously valuable. The Medium reader who clicked through to my paper because of your article is a reader I would not have reached otherwise.

So: thank you. Genuinely. Cite me again. Disagree with me again. That is how this works.

Research Lives When It Gets Challenged #

I will close with a thought about what it means for a piece of research to survive.

A paper that sits unchallenged in a repository is not alive. It is preserved. A paper that gets cited, misread, argued with, extended, and contradicted is doing what research is supposed to do. It is generating conversation. It is forcing clarification. It is making someone, somewhere, think more carefully about a question that matters.

The AI infrastructure question matters enormously. The stakes — economic, technical, geopolitical — are not abstract. Whether the current infrastructure investment cycle produces returns, and on what timeline, and for whom, shapes how the next decade of AI development unfolds. People should be arguing about this. Loudly, in public, with citations.

My paper is at hub.stabilarity.com. If you read it and disagree, post your link.



References #

  1. sergeykleftzovfor (2026). The OpenAI Oracle and Softbank Coalition Is Starting to Crumble. Medium / Predict. https://medium.com/predict/…
  2. Ivchenko, O. (2026). AI Infrastructure Investment ROI — The Capex War Winners and Losers. Stabilarity Research Hub. https://doi.org/10.5281/zenodo.18821329
  3. Futurum Research (2026). AI Capex 2026: The $690B Infrastructure Sprint. https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/
  4. arXiv:2506.04301 (2025). The Cost of Dynamic Reasoning: Demystifying AI Agents and Test-Time Scaling from an AI Infrastructure Perspective. https://arxiv.org/html/2506.04301v2
  5. AI Barcelona Review (2026). The Inference-Time Revolution: Beyond Scaling Laws to the Era of System 2 Reasoning. https://www.aibarcelona.org/2026/01/…
  6. arXiv:2501.02156 (2025). The Race to Efficiency: A New Perspective on AI Scaling Laws. https://arxiv.org/abs/2501.02156

Cite this article: Ivchenko, O. (2026). When Your Research Gets Cited on Medium: A Clarification, a Thank You, and Why AGI Is Closer Than the Pessimists Think. Stabilarity Research Hub. https://doi.org/10.5281/zenodo.18968176

© 2026 Stabilarity Research Hub. Content licensed under CC BY 4.0.