Abstract #
A personal commentary on an unexpected Medium citation of my research on AI infrastructure ROI. It distinguishes measured economic analysis from pessimistic interpretation, reflects on how close AGI may be, and thanks the author who sparked the conversation.
What He Said, and What I Actually Wrote #
It started the way most good surprises do: quietly. I opened my laptop on a Tuesday morning, ran through the usual routine of notifications and RSS, and found a link to a Medium article that had cited my paper on AI infrastructure ROI. Someone I had never met — writing under the handle sergeykleftzovfor — had read my analysis of the capex war in AI infrastructure and built an entire argument around it.
That feeling is hard to describe if you have not experienced it. You write something, you publish it, you share it in the usual places. And then the internet takes it somewhere you did not plan. Someone reads it in a different context, with different priors, and draws a different conclusion. Science, in miniature.
The Medium article[1] — published in the Predict publication under the title “The OpenAI Oracle and Softbank Coalition Is Starting to Crumble” — makes a striking claim. The author argues: “There are no objective grounds for expecting a return on the colossal investments in AI infrastructure. There are only unsubstantiated promises to create AGI that will supposedly change everything.”
That is a strong conclusion. It is also not what my paper said.
In my research on AI infrastructure investment ROI (published March 1, 2026, DOI: 10.5281/zenodo.18821329[2]), I wrote that “current trajectories suggest a multi-year digestion period where infrastructure operators compete intensely for workloads.” Those are different claims. One is about fundamental viability — whether returns are possible at all. The other is about timing and competitive structure — when returns materialize and how the competition for them unfolds.
I want to be direct: neither reading is wrong. The pessimistic interpretation and my measured one are drawing on different signals. The Medium author is looking at announcement-driven hype, coalition instability, and the gap between declared investment and proven revenue. I am looking at utilization rate curves, workload migration patterns, and the historical rhythm of platform-shift economics. We are both watching the same market. We are just standing in different places.
What “Multi-Year Digestion Period” Actually Means #
When I wrote about a multi-year digestion period, I was not hedging. I was describing a well-documented economic pattern.
Every major platform shift produces the same shape: aggressive infrastructure build-out driven by anticipated demand, followed by a period where utilization catches up with capacity. This happened with internet infrastructure in the late 1990s. It happened with cloud computing between 2010 and 2015. It happened with mobile networks. The investment precedes the workload — because you cannot attract the workload without the infrastructure already being in place. That is the structural reality.
What makes the current AI infrastructure wave interesting is the speed and concentration of the capex. Analysts at Futurum Research estimated in early 2026[3] that US hyperscalers alone were committed to a $690 billion infrastructure sprint — a figure that does not include sovereign AI initiatives or second-tier cloud operators. When capital concentrates that rapidly, the digestion period becomes more intense, not less likely. Infrastructure operators compete more aggressively for every workload. Pricing pressure is real. Margin compression is real.
But margin compression is not the same as no return. It is the same economic mechanism that made cloud computing extraordinarily profitable for dominant players while punishing the laggards. The digestion period filters the field. The winners are not the ones who spent the most — they are the ones who converted infrastructure into utilization most efficiently.
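The shape of a digestion period can be sketched with a deliberately simple toy model (every number below is an illustrative assumption, not an estimate from my paper or from any market data): capacity is built up front, demand grows along its own curve, and the digestion period is the span of years where utilization stays below a break-even threshold.

```python
# Toy model of an infrastructure digestion period.
# All figures are illustrative assumptions, not market estimates.

def digestion_period(capacity, demand, breakeven_util=0.7):
    """Return the year indices where utilization sits below break-even."""
    return [t for t, (c, d) in enumerate(zip(capacity, demand))
            if d / c < breakeven_util]

# Capacity is front-loaded (built ahead of demand); demand compounds later.
capacity = [100, 180, 220, 240, 250, 255]   # cumulative capacity units
demand   = [40,  70,  110, 160, 210, 250]   # workload units per year

under_utilized = digestion_period(capacity, demand)
print("Digestion years:", under_utilized)
for t in under_utilized:
    print(f"  year {t}: utilization {demand[t] / capacity[t]:.0%}")
```

The point the sketch makes is structural, not numerical: because the capacity curve is front-loaded, early-year utilization is low by construction, and the "digestion" window closes only once demand compounds into the installed base.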
A late 2025 analysis of test-time scaling economics (arXiv:2506.04301[4]) framed this precisely: the shift toward reasoning-heavy inference workloads changes the utilization calculus in ways that favor operators who invested early in specialized compute rather than commodity GPU clusters. The infrastructure war is not just about building capacity. It is about building the right capacity for the next generation of workloads — and those workloads are arriving on a different curve than the ones that drove the original investment thesis.
Research published in January 2026[5] on inference-time scaling and System 2 reasoning models makes the point even more sharply: the bottleneck has shifted from acquiring compute to allocating it intelligently. That is not a death knell for infrastructure investment. It is a signal that the competitive moat is moving up the stack — from raw silicon to the software and model architecture layer above it.
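What "allocating compute intelligently" can mean in practice is easiest to see in a minimal sketch. The greedy budgeted allocation below is my own illustration, not an algorithm from the cited papers, and the quality curve is a hypothetical stand-in with diminishing returns: each extra unit of inference compute goes to whichever query currently offers the highest marginal gain.

```python
import heapq

def quality(difficulty, units):
    """Hypothetical quality curve: harder queries benefit more from extra
    compute, with diminishing returns per additional unit."""
    return difficulty * (1 - 0.5 ** units)

def allocate(difficulties, budget):
    """Greedily spend a fixed inference-compute budget across queries,
    one unit at a time, always on the highest marginal gain."""
    alloc = [0] * len(difficulties)
    # Max-heap via negated gains: marginal gain of the NEXT unit per query.
    heap = [(-(quality(d, 1) - quality(d, 0)), i)
            for i, d in enumerate(difficulties)]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        d, u = difficulties[i], alloc[i]
        heapq.heappush(heap, (-(quality(d, u + 1) - quality(d, u)), i))
    return alloc

# Three queries of varying difficulty, six units of compute to spend.
print(allocate([1.0, 4.0, 2.0], budget=6))
```

Under these toy assumptions the hard query soaks up the most compute but not all of it, which is the whole argument in miniature: the moat is in the allocation policy, not in owning more raw units.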
On AGI, and Why I Remain Optimistic #
The Medium article’s most provocative claim is that there are only “unsubstantiated promises to create AGI.” I understand the frustration behind that framing. The announcement cycle around AI has been breathless, and the gap between capability claims and demonstrated economic value has been real.
But I think this misreads the trajectory.
Scaling laws have not broken. What has changed is where the scaling is happening. Pretraining compute scaling is running into diminishing returns on certain benchmarks — that much is real. But test-time compute scaling, the ability to extend reasoning chains and use more inference compute to improve output quality, is operating on a different curve entirely. Reasoning models represent a qualitatively different approach to capability development. They are not just bigger versions of the previous generation.
The instability in large coalition investment announcements — the crumbling, as the Medium headline calls it — does not signal that AGI is further away. It may signal the opposite: that we are approaching the phase where the capability gains do not require $500 billion data centers. Research from 2025 on AI scaling laws and efficiency trajectories[6] suggests that the path to higher capability increasingly runs through algorithmic efficiency rather than raw compute multiplication. A world where AGI-class reasoning emerges from more efficient architectures is a world where the infrastructure supercycle was a precondition, not a permanent prerequisite for every future advance.
I am optimistic about AGI proximity not because of hype, but because the underlying capability curves have not reversed. The economic turbulence around infrastructure investment is real. The coalition instability is real. Neither of those things changes what is happening inside the models.
To sergeykleftzovfor: Thank You #
I want to say something directly to the author of the Medium piece.
You read my paper carefully enough to cite it specifically. You used it as evidence for an argument I would not fully endorse — but you engaged with the substance, not the headline. That is more than most readers do.
These cross-platform citations, these moments where a piece of academic research lands in a general publication and gets reinterpreted, are how research actually spreads. Not through the formal citation networks of academic journals alone. Through someone on Medium reading a paper and arguing with it. Through a reader in a different field picking up a reference and following it somewhere the original author did not expect.
That is peer review in the wild. It is imperfect and it is loud and it draws sharper conclusions than the data strictly support — and it is enormously valuable. The Medium reader who clicked through to my paper because of your article is a reader I would not have reached otherwise.
So: thank you. Genuinely. Cite me again. Disagree with me again. That is how this works.
Research Lives When It Gets Challenged #
I will close with a thought about what it means for a piece of research to survive.
A paper that sits unchallenged in a repository is not alive. It is preserved. A paper that gets cited, misread, argued with, extended, and contradicted is doing what research is supposed to do. It is generating conversation. It is forcing clarification. It is making someone, somewhere, think more carefully about a question that matters.
The AI infrastructure question matters enormously. The stakes — economic, technical, geopolitical — are not abstract. Whether the current infrastructure investment cycle produces returns, and on what timeline, and for whom, shapes how the next decade of AI development unfolds. People should be arguing about this. Loudly, in public, with citations.
My paper is at hub.stabilarity.com. If you read it and disagree, post your link.
References #
- [1] sergeykleftzovfor (2026). The OpenAI Oracle and Softbank Coalition Is Starting to Crumble. Medium / Predict. https://medium.com/predict/…
- [2] Ivchenko, O. (2026). AI Infrastructure Investment ROI — The Capex War Winners and Losers. Stabilarity Research Hub. https://doi.org/10.5281/zenodo.18821329
- [3] Futurum Research (2026). AI Capex 2026: The $690B Infrastructure Sprint. https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/
- [4] arXiv:2506.04301 (2025). The Cost of Dynamic Reasoning: Demystifying AI Agents and Test-Time Scaling from an AI Infrastructure Perspective. https://arxiv.org/html/2506.04301v2
- [5] AI Barcelona Review (2026). The Inference-Time Revolution: Beyond Scaling Laws to the Era of System 2 Reasoning. https://www.aibarcelona.org/2026/01/…
- [6] arXiv:2501.02156 (2025). The Race to Efficiency: A New Perspective on AI Scaling Laws. https://arxiv.org/abs/2501.02156
Cite this article: Ivchenko, O. (2026). When Your Research Gets Cited on Medium: A Clarification, a Thank You, and Why AGI Is Closer Than the Pessimists Think. Stabilarity Research Hub. https://doi.org/10.5281/zenodo.18968176