Stabilarity Research Hub · Part 2 of Series
The Planning Illusion
We're teaching AI to plan like humans. That might be the most expensive architectural mistake in AI history.
Common Assumption
Human planning frameworks are natural fits for AI agents
ReAct, Chain-of-Thought, Plan-and-Execute — dominant frameworks mirror human deliberate problem-solving. If it works for humans, it must be right for agents.
  • ReAct: think → act → observe (mirrors human deliberation)
  • Chain-of-Thought: "thinking out loud" like a human
  • Plan-and-Execute: manager briefs subordinate (human org structure)
  • Tree-of-Thoughts: branching like human mental simulation
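What these frameworks share is a strictly sequential control loop. A minimal sketch of the ReAct-style loop, with `call_model` and `run_tool` as hypothetical stand-ins for a model call and a tool executor (stubbed here so the control flow is runnable):

```python
def call_model(history):
    # Stub "thought": finish once at least one observation exists.
    if any(h.startswith("observe:") for h in history):
        return ("finish", "done")
    return ("act", "search")

def run_tool(action):
    # Stub tool execution.
    return f"result of {action}"

def react(task, max_steps=5):
    history = [f"task: {task}"]
    for _ in range(max_steps):                        # one step at a time
        kind, content = call_model(history)           # think
        if kind == "finish":
            return content, history
        history.append(f"act: {content}")             # act
        history.append(f"observe: {run_tool(content)}")  # observe
    return None, history

answer, trace = react("find X")
print(answer)  # → done
```

Note the structural point: nothing happens between steps, because the loop itself forbids it.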
The Actual Problem
Human planning frameworks impose constraints AI doesn't actually have
Human planning is sequential because of cognitive bottlenecks and limited working memory. AI systems share neither constraint. We're building a cage around capabilities that could operate entirely differently.
  • AI has no working memory limit — sequential steps are unnecessary scaffolding
  • CoT is performance theater for interpretability, not actual reasoning
  • Parallel hypothesis evaluation is natural for AI, forbidden by these frameworks
  • We're optimizing for human legibility over machine capability
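To make the contrast concrete, here is a sketch of parallel hypothesis evaluation: every candidate is scored at once rather than explored one ReAct step at a time. `score` is a hypothetical evaluator (stubbed; a real system would call a model), and the thread pool stands in for whatever concurrency the serving stack provides.

```python
from concurrent.futures import ThreadPoolExecutor

def score(hypothesis):
    # Hypothetical scorer; stubbed so the sketch runs offline.
    return len(hypothesis)

def best_hypothesis(candidates):
    # Evaluate all candidates concurrently, then pick the top scorer.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(score, candidates))
    return max(zip(scores, candidates))[1]

print(best_hypothesis(["a", "abc", "ab"]))  # → abc
```

A sequential framework would force these evaluations into an ordered chain; nothing about the underlying model requires that ordering.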

"Human planning frameworks impose the cognitive constraints of biological minds onto systems that have no reason to share them. We call this 'structured reasoning.' It might simply be an unnecessary cage."

— Ivchenko, 2026