Common Assumption
Human planning frameworks are natural fits for AI agents
The dominant frameworks, ReAct, Chain-of-Thought, and Plan-and-Execute, all mirror deliberate human problem-solving. The implicit claim: if it works for humans, it must be right for agents.
- ReAct: think → act → observe (mirrors human deliberation)
- Chain-of-Thought: "thinking out loud" like a human
- Plan-and-Execute: manager briefs subordinate (human org structure)
- Tree-of-Thoughts: branching like human mental simulation
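What all of these share is a strictly sequential control loop. A minimal sketch of the ReAct-style pattern (the `llm` stub and `TOOLS` table below are hypothetical stand-ins, not any particular library's API):

```python
# Minimal sketch of the ReAct pattern: a strictly sequential
# think -> act -> observe loop. llm() and TOOLS are hypothetical
# stand-ins for a model call and a tool registry.

TOOLS = {
    "lookup": lambda q: {"paris": "population 2.1M"}.get(q.lower(), "no result"),
}

def llm(transcript: str) -> str:
    # Stub "model": picks the next step from the transcript so far.
    if "Observation:" in transcript:
        return "Final Answer: population 2.1M"
    return "Action: lookup[Paris]"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                          # think
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer: ").strip()
        tool, arg = step.removeprefix("Action: ").rstrip("]").split("[", 1)
        observation = TOOLS[tool](arg)                  # act
        transcript += f"Observation: {observation}\n"   # observe
    return "gave up"

print(react("What is the population of Paris?"))  # -> population 2.1M
```

Note the shape: one thought, one action, one observation per iteration. The loop itself, not the model, imposes the serialization.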
The Actual Problem
Human planning frameworks impose constraints AI doesn't actually have
Human planning is sequential because of cognitive bottlenecks and limited working memory. AI agents share neither constraint. We're building a cage around capabilities that could operate entirely differently.
- An LLM's "working memory" is its context window, vastly larger than a human's, so step-by-step serialization is scaffolding the model doesn't need
- CoT traces are written for human legibility; they need not faithfully reflect the model's actual computation
- Evaluating many hypotheses in parallel is cheap for AI, yet these sequential loops rule it out
- We're optimizing for human legibility over machine capability
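To make the third point concrete, here is a sketch of scoring many candidate plans concurrently rather than walking one branch at a time. The `score` function and plan names are hypothetical placeholders for a model call that rates a plan:

```python
# Sketch: parallel hypothesis evaluation. Instead of the sequential
# loop expanding one thought per step, all candidate plans are
# scored at once. score() is a hypothetical stand-in for a model call.
from concurrent.futures import ThreadPoolExecutor

def score(plan: str) -> float:
    # Placeholder ratings; a real system would call the model here.
    return {"retry with backoff": 0.9, "fail fast": 0.4, "queue and batch": 0.7}[plan]

def best_plan(candidates: list[str]) -> str:
    # Every hypothesis is evaluated concurrently; nothing in the
    # model requires serializing them, only the framework does.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(score, candidates))
    return candidates[scores.index(max(scores))]

print(best_plan(["retry with backoff", "fail fast", "queue and batch"]))
```

The point of the sketch is structural: once evaluation is a pure function of a candidate, fan-out is trivial, and the think-act-observe loop's serialization is revealed as a design choice, not a necessity.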