1 Odesa National Polytechnic University (ONPU)
- Type
- Academic Research Series
- Status
- Ongoing · 2 articles · 2026–
- Tool
- Stabilarity Research Hub
Every major AI system built today shares a hidden architectural assumption: that intelligence should look, think, and plan like a human. This research series systematically questions that assumption — not to dismiss what we have built, but to understand what we might be giving up by defaulting to the mirror. From anthropomorphic bias baked into training data to human planning frameworks imposed on AI agents, this series traces how our own cognitive architecture shapes and potentially limits what we build. The investigation examines the civilizational choices embedded in our design decisions and explores what genuinely non-human AI might look like.
Idea and Motivation
Human-like AI has become the default path in the field. Foundation models are trained on human text, shaped by human feedback, and evaluated against human preferences. Yet this choice was never explicitly made — it emerged from economic incentives, the availability of training data, and the convenience of alignment via human feedback. We build AI in front of mirrors, and the mirror is becoming indistinguishable from the architecture.
This series asks a straightforward but rarely articulated question: what if we could build something genuinely alien? Not alien in the science-fiction sense, but in the technical sense — a mind operating on non-human principles, unconstrained by the cognitive architecture evolution handed us. The motivation is not philosophical. It is practical: understanding the cost of our anthropomorphic default might reveal capabilities we are systematically foregoing.
Goal
The series aims to construct a rigorous examination of the assumptions embedded in contemporary AI design. This means analyzing how human cognitive constraints have become architectural constraints in AI systems, documenting the specific ways we encode human reasoning patterns into learning algorithms, and exploring what alternative architectural paths might look like. The goal is not to prescribe a single direction, but to make visible the choices currently being made invisibly.
The research corpus should serve researchers, engineers, and policy-makers who want to understand not just what AI does, but what cognitive commitments are hidden in how we build it.
Scope
The series covers fundamental questions about AI architecture and cognitive design across multiple domains of inquiry:
| Area | Key Questions |
|---|---|
| Training & Data | How does human-generated training data encode cognitive biases? What alternative training regimes might reduce anthropomorphic constraints? |
| Planning & Reasoning | Why do we impose sequential chain-of-thought reasoning on systems that do not share human working-memory limits? What would parallel or non-narrative reasoning look like? (See the sketch after this table.) |
| Evaluation & Benchmarking | How do standard benchmarks (MMLU, HumanEval) measure human cognitive resemblance rather than capability? What would non-anthropomorphic evaluation criteria entail? |
| Memory & Knowledge | Human memory is episodic and lossy. AI memory architectures could be radically different. Why do we keep building RAG systems that mirror human forgetting? |
| Alignment & Values | If we build genuinely non-human AI, what does alignment mean? Whose values should an alien intelligence optimize for? |
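To make the Planning & Reasoning row concrete, the sketch below contrasts the two modes. It is illustrative only: `generate` is a hypothetical stand-in for a model call, not any real API, and majority voting is just one of many possible non-narrative aggregation strategies.

```python
import random

def generate(prompt: str, temperature: float = 1.0) -> str:
    """Hypothetical stand-in for a model call; returns a pseudo-answer.
    (temperature is part of the stub's signature but unused here.)"""
    return f"answer-{random.randint(0, 3)}"

def sequential_cot(question: str, steps: int = 4) -> str:
    """Human-style deliberation: a single narrative chain in which each
    step is conditioned on the text of the previous step."""
    context = question
    for _ in range(steps):
        context += "\nStep: " + generate(context)
    return generate(context + "\nFinal answer:")

def parallel_inference(question: str, samples: int = 16) -> str:
    """A non-narrative alternative: many independent draws aggregated by
    majority vote, with no single train of thought to inspect."""
    answers = [generate(question, temperature=1.0) for _ in range(samples)]
    return max(set(answers), key=answers.count)

print(sequential_cot("Is 91 prime?"))
print(parallel_inference("Is 91 prime?"))
```

The two functions answer the same question, but only the first produces a human-readable narrative; the second trades that legibility for independence between inference paths.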
Focus
The primary technical focus is on architectural decisions in contemporary AI systems that encode human cognitive assumptions. This includes analysis of transformer attention mechanisms as they relate to human sequential attention, the role of narrative structure in training data, the implicit cost-benefit analyses of different reasoning modes (sequential deliberation versus parallel inference), and the design of evaluation metrics that privilege human-recognizable intelligence over other forms of capability.
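As one concrete instance of the attention point above, the following minimal NumPy sketch (illustrative, not drawn from the series' articles) shows how a causal mask hard-codes a left-to-right, reading-order assumption, while an unrestricted mask carries no built-in ordering at all.

```python
import numpy as np

def attention_weights(q, k, mask):
    """Scaled dot-product attention; masked positions are set to -inf
    before the softmax so they receive zero weight."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

T, d = 5, 8
rng = np.random.default_rng(0)
q, k = rng.normal(size=(T, d)), rng.normal(size=(T, d))

# Causal mask: token t may attend only to tokens <= t, mirroring
# left-to-right human reading order.
causal = np.tril(np.ones((T, T), dtype=bool))
# Unrestricted mask: every token attends to every token; no ordering
# assumption is baked into the architecture.
full = np.ones((T, T), dtype=bool)

print(attention_weights(q, k, causal).round(2))  # lower-triangular weights
print(attention_weights(q, k, full).round(2))    # dense weights
```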
The series treats anthropomorphism not as a moral failing but as a design choice with real technical consequences. The goal is to map those consequences and explore what design alternatives might unlock.
Limitations
The series is early-stage: two articles have been published, and the empirical directions listed under Contribution Opportunities (experimental systems, new benchmarks, alternative training regimes) remain open rather than completed work. Its analysis is therefore conceptual and architectural; claims about what non-anthropomorphic designs would unlock have not yet been validated experimentally.
Scientific Value
The series makes three core contributions. First, it renders explicit the usually invisible design choices in contemporary AI systems, documenting how human cognitive architecture has become embedded in technical decisions. Second, it maps the landscape of architectural alternatives, providing a taxonomy of non-anthropomorphic design strategies that researchers can evaluate. Third, it establishes a framework for asking which cognitive assumptions are load-bearing (necessary for safety, alignment, or performance) and which are merely defaults that could be changed.
The work is situated at the intersection of cognitive science, systems architecture, and AI design philosophy. It aims to inform decisions that will shape AI development for the next decade.
Resources
- Stabilarity Research Hub
- Author ORCID: 0000-0002-9540-1637
- Zenodo Collection
- Series DOI: 10.5281/zenodo.18824650
Status
Ongoing. Two articles published (March 2026). The investigation continues with planned installments examining evaluation metrics, memory architectures, and the alignment implications of non-anthropomorphic AI. This is an open research agenda; contributions and extensions are welcomed.
Contribution Opportunities
Researchers, engineers, and thinkers interested in advancing this work are encouraged to contribute along the following directions:
- Empirical testing: Design and implement experimental AI systems that deliberately relax anthropomorphic constraints. Measure performance, safety, and interpretability against baseline human-like systems.
- Benchmark construction: Develop evaluation metrics that measure capabilities orthogonal to human cognitive resemblance. What would genuinely non-anthropomorphic evaluation look like? (A toy sketch follows this list.)
- Training regime innovation: Experiment with training data sources and reinforcement learning feedback mechanisms that do not rely on human cognitive patterns.
- Architectural alternatives: Design and analyze AI architectures that do not enforce sequential reasoning, human-style attention, or narrative structure.
- Safety & alignment research: Investigate what alignment and safety mean for AI systems that do not share human values or reasoning structures by default.
- Policy implications: Engage with regulators and policy-makers to explore how architectural choices shape long-term AI governance and safety strategies.
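As a toy illustration of the benchmark-construction direction, the sketch below scores a system by programmatically verifying task-defined properties of its outputs rather than by comparing them with human-written reference answers. The `solve` argument and the sorting task are hypothetical placeholders; any verifiable task would do.

```python
from typing import Callable, List

def verifier_score(solve: Callable[[List[int]], List[int]],
                   instances: List[List[int]]) -> float:
    """Score by checking task-defined properties of each output
    (here: a valid sorting), never by similarity to a human-written
    reference answer. The metric targets capability, not resemblance."""
    passed = 0
    for xs in instances:
        out = solve(xs)
        is_permutation = sorted(out) == sorted(xs)
        is_ordered = all(a <= b for a, b in zip(out, out[1:]))
        passed += is_permutation and is_ordered
    return passed / len(instances)

# Hypothetical system under test; any opaque solver could go here.
score = verifier_score(sorted, [[3, 1, 2], [5, 5, 0], []])
print(f"verified pass rate: {score:.0%}")
```

Because the verifier checks properties rather than answers, a system could pass it with internal representations and strategies that bear no resemblance to human problem-solving.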