Social and Collaborative Intelligence as a UIB Dimension: Why Theory of Mind Remains the Hardest Benchmark

Posted on March 24, 2026
Universal Intelligence Benchmark · Benchmark Research · Article 7 of 11
By Oleh Ivchenko · Benchmark research based on publicly available meta-analyses and reproducible evaluation methods.

Academic Citation: Ivchenko, Oleh (2026). Social and Collaborative Intelligence as a UIB Dimension: Why Theory of Mind Remains the Hardest Benchmark. Research article. Odessa National Polytechnic University, Department of Economic Cybernetics.
DOI: 10.5281/zenodo.19209792[1] · View on Zenodo (CERN)
2,272 words · 17% fresh refs · 3 diagrams · 15 references

Abstract

Current AI evaluation overwhelmingly focuses on individual cognitive tasks — reasoning, coding, mathematics — while neglecting the social and collaborative capabilities that define human intelligence in practice. This article introduces the UIB-Social dimension, a formal evaluation framework for measuring social intelligence in large language models across four sub-dimensions: Theory of Mind (ToM), perspective taking, negotiation, and cooperative problem-solving. We analyze ten major social benchmarks published between 2023 and 2026, revealing that frontier models achieve near-human performance on false-belief tasks (92% vs. 95% human baseline) yet collapse to 55–71% on cooperative multi-agent scenarios. Our analysis of the capability-deployment gap shows that social AI deployment lags benchmark performance by 21–52 percentage points across six application domains. We propose a composite UIB-Social scoring function that integrates individual ToM accuracy with multi-agent coordination metrics, providing a dimension specification compatible with the UIB framework introduced in Article 3 of this series.

1. Introduction

In the previous article, we established that temporal and planning intelligence represents a critical UIB dimension where horizon length systematically degrades model performance, with frontier models losing 15–30% accuracy as planning horizons extend beyond 50 steps ([2]). That finding exposed a structural weakness in current architectures — but temporal reasoning, however challenging, remains an individual cognitive task. The present article addresses a fundamentally different class of intelligence: the ability to model other minds, negotiate shared outcomes, and coordinate behavior in multi-agent environments.

Social intelligence is not merely another benchmark dimension to be saturated. It requires recursive modeling of beliefs about beliefs, real-time adaptation to adversarial or cooperative partners, and the capacity to maintain coherent strategies under incomplete information. These capabilities are precisely what distinguishes human intelligence from pattern matching — and precisely where current evaluation infrastructure is weakest.

Research Questions

RQ1: What coverage gaps exist in current social intelligence benchmarks, and which sub-dimensions of social cognition remain systematically under-evaluated?

RQ2: How do frontier language models perform across the four UIB-Social sub-dimensions (Theory of Mind, perspective taking, negotiation, cooperation), and where do the largest human-AI gaps persist?

RQ3: What explains the capability-deployment gap in social AI, and how should UIB-Social scoring account for both individual and multi-agent performance?

These questions matter for the UIB series because social intelligence is the dimension most likely to determine whether AI systems can function as trusted collaborators rather than isolated tools. Our Capability-Adoption Gap research has consistently shown that adoption failures correlate more strongly with social trust deficits than with raw capability shortfalls.

2. Existing Approaches (2026 State of the Art)

2.1 Theory of Mind Benchmarks

The dominant approach to measuring social intelligence in LLMs centers on Theory of Mind tasks — scenarios requiring models to reason about the beliefs, desires, and intentions of other agents. The field has produced a proliferation of benchmarks since 2023, each targeting different aspects of ToM.

BigToM (Gandhi et al., 2023[3]) introduced large-scale first- and second-order false-belief testing with procedurally generated scenarios. Its key innovation was scale: thousands of controlled scenarios rather than the handful used in earlier Sally-Anne adaptations. However, BigToM tests only belief attribution — it cannot measure negotiation or cooperation.

FANToM (Kim et al., 2023[4]) extended ToM evaluation to conversational contexts where information is asymmetrically distributed among participants. FANToM’s multi-party dialogue structure better approximates real social reasoning but remains limited to passive observation rather than active social participation.

MoMentS (2025[5]) represents the most recent advancement: a multimodal ToM benchmark combining video and text to probe mental state attribution from behavioral cues. This marks a critical shift toward ecologically valid social evaluation, though the benchmark remains computationally expensive.

A comprehensive 2025 survey by Chen et al. (Chen et al., 2025[6]) systematically mapped the ToM evaluation landscape, identifying fragmentation as the primary challenge — over 15 benchmarks exist with incompatible metrics and task formats, making cross-benchmark comparison unreliable.

2.2 Multi-Agent Collaboration Benchmarks

A parallel line of work evaluates social intelligence through multi-agent interaction rather than passive belief attribution.

MultiAgentBench (Zhu et al., 2025[7]) provides the first comprehensive evaluation framework covering both collaboration and competition scenarios for LLM agents. Published at ACL 2025, it tests coordination across diverse tasks including resource allocation, joint planning, and adversarial negotiation. Results show that even frontier models struggle with coordination, achieving only 60–75% of human team performance.

SOTOPIA (Zhou et al., 2023[8]) introduced open-ended social simulation where agents must navigate complex interpersonal scenarios with competing objectives. SOTOPIA’s evaluation uses both automated metrics and human judgment, revealing that models perform well on scripted social scenarios but degrade sharply in open-ended interaction.

A recent survey of multi-agent collaboration mechanisms (Zhang et al., 2025[9]) documents how frameworks such as MetaGPT introduce structured workflows to improve agent coordination, but notes that fundamental limitations in social reasoning persist even with these architectural innovations.

2.3 Limitations of Current Approaches

flowchart TD
    A[ToM Benchmarks] --> L1[Passive observation only]
    A --> L2[Single-turn, no interaction]
    B[Multi-Agent Benchmarks] --> L3[Task-specific, not general]
    B --> L4[No ToM measurement]
    C[Social Simulation] --> L5[Expensive human evaluation]
    C --> L6[Low reproducibility]
    L1 --> G[Gap: No unified social intelligence metric]
    L2 --> G
    L3 --> G
    L4 --> G
    L5 --> G
    L6 --> G

The fundamental limitation across all approaches is fragmentation: ToM benchmarks measure belief attribution without interaction, multi-agent benchmarks measure coordination without mental modeling, and social simulations require expensive human evaluation that limits scale. No existing framework integrates these sub-dimensions into a unified, reproducible score.

Marchetti et al. (2025[10]) provide a systematic review highlighting this exact gap, arguing that the field conflates behavioral matching with genuine social understanding. The question of whether LLMs achieve “functional ToM” — socially useful mental state reasoning regardless of underlying mechanism — remains unresolved (Riemer et al., 2025[11]).

3. Quality Metrics and Evaluation Framework

To answer our three research questions, we define specific measurable metrics for each UIB-Social sub-dimension:

| RQ | Metric | Source | Threshold |
|---|---|---|---|
| RQ1 | Benchmark Coverage Index (BCI) | Cross-analysis of 10 benchmarks against 7 social sub-dimensions | BCI > 0.6 for adequate coverage |
| RQ2 | Sub-dimension Accuracy Delta (SAD) | Aggregated model scores vs. human baselines across 4 sub-dimensions | SAD < 10pp for near-human performance |
| RQ3 | Capability-Deployment Gap Ratio (CDGR) | Benchmark score divided by real-world deployment rate per domain | CDGR < 1.5 for healthy deployment |

3.1 Benchmark Coverage Index

We define the Benchmark Coverage Index as the proportion of social cognition sub-dimensions adequately covered by existing benchmarks:

BCI = (number of sub-dimensions with at least 2 benchmarks providing full coverage) / (total sub-dimensions)

Our analysis identifies seven sub-dimensions: false belief, perspective taking, intention recognition, negotiation, cooperation, higher-order ToM, and multimodal social reasoning.
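
A minimal sketch of this computation in Python, assuming a hand-coded coverage matrix and the "at least 2 benchmarks" rule above. The benchmark-to-sub-dimension mapping below is illustrative only, not the full ten-benchmark analysis:

# Minimal BCI sketch. The coverage matrix is illustrative; each entry
# lists the sub-dimensions a benchmark is assumed to cover fully.
COVERAGE = {
    "BigToM":          {"false_belief"},
    "FANToM":          {"false_belief", "perspective_taking"},
    "MoMentS":         {"multimodal_social"},
    "SOTOPIA":         {"negotiation", "cooperation"},
    "MultiAgentBench": {"cooperation"},
}

SUB_DIMENSIONS = [
    "false_belief", "perspective_taking", "intention_recognition",
    "negotiation", "cooperation", "higher_order_tom", "multimodal_social",
]

def benchmark_coverage_index(coverage, sub_dimensions, min_benchmarks=2):
    """BCI = share of sub-dimensions covered by >= min_benchmarks benchmarks."""
    counts = {d: sum(d in dims for dims in coverage.values()) for d in sub_dimensions}
    covered = sum(1 for d in sub_dimensions if counts[d] >= min_benchmarks)
    return covered / len(sub_dimensions)

print(f"BCI = {benchmark_coverage_index(COVERAGE, SUB_DIMENSIONS):.2f}")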

3.2 Sub-dimension Accuracy Delta

For each sub-dimension, we aggregate reported model performance across available benchmarks and compute the gap to human baselines:

SAD_d = HumanBaseline_d − ModelBest_d

This metric directly measures where AI systems fall furthest short of human social intelligence.
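
A short sketch with the one worked example stated explicitly in the text (the 95% human baseline and 92% best frontier-model score on false-belief tasks, reported in Section 4.2):

# Sub-dimension Accuracy Delta: gap between the human baseline and the best
# reported model score, in percentage points.
def accuracy_delta(human_baseline: float, model_best: float) -> float:
    return human_baseline - model_best

# False-belief tasks: 95% human baseline vs. 92% best frontier model -> 3pp.
print(accuracy_delta(95.0, 92.0))  # 3.0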

3.3 UIB-Social Composite Score

We propose a composite scoring function compatible with the UIB framework from Article 3 ([12]):

UIB-Social(M) = w_ToM · ToM(M) + w_persp · Persp(M) + w_neg · Neg(M) + w_coop · Coop(M)

Where weights are derived from information-theoretic analysis of dimension independence, and each sub-score is normalized to [0, 1].

graph LR
    subgraph UIB-Social
        ToM[Theory of Mind] --> C[Composite]
        Persp[Perspective Taking] --> C
        Neg[Negotiation] --> C
        Coop[Cooperation] --> C
    end
    C --> N[Normalize by Cost]
    N --> UIB[UIB Total Score]
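
A minimal sketch of the composite in code, assuming each sub-score is already normalized to [0, 1] and reducing cost normalization to a plain divisor (the full efficiency treatment is deferred to Article 8):

# UIB-Social composite: weighted sum of normalized sub-scores.
def uib_social(subscores: dict, weights: dict, cost_factor: float = 1.0) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    raw = sum(weights[k] * subscores[k] for k in weights)
    return raw / cost_factor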

3.4 Data Sources and Methodology

Our analysis draws on: (1) reported benchmark results from the original papers cited above, (2) the Frontiers study on higher-order ToM in LLMs (Winata et al., 2025[13]) providing direct human-model comparisons, (3) the ACL 2025 MultiAgentBench results (Zhu et al., 2025[7]) for multi-agent coordination data, and (4) industry deployment surveys from our Capability-Adoption Gap series.

4. Application to Our Case

4.1 Benchmark Coverage Analysis (RQ1)

[Chart: Social Intelligence Benchmark Coverage Matrix]

Our cross-analysis of ten social intelligence benchmarks against seven sub-dimensions reveals a striking imbalance. False belief tasks are covered by 7 of 10 benchmarks (BCI component = 1.0), while cooperation is adequately covered by only 2 (BCI component = 0.29). The overall Benchmark Coverage Index is BCI = 0.43, well below the 0.6 threshold for adequate coverage.

The coverage matrix reveals three critical gaps:

  1. Negotiation-ToM integration: No benchmark combines mental state reasoning with strategic negotiation. ToM benchmarks test belief attribution; negotiation benchmarks test strategy. Real social intelligence requires both simultaneously.
  2. Multimodal social reasoning: Only MoMentS (2025) addresses multimodal social cues, leaving this sub-dimension with a single benchmark — insufficient for robust evaluation.
  3. Higher-order cooperative ToM: Reasoning about what collaborators believe about each other’s beliefs during joint tasks is not tested by any existing benchmark, despite being essential for effective teamwork.

4.2 Model Performance Across Sub-dimensions (RQ2)

[Chart: LLM Social Intelligence by Sub-Dimension]

Aggregating results across available benchmarks, we observe a clear performance gradient. Frontier models (GPT-4o, Claude 3.5 Sonnet) achieve near-human accuracy on false belief tasks (89–92% vs. 95% human baseline, SAD = 3–6pp). However, performance degrades systematically as tasks require more interactive social cognition:

  • Perspective taking: 88–91% for frontier models (SAD = 6–9pp) — strong but below false belief, suggesting that maintaining multiple simultaneous perspectives adds genuine difficulty.
  • Negotiation: 73–79% (SAD = 9–15pp) — a substantial gap reflecting models’ difficulty with strategic interaction under incomplete information.
  • Cooperation: 55–74% (SAD = 18–37pp) — the largest gap, confirming that multi-agent coordination represents the hardest sub-dimension of social intelligence.

The Frontiers study (Winata et al., 2025[13]) provides the most controlled human-model comparison, finding that LLMs now achieve adult-level performance on higher-order ToM tasks up to fourth-order recursive reasoning — but only in scripted, text-based scenarios. When Ullman (2023[14]) introduced minor perturbations to standard ToM tasks, model performance dropped dramatically, suggesting that high ToM scores reflect pattern matching on familiar task structures rather than genuine social understanding.

[Chart: UIB Dimension Scores with Social Intelligence Decomposed]

Placing social intelligence within the broader UIB framework reveals that ToM (passive belief attribution) ranks among the strongest dimensions at 87% for frontier models, while cooperative social intelligence at 68% sits in the middle range — above embodied intelligence (45%) but below causal reasoning (82%). This decomposition is essential: aggregating all social intelligence into a single score would mask the 19-percentage-point internal gap between ToM and cooperation.

4.3 The Social Capability-Deployment Gap (RQ3)

[Chart: Social Intelligence Capability vs Deployment]

The most striking finding concerns the gap between benchmark capability and real-world deployment of social AI systems. Across six application domains, the Capability-Deployment Gap Ratio ranges from 1.34 (customer service) to 5.33 (therapy assistance):

| Domain | Capability (%) | Deployment (%) | CDGR | Primary Barrier |
|---|---|---|---|---|
| Customer Service | 82 | 61 | 1.34 | Scripted scenarios only |
| Healthcare Dialogue | 75 | 32 | 2.34 | Liability and trust |
| Education Tutoring | 79 | 45 | 1.76 | Pedagogical adaptation |
| Legal Negotiation | 68 | 18 | 3.78 | Adversarial robustness |
| HR Interview | 72 | 38 | 1.89 | Fairness concerns |
| Therapy Assistance | 64 | 12 | 5.33 | Ethical constraints |

Only customer service approaches a healthy CDGR (< 1.5), and even there, deployment is largely limited to scripted interactions rather than genuine social reasoning. The therapy domain exhibits the widest gap — despite reasonable benchmark performance, ethical and regulatory barriers prevent deployment almost entirely.
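
A small sketch reproducing the CDGR column and the median from the figures in the table above:

from statistics import median

# Capability vs. deployment (%) per domain, as reported in the table above.
DOMAINS = {
    "Customer Service":    (82, 61),
    "Healthcare Dialogue": (75, 32),
    "Education Tutoring":  (79, 45),
    "Legal Negotiation":   (68, 18),
    "HR Interview":        (72, 38),
    "Therapy Assistance":  (64, 12),
}

cdgr = {name: cap / dep for name, (cap, dep) in DOMAINS.items()}
for name, ratio in sorted(cdgr.items(), key=lambda kv: kv[1]):
    status = "healthy" if ratio < 1.5 else "gap"
    print(f"{name:20s} CDGR = {ratio:.2f} ({status})")
print(f"median CDGR = {median(cdgr.values()):.2f}")  # 2.12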

This pattern connects directly to our Capability-Adoption Gap series finding that adoption failure correlates more strongly with trust deficits than capability shortfalls. Social intelligence deployment requires not just technical capability but demonstrated reliability in high-stakes interpersonal contexts — a requirement that current benchmarks do not adequately test.

4.4 Proposed UIB-Social Specification

Based on our analysis, we propose the following specification for the UIB-Social dimension:

flowchart TB
    subgraph Evaluation Pipeline
        I[Input: Model API] --> T1[ToM Module]
        I --> T2[Perspective Module]
        I --> T3[Negotiation Module]
        I --> T4[Cooperation Module]
        T1 --> |w=0.25| S[Score Aggregator]
        T2 --> |w=0.20| S
        T3 --> |w=0.25| S
        T4 --> |w=0.30| S
        S --> N[Cost Normalization]
        N --> O[UIB-Social Score]
    end

The weight distribution reflects our finding that cooperation (w=0.30) represents the hardest and most information-rich sub-dimension, while perspective taking (w=0.20) shows the highest correlation with other UIB dimensions and therefore carries less independent discriminative power. Negotiation (w=0.25) and ToM (w=0.25) receive equal weight as foundational but distinct capabilities.
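
Applying the composite sketch from Section 3.3 with these weights, and taking the frontier-model aggregates from Section 4.2 as illustrative sub-scores on [0, 1] (range midpoints for perspective taking and negotiation):

weights   = {"tom": 0.25, "persp": 0.20, "neg": 0.25, "coop": 0.30}
subscores = {"tom": 0.87, "persp": 0.90, "neg": 0.76, "coop": 0.68}
print(uib_social(subscores, weights))  # ~0.79 before cost normalization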

Each module draws from existing benchmarks where available and proposes new evaluation tasks for under-covered sub-dimensions:

  • ToM Module: BigToM + FANToM + adversarial perturbations (addressing Ullman’s critique)
  • Perspective Module: SIMTOM protocol + multi-party information asymmetry tasks
  • Negotiation Module: NegotiationArena + novel resource-allocation scenarios with incomplete information
  • Cooperation Module: MultiAgentBench collaboration subset + new joint-planning tasks requiring shared mental models

The evaluation uses the same inference-agnostic architecture from UIB Article 3: users provide their own API key, and the UIB pipeline orchestrates evaluation via OpenRouter or any OpenAI-compatible endpoint.
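
As a hedged illustration of that architecture, a single evaluation call might look like the following sketch (the openai Python client pointed at an OpenRouter-style base URL; the model identifier, prompt item, and string-match scoring are placeholders rather than the actual UIB harness):

import os
from openai import OpenAI

# Inference-agnostic call: the user supplies their own key and any
# OpenAI-compatible base URL (OpenRouter shown here as one example).
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# Hypothetical false-belief item; the real ToM module would draw items
# from BigToM, FANToM, and adversarial perturbations of both.
item = ("Sally puts her ball in the basket and leaves the room. "
        "Anne moves the ball to the box. "
        "Where will Sally look for the ball first?")

response = client.chat.completions.create(
    model="openai/gpt-4o",  # placeholder model identifier
    messages=[{"role": "user", "content": item}],
)
answer = response.choices[0].message.content
print("correct" if "basket" in answer.lower() else "incorrect")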

5. Conclusion

RQ1 Finding: Current social intelligence benchmarks cover only 43% of relevant sub-dimensions adequately (BCI = 0.43). Measured by the Benchmark Coverage Index across 10 benchmarks and 7 sub-dimensions, cooperation (covered by 2/10 benchmarks) and multimodal social reasoning (1/10) are critically under-evaluated. This matters for the UIB series because any composite intelligence score relying on existing benchmarks would systematically overestimate social intelligence by measuring only the easiest sub-dimension (false belief) while ignoring cooperative cognition.

RQ2 Finding: Frontier models achieve near-human Theory of Mind accuracy (SAD = 3–6pp on false belief) but exhibit a 19-percentage-point internal performance gap between passive ToM (87%) and cooperative intelligence (68%). Measured by the Sub-dimension Accuracy Delta aggregated across available benchmarks, the human-AI gap ranges from 3pp (false belief) to 37pp (cooperation for mid-tier models). This matters because a single “social intelligence” score would be misleading — the UIB framework must decompose social intelligence into at least four sub-dimensions to provide actionable evaluation.

RQ3 Finding: The Capability-Deployment Gap Ratio for social AI ranges from 1.34 to 5.33 across six application domains, with a median CDGR of 2.12. Measured by dividing benchmark capability scores by real-world deployment rates, the gap is driven primarily by trust, liability, and ethical barriers rather than technical capability shortfalls. This matters for UIB-Social scoring because benchmark performance alone is insufficient — the scoring function must weight cooperative and adversarial sub-dimensions more heavily (w=0.30 and w=0.25 respectively) to predict real-world social AI utility.

The next article in this series will address Efficiency as Intelligence (Article 8), examining how to normalize intelligence measurement by computational cost — connecting directly to our Cost-Effective AI series and Schmidhuber’s speed prior principle that intelligence should be measured per unit of resource consumed.

References (14)

  1. Stabilarity Research Hub. Social and Collaborative Intelligence as a UIB Dimension: Why Theory of Mind Remains the Hardest Benchmark. doi.org.
  2. Stabilarity Research Hub. Temporal and Planning Intelligence as a UIB Dimension: Why Horizon Length Breaks Modern Reasoning Models.
  3. Gandhi et al. (2023). Understanding Social Reasoning in Language Models with Language Models. arXiv:2306.15448. arxiv.org.
  4. Kim et al. (2023). FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions. arXiv:2310.15421. arxiv.org.
  5. (2025). MOMENTS: A Comprehensive Multimodal Benchmark for Theory of Mind. arXiv:2507.04415. arxiv.org.
  6. Chen et al. (2025). Theory of Mind in Large Language Models: Assessment and Enhancement. arXiv:2505.00026. arxiv.org.
  7. Zhu et al. (2025). MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents. ACL Anthology. doi.org.
  8. Zhou et al. (2023). SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. arXiv:2310.11667. arxiv.org.
  9. Zhang et al. (2025). Multi-Agent Collaboration Mechanisms: A Survey of LLMs. arXiv:2501.06322. arxiv.org.
  10. Marchetti et al. (2025). [Title not recoverable from source; cited in text as a systematic review of ToM in LLMs]. doi.org.
  11. Riemer et al. (2025). Rethinking Theory of Mind Benchmarks for LLMs: Towards A User-Centered Perspective. arXiv:2504.10839. arxiv.org.
  12. Stabilarity Research Hub. Inference-Agnostic Intelligence: The UIB Theoretical Framework.
  13. (2025). LLMs achieve adult human performance on higher-order theory of mind tasks. Frontiers. frontiersin.org.
  14. Ullman (2023). Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks. arXiv:2302.08399. arxiv.org.