
Reference Evaluation System

Methodology Documentation
Automated Reference Quality Evaluation
By Oleh Ivchenko
This page documents the reference evaluation methodology used across all Stabilarity Research Hub publications.

Overview

Every article published on Stabilarity Research Hub undergoes automated reference quality evaluation. The system analyzes all outbound links and citations, classifies them against a database of 900+ academic sources, and computes a composite Trust Score that reflects the overall reliability of an article’s evidence base.

Trust Score

The Trust Score is a weighted composite metric (0-100) computed from six reference quality indicators. Each indicator measures what percentage of an article’s references meet a specific quality criterion.

Badge | Metric | Weight | Description | Threshold
[t] | Trusted Sources | 25% | References pointing to sources verified as high-quality in our Sources Database (900+ classified sources) | 80%
[a] | DOI Coverage | 20% | References with a Digital Object Identifier, ensuring persistent citability and discoverability | 80%
[s] | Reviewed Sources | 20% | References from editorially reviewed journals, conferences, or curated academic repositories | 80%
[i] | Indexed | 15% | References with metadata indexed in academic databases (CrossRef, OpenAlex, Semantic Scholar) | 80%
[b] | CrossRef | 10% | References registered and discoverable via the CrossRef API | 80%
[l] | Academic Sources | 10% | References from journals, conference proceedings, or preprint servers (vs blogs, news sites, etc.) | 80%

Formula

The Trust Score is computed as:

Score = Sum(metric_percentage * weight) / Sum(weights) + min(ref_count, 10) * 0.5

Where each metric percentage represents the fraction of references meeting that criterion. A small bonus (up to 5 points) is added for reference count, rewarding articles with broader evidence bases. The final score is capped at 100.
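The formula above can be sketched in Python. This is an illustrative reimplementation, not the production code; the metric keys and function name are assumptions, while the weights, bonus, and cap follow the description above.

```python
# Weights from the badge table above (sum to 1.0).
WEIGHTS = {
    "trusted": 0.25,   # [t] Trusted Sources
    "doi": 0.20,       # [a] DOI Coverage
    "reviewed": 0.20,  # [s] Reviewed Sources
    "indexed": 0.15,   # [i] Indexed
    "crossref": 0.10,  # [b] CrossRef
    "academic": 0.10,  # [l] Academic Sources
}

def trust_score(metric_percentages: dict, ref_count: int) -> float:
    """Weighted average of metric percentages (each 0-100), plus a
    reference-count bonus of up to 5 points, capped at 100."""
    weighted = sum(metric_percentages[m] * w for m, w in WEIGHTS.items())
    base = weighted / sum(WEIGHTS.values())
    bonus = min(ref_count, 10) * 0.5  # 0.5 points per reference, max 5
    return min(base + bonus, 100.0)
```

For example, an article at 90/85/80/75/70/95 on the six metrics with 12 references scores about 88.25: a weighted base of 83.25 plus the full 5-point bonus.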

Score Labels

Range | Label | Interpretation
90-100 | Excellent | Outstanding reference quality across all dimensions
75-89 | Strong | High-quality references with broad academic coverage
60-74 | Good | Solid reference base with room for improvement
40-59 | Fair | Acceptable references but gaps in coverage or verification
0-39 | Developing | Early-stage article; references need enrichment

Badge System

Each article displays quality badges — single-character monospace labels that indicate which quality thresholds the article meets. A badge appears when 80% or more of the article’s references satisfy that criterion.
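The 80% rule can be expressed as a small predicate. A minimal sketch, assuming the system tracks, per badge, how many references satisfy the criterion (the function name is illustrative):

```python
BADGE_THRESHOLD = 0.80  # a badge appears at >= 80% coverage

def earns_badge(refs_meeting_criterion: int, total_refs: int) -> bool:
    """True when at least 80% of an article's references
    satisfy the badge's quality criterion."""
    if total_refs == 0:
        return False  # no references, no badge
    return refs_meeting_criterion / total_refs >= BADGE_THRESHOLD
```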

Badge colors indicate progress toward the threshold:

  • Grey (#555) — academic/quality indicators not yet achieved: [s], [t], [a], [b], [i], [l]
  • Green (#2e7d32) — achieved badges (threshold met)
  • Gradient — badges approaching the threshold darken proportionally

Target vs Actual

Each article has a target badge set (default: stabil = all six academic badges). The quality bar on each article shows both target badges and achieved badges, making progress visible.

Verification

Articles that meet their badge targets and pass editorial review receive the [V] Verified mark on their title. Verification requires:

  1. All target badges achieved (80%+ on each targeted metric)
  2. Approval from designated reviewer(s)
  3. No outstanding quality issues

Verified articles display a green Trust Score badge and green [V] mark.
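The three verification requirements amount to a simple gate. A hypothetical sketch (the function and parameter names are assumptions):

```python
def is_verified(achieved: set, target: set,
                approvals: int, open_issues: int) -> bool:
    """Verified when every target badge is achieved (80%+ each),
    at least one designated reviewer approved, and no quality
    issues remain outstanding."""
    return target <= achieved and approvals >= 1 and open_issues == 0
```

With the default target set ("stabil", i.e. all six badges), an article missing even one badge, or lacking reviewer approval, is not verified.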

Source Classification

The evaluation system is backed by a database of 900+ classified reference sources across 16 types:

Type | Examples | Trust Level
Journal | Nature, Science, IEEE Trans. | High
Preprint | arXiv, bioRxiv, medRxiv | Medium
Conference | NeurIPS, ICML, ACL | High
Repository | Zenodo, Figshare, Dryad | Medium
Government | WHO, FDA, Eurostat | High
Encyclopedia | Wikipedia, Stanford Encyclopedia | Medium
News | MIT Tech Review, Ars Technica | Low
Blog | Personal/corporate blogs | Low

Each source has flags for trust level, editorial review status, indexing, open access, and pricing — enabling granular quality assessment at the source level.
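A source record with these flags might look like the following. The field names are assumptions for illustration; only the flag categories come from the description above.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """Illustrative source record mirroring the flags described above."""
    name: str
    source_type: str   # e.g. "journal", "preprint", "conference", "blog"
    trust_level: str   # "high" / "medium" / "low"
    reviewed: bool     # editorially reviewed
    indexed: bool      # indexed in CrossRef / OpenAlex / Semantic Scholar
    open_access: bool  # freely accessible
    paid: bool         # behind a paywall

# Example: a preprint server is medium-trust, indexed, open access.
arxiv = Source("arXiv", "preprint", "medium",
               reviewed=False, indexed=True, open_access=True, paid=False)
```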

Evaluation Process

When an article is published or updated, the system:

  1. Extracts all outbound URLs from article content
  2. Registers unknown URLs in the reference database with domain, hash, and metadata
  3. Resolves sources — maps each reference to its source in the Sources Database (auto-creates if missing)
  4. Validates — checks HTTP status, fetches content, extracts DOI metadata via CrossRef/OpenAlex/Semantic Scholar
  5. Flags — propagates source-level trust/review/index flags to each reference
  6. Computes — calculates badge percentages and Trust Score
  7. Caches — stores results as post meta for fast rendering
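The seven steps above can be sketched as a single pass over an article's links. This is a heavily simplified stand-in (naive URL extraction, domain-level source lookup, and the HTTP/DOI validation and caching steps omitted), not the production pipeline:

```python
import hashlib
from urllib.parse import urlparse

def evaluate_article(content: str, sources: dict):
    """Toy version of the evaluation pipeline: extract, register,
    resolve, flag, and compute one badge percentage."""
    # 1. Extract outbound URLs (naive whitespace split for illustration)
    urls = [w for w in content.split() if w.startswith("http")]
    refs = []
    for url in urls:
        # 2. Register the reference with domain, hash, and metadata
        ref = {"url": url,
               "domain": urlparse(url).netloc,
               "hash": hashlib.sha256(url.encode()).hexdigest()[:12]}
        # 3. Resolve its source by domain, auto-creating if missing
        src = sources.setdefault(ref["domain"], {"trust_level": "unknown"})
        # 5. Propagate the source-level trust flag to the reference
        ref["trusted"] = src["trust_level"] == "high"
        refs.append(ref)
    # 6. Compute a badge percentage (here, only [t] Trusted Sources)
    pct = 100 * sum(r["trusted"] for r in refs) / len(refs) if refs else 0
    return refs, pct
```

Running it on text with one high-trust and one unknown link yields a 50% trusted-sources coverage, below the 80% badge threshold.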

Reference Trust Analyzer Tool

Try our public tool to evaluate the reference quality of any article or paper by URL:

Reference Trust Analyzer — paste any article URL and get a full trust report.

© 2026 Stabilarity Research Hub
