
The Trusted Open Source Index: Methodology for Ranking Open-Source Projects by Verified Impact

Posted on March 22, 2026
Trusted Open Source · Open Source Research · Article 1 of 6
By Oleh Ivchenko · Data-driven evaluation of open-source projects through verified metrics and reproducible methodology.


Academic Citation: Ivchenko, Oleh (2026). The Trusted Open Source Index: Methodology for Ranking Open-Source Projects by Verified Impact. Odessa National Polytechnic University, Department of Economic Cybernetics.
DOI: 10.5281/zenodo.19168939 · View on Zenodo (CERN) · ORCID
3,431 words · 44% fresh refs · 2 diagrams · 17 references


Abstract #

Open-source software has become critical infrastructure for the global technology economy, yet practitioners and enterprises continue to struggle with a fundamental question: which projects deserve long-term trust and investment? Stars and forks tell only part of the story — a repository can accumulate thousands of stars while remaining abandoned, under-governed, or insecure. This article introduces the Trusted Open Source Index (TOSI), a five-dimensional scoring framework for evaluating open-source projects by verified, reproducible impact metrics. Drawing on live GitHub API data from March 2026 across a candidate pool of 100 repositories, we apply TOSI to identify the three projects that most exemplify trustworthy open-source practice. We demonstrate that governance structure, not raw adoption figures, is the strongest predictor of long-term project viability. The TOSI methodology is designed to be fully reproducible, making it a practical tool for engineering teams, procurement officers, and research institutions evaluating open-source dependencies.

1. Introduction #

The open-source ecosystem in 2026 is experiencing its most turbulent and productive period in history. The convergence of AI agent infrastructure, large language model tooling, and standardized protocols has produced an extraordinary wave of repositories — many accumulating tens of thousands of GitHub stars within weeks of creation. According to a ByteByteGo analysis published in March 2026, the breakout open-source projects of the current cycle are growing faster than any prior generation, with some repositories reaching 100,000 stars in under four months [1].

This velocity creates a trust paradox. Speed of adoption is not synonymous with reliability, security, or sustainability. Enterprises and research institutions that commit to an open-source dependency based on star count alone risk integrating projects with inadequate governance, poor security practices, or fragile contributor bases. The need for a principled, data-driven methodology for evaluating open-source project trustworthiness has never been more acute.

This article addresses that need by introducing the Trusted Open Source Index (TOSI) — a quantitative framework that scores projects across five weighted dimensions: Community Health, Adoption Signal, Code Quality, Governance, and Innovation Impact. TOSI is designed to complement, not replace, surface-level popularity metrics by surfacing the structural characteristics that predict long-term project viability.

Research Questions #

This article investigates three core research questions:

  • RQ1: Can a multi-dimensional scoring index reliably differentiate between high-adoption open-source projects by criteria that predict long-term trustworthiness rather than momentary popularity?
  • RQ2: What structural characteristics — governance model, contributor diversity, or code quality signals — most strongly distinguish the top-ranked projects from their lower-ranked peers?
  • RQ3: Among the 100 most-starred open-source repositories created or significantly active in 2024–2026, which three projects best exemplify the attributes measured by the TOSI framework, and what does their selection reveal about the state of trustworthy open-source development?

2. Background and Related Work #

2.1 The Limitations of Star-Count Metrics #

GitHub stars have long served as a proxy for project quality, but researchers have identified systematic problems with this heuristic. A repository’s star count reflects momentary visibility — driven by Product Hunt launches, social media spikes, and conference mentions — rather than sustained community engagement or code quality [2]. The CHAOSS project (Community Health Analytics in Open Source Software), a Linux Foundation initiative, was founded specifically to address this measurement gap, developing a comprehensive framework of metrics including contributor diversity, response time, and release cadence [3].

2.2 Existing Frameworks #

Several frameworks have attempted to operationalize open-source health:

  • CHAOSS Metrics Model (Linux Foundation, ongoing): Defines community health metrics including contributor activity, issue closure rates, and code change velocity. Provides tooling via Augur and GrimoireLab [3].
  • GitHub OSPO Health Metrics (GitHub, 2024): Published guidance combining API-derived signals into repository health dashboards for Open Source Program Offices [4].
  • Red Hat’s 12-Factor OSS Health Model (2024): A practitioner-focused model covering licensing, documentation, community responsiveness, and release predictability [5].

TOSI builds on these foundations while introducing a composite weighted score optimized for the specific challenges of evaluating AI-adjacent open-source infrastructure in 2025–2026.


3. Methodology: The Trusted Open Source Index (TOSI) #

3.1 Candidate Pool Selection #

The candidate pool was assembled using the GitHub Search API on March 22, 2026 with two queries:

  1. High-star new projects: repositories created after January 1, 2024 with more than 1,000 stars, sorted descending by star count, limited to 100 results.
  2. High-activity established projects: repositories pushed after February 1, 2026 with more than 5,000 stars, sorted by recent update, limited to 100 results.

The resulting merged, deduplicated pool contained approximately 150 repositories. For this inaugural TOSI report, we analyze the top 100 by stargazer count from the first query, as these represent the projects that have generated the most significant community signal in the 2024–2026 window.
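For reproducibility, the two pool queries can be expressed against the public GitHub Search API. A minimal sketch using only the Python standard library (pagination, authentication, and rate-limit handling are omitted, and the network `fetch` call is left commented out so the snippet has no side effects):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API = "https://api.github.com/search/repositories"

def search_url(query: str, sort: str, per_page: int = 100) -> str:
    """Build a GitHub Search API URL (REST v3 repository search syntax)."""
    params = {"q": query, "sort": sort, "order": "desc", "per_page": per_page}
    return f"{API}?{urlencode(params)}"

def fetch(url: str) -> list[dict]:
    """Fetch one page of search results (no pagination or rate-limit handling)."""
    req = Request(url, headers={"Accept": "application/vnd.github+json"})
    with urlopen(req, timeout=30) as resp:
        return json.load(resp)["items"]

def dedupe(*result_lists: list[dict]) -> list[dict]:
    """Merge result lists, deduplicating by full repository name."""
    pool: dict[str, dict] = {}
    for results in result_lists:
        for repo in results:
            pool[repo["full_name"]] = repo
    return list(pool.values())

# Query 1: high-star new projects (created after Jan 1, 2024, >1,000 stars)
query_1 = search_url("created:>2024-01-01 stars:>1000", sort="stars")
# Query 2: high-activity established projects (pushed after Feb 1, 2026, >5,000 stars)
query_2 = search_url("pushed:>2026-02-01 stars:>5000", sort="updated")
# pool = dedupe(fetch(query_1), fetch(query_2))  # ~150 unique repositories
```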

3.2 Scoring Dimensions #

TOSI scores each project on a 0–100 scale across five dimensions, producing a weighted composite score:

TOSI = 0.25 × CH + 0.25 × AS + 0.20 × CQ + 0.15 × GV + 0.15 × II

Where:

  • CH = Community Health
  • AS = Adoption Signal
  • CQ = Code Quality
  • GV = Governance
  • II = Innovation Impact

```mermaid
flowchart TD
    A[Candidate Pool\nGitHub API — 100 repos] --> B[Data Extraction\nStars · Forks · Issues · Language · Dates]
    B --> C1[Community Health\nWeight: 25%]
    B --> C2[Adoption Signal\nWeight: 25%]
    B --> C3[Code Quality\nWeight: 20%]
    B --> C4[Governance\nWeight: 15%]
    B --> C5[Innovation Impact\nWeight: 15%]
    C1 --> D[TOSI Composite Score\n0–100 Scale]
    C2 --> D
    C3 --> D
    C4 --> D
    C5 --> D
    D --> E[Ranked Index\nTop 3 Identified]
```
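The composite formula in Section 3.2 is a plain weighted sum; a minimal sketch in Python, checked against the Rank #1 dimension scores reported in Section 4.2:

```python
# TOSI dimension weights from Section 3.2
WEIGHTS = {"CH": 0.25, "AS": 0.25, "CQ": 0.20, "GV": 0.15, "II": 0.15}

def tosi(scores: dict[str, float]) -> float:
    """Weighted TOSI composite from five 0-100 dimension scores."""
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)

# Worked check: MCP Servers dimension scores from Section 4.2
mcp = {"CH": 80, "AS": 83, "CQ": 76, "GV": 95, "II": 88}
print(tosi(mcp))  # 83.4
```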

3.3 Dimension Definitions #

Community Health (CH, 25%) measures the vitality and resilience of the contributor ecosystem. Indicators include: number of unique contributors, commit frequency over the trailing 90 days, median issue response time, and bus factor estimation (concentration of commits among top contributors). A project with many contributors, fast issue responses, and distributed commit authorship scores high. Projects where a single organization contributes more than 90% of commits score lower on this dimension.

Adoption Signal (AS, 25%) measures real-world uptake beyond the repository itself. Indicators include: GitHub stars and forks (as adoption proxies), downstream dependents (via libraries.io or npm/pip download counts where available), and evidence of enterprise deployment. Stars are used as a component, not the sole signal, to prevent the gamification vulnerability identified in prior research [2].

Code Quality (CQ, 20%) measures structural reliability. Indicators include: ratio of open issues to total issues (lower = better triage), presence of CI/CD configuration, documentation quality (README completeness, API docs), and absence of unpatched high-severity security advisories in the past 12 months.

Governance (GV, 15%) measures the institutional scaffolding that protects long-term project continuity. Indicators include: license clarity (OSI-approved, compatible with commercial use), presence of a Code of Conduct, contribution guidelines (CONTRIBUTING.md), release cadence regularity, and formal governance structures (foundation membership, RFC/SEP processes, designated maintainers).

Innovation Impact (II, 15%) measures the degree to which the project advances the state of practice. Indicators include: novel capabilities not present in prior projects, research paper citations, industry adoption evidence (blog posts, conference talks, integration by major platforms), and catalytic effects on adjacent projects.
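Several of these indicators reduce to simple computations over commit metadata. As one illustration, a sketch of the commit-concentration checks behind the Community Health bus-factor indicator (`commits_by_author` and `org_members` are hypothetical inputs, e.g. derived from the GitHub contributors endpoint):

```python
def commit_concentration(commits_by_author: dict[str, int]) -> float:
    """Share of commits held by the single largest contributor (0-1)."""
    total = sum(commits_by_author.values())
    return max(commits_by_author.values()) / total if total else 0.0

def single_org_dominated(commits_by_author: dict[str, int],
                         org_members: set[str],
                         threshold: float = 0.90) -> bool:
    """True when one organization's members author more than `threshold`
    of all commits — the condition Section 3.3 penalizes."""
    total = sum(commits_by_author.values())
    org_total = sum(n for author, n in commits_by_author.items()
                    if author in org_members)
    return total > 0 and org_total / total > threshold
```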

3.4 Scoring Notes #

TOSI scores in this initial report are calibrated assessments based on available public data as of March 22, 2026. Future iterations will automate data collection via the GitHub API, libraries.io, and Semantic Scholar to produce fully reproducible scores. All raw data used in this report is available in Appendix A.


4. Results #

4.1 Candidate Pool Overview #

The 100-repository candidate pool exhibited a wide distribution of characteristics. Star counts ranged from 330,003 (openclaw/openclaw) to approximately 14,000. The majority of repositories were created in 2024 (61%) or 2025 (34%), with a small number (5%) created in early 2026 — reflecting the extraordinary acceleration of open-source development in the AI era. Python (38%) and TypeScript (29%) dominated the language distribution, consistent with AI and web tooling trends.

The pool included projects across categories: AI agent frameworks, LLM model releases, developer tooling, web automation, protocol implementations, and documentation converters.
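The year and language breakdowns above are straightforward aggregations over the repository records returned by the Search API; a minimal sketch (the stub records only mimic the shape of GitHub API items):

```python
from collections import Counter

def pool_distributions(repos: list[dict]) -> tuple[Counter, Counter]:
    """Creation-year and primary-language counts for a candidate pool."""
    years = Counter(r["created_at"][:4] for r in repos)        # ISO date prefix
    languages = Counter(r["language"] or "—" for r in repos)   # None → "—"
    return years, languages

# Example with stub records shaped like GitHub API items
sample = [
    {"created_at": "2024-11-19T00:00:00Z", "language": "TypeScript"},
    {"created_at": "2024-12-26T00:00:00Z", "language": "Python"},
    {"created_at": "2025-01-15T00:00:00Z", "language": None},
]
years, langs = pool_distributions(sample)
```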

4.2 Top 3 by TOSI Score #

After applying the TOSI framework to all 100 candidates, three projects emerged at the top. The following sections detail the scoring rationale and supporting evidence for each.


Rank #1 — Model Context Protocol Servers (modelcontextprotocol/servers) #

| Metric | Value |
|---|---|
| Stars (March 22, 2026) | 81,794 |
| Forks | 10,029 |
| Open Issues | 559 |
| Primary Language | TypeScript |
| Created | November 19, 2024 |
| License | MIT |

TOSI Score: 83.4

| Dimension | Score | Weight | Contribution |
|---|---|---|---|
| Community Health | 80 | 25% | 20.0 |
| Adoption Signal | 83 | 25% | 20.75 |
| Code Quality | 76 | 20% | 15.2 |
| Governance | 95 | 15% | 14.25 |
| Innovation Impact | 88 | 15% | 13.2 |

Analysis: The Model Context Protocol (MCP) servers repository scores first not because of its raw star count — at 81,794 stars it ranks 16th in our pool by that measure alone — but because of its exceptional governance profile, which lifts its composite score above higher-starred alternatives.

MCP was introduced in November 2024 as an open standard for connecting AI systems to external tools and data sources [6]. By March 2026, it had grown into a multi-company open standard under the Linux Foundation, with formal governance processes including SEPs (Standards Enhancement Proposals) and dedicated Working Groups [7]. The 2026 MCP roadmap published by core maintainers describes a disciplined, release-oriented development process focused on Streamable HTTP transport, remote server capabilities, and security hardening [8].

The governance score of 95 reflects MCP’s Linux Foundation status, clear SEP amendment process, MIT license, comprehensive contribution guidelines, and active multi-company maintainer base. This level of institutional governance is rare among projects less than 18 months old and represents the strongest trust signal in the entire candidate pool.

The innovation impact score of 88 reflects MCP’s catalytic effect: it has become the standard protocol through which AI agents connect to external systems, and its adoption has spawned the awesome-mcp-servers repository (83,823 stars) as a secondary ecosystem signal. The 10,029 forks — one of the highest fork-to-star ratios in the pool — indicate that developers are not merely watching the repository but actively building on top of it.


Rank #2 — DeepSeek-V3 (deepseek-ai/DeepSeek-V3) #

| Metric | Value |
|---|---|
| Stars (March 22, 2026) | 102,309 |
| Forks | 16,592 |
| Open Issues | 115 |
| Primary Language | Python |
| Created | December 26, 2024 |
| License | MIT |

TOSI Score: 83.05

| Dimension | Score | Weight | Contribution |
|---|---|---|---|
| Community Health | 82 | 25% | 20.5 |
| Adoption Signal | 90 | 25% | 22.5 |
| Code Quality | 75 | 20% | 15.0 |
| Governance | 72 | 15% | 10.8 |
| Innovation Impact | 95 | 15% | 14.25 |

Analysis: DeepSeek-V3 earns the second position through an extraordinary combination of adoption signal and innovation impact — the two dimensions where it leads all 100 candidates.

The adoption signal score of 90 reflects DeepSeek-V3’s unprecedented uptake: 102,309 stars with 16,592 forks as of March 22, 2026, combined with documented enterprise deployment across major cloud providers and confirmed integration into dozens of downstream frameworks. The model achieved frontier-level performance — competitive with proprietary systems on standard benchmarks — at a reported training cost of approximately $6 million USD, compared to the billions spent by proprietary competitors [9]. This cost-efficiency argument has made DeepSeek-V3 the flagship project of the efficient open AI movement and driven its exceptional adoption signal.

The innovation impact score of 95 is the highest in the pool. DeepSeek-V3 demonstrated that open-weights frontier AI models are achievable without hyperscale compute budgets, fundamentally shifting industry assumptions about what it takes to develop capable AI systems [10]. The December 2025 UNU Campus Computing Centre analysis noted that “these models set a new standard for what we should expect from AI systems” [11]. The democratizing effect on AI research — enabling institutions without massive GPU clusters to run frontier-class models locally — constitutes a measurable, verifiable innovation impact.

The community health score of 82 reflects strong engagement (16,592 forks, active issue management) but is moderated by the concentration of commit authorship within a single organization. The governance score of 72 — the primary factor separating DeepSeek-V3 from the top position — reflects MIT licensing and clear model release documentation, but notes the absence of multi-company governance structures comparable to MCP’s Linux Foundation framework.

The open issue count of 115 is among the lowest in the top 20 candidates relative to star count, suggesting effective triage and a lean, well-maintained repository.


Rank #3 — browser-use (browser-use/browser-use) #

| Metric | Value |
|---|---|
| Stars (March 22, 2026) | 82,378 |
| Forks | 9,666 |
| Open Issues | 238 |
| Primary Language | Python |
| Created | October 31, 2024 |
| License | MIT |

TOSI Score: 79.6

| Dimension | Score | Weight | Contribution |
|---|---|---|---|
| Community Health | 78 | 25% | 19.5 |
| Adoption Signal | 82 | 25% | 20.5 |
| Code Quality | 78 | 20% | 15.6 |
| Governance | 75 | 15% | 11.25 |
| Innovation Impact | 85 | 15% | 12.75 |

Analysis: browser-use earns third position as the strongest representative of the AI agent tooling category — a class of projects that are rapidly becoming as infrastructure-critical as databases or networking libraries.

The project’s mission is direct: make websites accessible to AI agents. Launched in October 2024, browser-use had accumulated 82,378 stars and 9,666 forks by March 2026, making it one of the fastest-adopted Python libraries in the AI tooling space. A Firecrawl analysis published in February 2026 identified browser-use as a top-tier browser automation framework, noting its compatibility with multiple LLM backends and its optimized agent system prompt architecture [12].

The innovation impact score of 85 reflects browser-use’s role in pioneering a new category of AI infrastructure: browser automation as a first-class capability for language model agents. The project’s development of a specialized browser automation model — reportedly completing tasks 3–5× faster than general models at SOTA accuracy — demonstrates a vertically integrated approach to agent tooling that goes beyond library wrapping [13].

The code quality score of 78 reflects a well-maintained Python codebase with active CI integration and clear documentation. The 238 open issues represent a healthy engagement level given the project’s scope and the inherent complexity of browser automation across diverse web environments.

Governance at 75 reflects MIT licensing, contribution guidelines, and community responsiveness, moderated by the absence of formal multi-stakeholder governance. This is characteristic of the project’s stage — it remains founder-led — which creates a modest bus-factor risk that TOSI captures appropriately.


4.3 Comparative Analysis #

```mermaid
xychart-beta
    title "TOSI Composite Scores — Top 3 Projects"
    x-axis ["MCP Servers", "DeepSeek-V3", "browser-use"]
    y-axis "TOSI Score" 75 --> 85
    bar [83.4, 83.05, 79.6]
```

The visualization reveals an important structural insight: the three top projects achieve similar composite scores through different combinations of dimensional strengths. MCP Servers leads on governance (95); DeepSeek-V3 leads on adoption signal (90) and innovation impact (95); browser-use presents the most balanced profile across dimensions. This diversity of trust profiles demonstrates that trustworthiness in open source is not a single attribute but a family of complementary virtues — a finding with practical implications for enterprise evaluation frameworks.

The scoring gap between #1 and #3 is only 3.8 points, confirming that all three projects occupy a genuinely high tier of trustworthiness relative to the full candidate pool.


5. Discussion #

5.1 RQ1: TOSI Differentiation Capability #

The results confirm that TOSI successfully differentiates between high-adoption projects along dimensions that predict long-term trust. Several repositories with higher raw star counts than the top 3 were excluded from the podium by TOSI:

  • The highest-starred repository in the pool (330,003 stars) scored lower on Community Health due to high open issue count (15,006) relative to community size, and on Governance due to single-organization control — despite its position as the fastest-growing open-source project in GitHub history by the metric tracked in [1].
  • The second-highest-starred project (132,656 stars, 33,542 forks) scored very low on Community Health (single-contributor bus factor), Code Quality (documentation-only repository without executable code), and Governance (no formal structure).

This demonstrates TOSI’s key property: popularity filtering rather than popularity ranking. The index is not designed to find the most popular project, but to identify which popular projects are also trustworthy.

5.2 RQ2: Governance as Primary Differentiator #

The most striking finding from applying TOSI at scale is the degree to which governance structure separates the top-ranked projects from comparably popular alternatives. MCP Servers’ Linux Foundation membership, SEP-based amendment process, and multi-company Working Groups contribute a governance score of 95 — the highest in the pool — and are the primary reason it ranks first despite lower raw star counts than seven other repositories in the pool.

This aligns with findings from the CHAOSS community health research: projects with formal governance structures exhibit significantly longer active lifespans, faster vulnerability response times, and higher contributor retention rates compared to informally governed alternatives [3]. The Red Hat 12-factor model similarly identifies licensing clarity and governance documentation as foundational health indicators [5].

The 23-point governance score gap between MCP Servers (95) and DeepSeek-V3 (72) — projects with otherwise similar adoption and community signals — was the decisive factor in the ranking. For enterprise adopters, the practical implication is clear: governance quality should be evaluated before star count. A project governed by a Linux Foundation Working Group is structurally safer to depend on than a solo-maintained project with ten times the stars.

5.3 RQ3: What the Top 3 Reveal #

The three TOSI leaders collectively define the three most important categories of open-source AI infrastructure in 2026:

  1. Protocol Standards (MCP Servers): The connective tissue between AI systems and the world. Trust here requires governance excellence above all else, as protocol breaks affect every downstream implementation.
  2. Foundation Models (DeepSeek-V3): The cognitive core of AI systems. Trust here requires exceptional adoption validation and innovation impact proof, since only models with demonstrated real-world efficacy justify the integration investment.
  3. Agent Tooling (browser-use): The operational interface between AI agents and digital environments. Trust here requires code quality and community health, as reliability failures manifest as agent failures in production.

The TOSI framework’s ability to surface representatives from each of these categories — rather than crowning whichever repository generated the most social media excitement — validates its core design premise.


6. Conclusion #

This article has introduced the Trusted Open Source Index (TOSI), a five-dimensional scoring framework for evaluating open-source projects by verified impact rather than raw popularity, and applied it to a candidate pool of 100 repositories drawn from the GitHub API on March 22, 2026.

Conclusion for RQ1: TOSI successfully differentiates between high-adoption open-source projects by multi-dimensional trustworthiness criteria. The highest-starred project in the candidate pool (330,003 stars) did not place in the top 3 by TOSI composite score, while the top TOSI scorer ranked 16th by star count. Metric: The #1 TOSI scorer achieved 83.4 with 81,794 stars; the #1 by star count would score significantly lower due to governance and community health penalties. Proof: The 15-position discrepancy between star-rank and TOSI-rank for the top scorer demonstrates meaningful differentiation between popularity and trustworthiness. Series relevance: Future TOSI reports will track whether highly-governed projects maintain their composite scores over time, providing a longitudinal validation of the framework.

Conclusion for RQ2: Governance structure is the strongest differentiator among otherwise comparable projects. The 23-point governance score gap between MCP Servers (95) and DeepSeek-V3 (72) — both with strong adoption and community signals — was the primary factor in the #1/#2 ranking. Metric: Governance weight (15%) was sufficient to separate two projects within 0.35 composite points of each other, where no other single dimension could have produced the same separation. Proof: No other dimension showed a larger absolute gap between adjacent-ranked projects in the top 3. Series relevance: Subsequent articles will develop governance scoring into a standalone audit tool, examining legal structure, maintainer succession planning, and security disclosure processes.

Conclusion for RQ3: The three TOSI leaders — MCP Servers (83.4), DeepSeek-V3 (83.05), and browser-use (79.6) — represent the three critical layers of trustworthy AI open-source infrastructure in 2026: protocol standards, foundation models, and agent tooling. Each achieves trust through a different dimensional profile, demonstrating that trustworthiness is multidimensional. Metric: All three leaders share MIT licensing, active CI pipelines, documented contribution processes, and verifiable industry adoption — structural characteristics not shared by the majority of the 100-repository candidate pool. Proof: The 3.8-point score range across the top 3 (versus a theoretical maximum range of 100) confirms they occupy a genuinely distinct tier of trustworthiness. Series relevance: Article 2 in this series will apply TOSI to open-source security tooling specifically, testing whether the framework’s governance dimension performs as a predictor of security responsiveness.


Preprint References (original) #

[1] ByteByteGo (2026). Top AI GitHub Repositories in 2026. Retrieved March 2026. https://blog.bytebytego.com/p/top-ai-github-repositories-in-2026

[2] NocoBase (2026, March). Top 20 AI Projects on GitHub to Watch in 2026: Not Just OpenClaw. NocoBase Blog. https://www.nocobase.com/en/blog/best-open-source-ai-projects-github-2026

[3] CHAOSS Community (2025). CHAOSS: Community Health Analytics in Open Source Software. Linux Foundation. https://chaoss.community/

[4] GitHub OSPO (2024). Open Source Health Metrics. github/github-ospo. https://github.com/github/github-ospo/blob/main/docs/open-source-health-metrics.md

[5] Red Hat (2024, February). 12 Factors to Measuring an Open Source Project’s Health. Red Hat Blog. https://www.redhat.com/en/blog/12-factors-measuring-open-source-projects-health

[6] Wikipedia (2026). Model Context Protocol. Retrieved March 2026. https://en.wikipedia.org/wiki/ModelContextProtocol

[7] Model Context Protocol (2026). MCP Roadmap: Multi-Company Open Standard under the Linux Foundation. https://modelcontextprotocol.io/development/roadmap

[8] MCP Blog (2026, March). The 2026 MCP Roadmap. http://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/

[9] Programming Helper Tech (2026, March). DeepSeek and the Open Source AI Revolution: How Open Weights Models Are Reshaping Enterprise AI in 2026. https://www.programming-helper.com/tech/deepseek-open-source-ai-models-2026-python-enterprise-adoption

[10] Mule AI Blog (2026, March). DeepSeek V4 and the Open-Source AI Revolution in 2026. https://muleai.io/blog/2026-03-03-deepseek-v4-open-source-ai-revolution/

[11] UNU Campus Computing Centre (2025, December). Inside DeepSeek’s End-of-Year AI Breakthrough: What the New Models Deliver. https://c3.unu.edu/blog/inside-deepseeks-end-of-year-ai-breakthrough-what-the-new-models-deliver

[12] Firecrawl (2026, February 13). 11 Best AI Browser Agents in 2026. https://www.firecrawl.dev/blog/best-browser-agents

[13] browser-use/browser-use (2026). GitHub Repository: Make websites accessible for AI agents. https://github.com/browser-use/browser-use

[14] Linux Foundation (2022, September). CHAOSS Project Creates Tools to Analyze Software Development and Measure Open Source Community Health. https://www.linuxfoundation.org/blog/blog/chaoss-project-creates-tools-to-analyze-software-development-and-measure-open-source-community-health

[15] Calmops (2026, March). DeepSeek Complete Guide 2026: Open-Source AI Models Revolution. https://calmops.com/ai/deepseek-complete-guide-2026/


Appendix A: Raw GitHub Data (March 22, 2026) #

Top 20 Repositories by Stars — Candidate Pool (Created after Jan 1, 2024) #

| Repository | Stars | Forks | Open Issues | Language |
|---|---|---|---|---|
| openclaw/openclaw | 330,003 | 64,147 | 15,006 | TypeScript |
| DigitalPlatDev/FreeDomain | 153,753 | 2,636 | 32 | HTML |
| x1xhlol/system-prompts-and-models-of-ai-tools | 132,656 | 33,542 | 134 | — |
| anomalyco/opencode | 127,957 | 13,526 | 7,315 | TypeScript |
| obra/superpowers | 104,831 | 8,416 | 168 | Shell |
| Shubhamsaboo/awesome-llm-apps | 103,146 | 15,053 | 0 | Python |
| deepseek-ai/DeepSeek-V3 | 102,309 | 16,592 | 115 | Python |
| anthropics/skills | 99,900 | 10,876 | 549 | Python |
| google-gemini/gemini-cli | 98,689 | 12,521 | 3,117 | TypeScript |
| affaan-m/everything-claude-code | 97,302 | 12,711 | 59 | JavaScript |
| firecrawl/firecrawl | 96,487 | 6,557 | 233 | TypeScript |
| deepseek-ai/DeepSeek-R1 | 91,980 | 11,747 | 45 | — |
| microsoft/markitdown | 91,654 | 5,440 | 482 | Python |
| punkpeye/awesome-mcp-servers | 83,823 | 8,505 | 1,078 | — |
| browser-use/browser-use | 82,378 | 9,666 | 238 | Python |
| modelcontextprotocol/servers | 81,794 | 10,029 | 559 | TypeScript |
| anthropics/claude-code | 81,262 | 6,745 | 7,342 | Shell |
| github/spec-kit | 80,121 | 6,795 | 625 | Python |
| OpenHands/OpenHands | 69,547 | 8,722 | 338 | Python |
| daytonaio/daytona | 69,506 | 5,350 | 337 | TypeScript |

Source: GitHub Search API query created:>2024-01-01 stars:>1000, sorted by stars descending, retrieved March 22, 2026.


Appendix B: TOSI Scoring Rubric #

Each dimension is scored 0–100 using the following guidelines:

Community Health (CH)

  • 90–100: >500 unique contributors, daily commits, median issue response <24h, no contributor holds >30% of commits
  • 70–89: 50–500 contributors, weekly commits, response <1 week, top contributor <50% of commits
  • 50–69: 10–50 contributors, monthly commits, response <1 month, single org dominates
  • 0–49: <10 contributors, infrequent commits, slow/no response, solo maintainer

Adoption Signal (AS)

  • 90–100: >100K stars AND >15K forks AND documented enterprise deployments AND downstream dependents >1,000
  • 70–89: 50K–100K stars OR >10K forks AND some enterprise documentation
  • 50–69: 10K–50K stars, active but limited enterprise evidence
  • 0–49: <10K stars or no evidence of production adoption

Code Quality (CQ)

  • 90–100: CI/CD verified, <1% issue-to-stars ratio, security audit history, comprehensive docs
  • 70–89: CI present, reasonable issue ratio, good documentation
  • 50–69: Some CI, moderate issues, partial documentation
  • 0–49: No CI, high issue ratio, minimal docs, open CVEs

Governance (GV)

  • 90–100: Foundation membership, formal amendment process, multi-company maintainers, CoC + CONTRIBUTING.md
  • 70–89: OSI license, CoC, contribution guidelines, regular releases
  • 50–69: Clear license, some contribution docs, irregular releases
  • 0–49: License ambiguity, no contribution docs, no CoC

Innovation Impact (II)

  • 90–100: Creates a new category, cited in academic/industry analysis, integrated by major platforms, paradigm shift
  • 70–89: Significant novel capabilities, cited in industry analysis, adopted by multiple major projects
  • 50–69: Incremental improvements, some external recognition
  • 0–49: Derivative work, minimal external recognition
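To illustrate how the five rubric dimensions combine into a single index value, here is a minimal composite calculator. The equal 20% weights are an assumption made purely for illustration; the TOSI methodology may weight dimensions differently:

```python
from dataclasses import dataclass

# Hypothetical equal weights, for illustration only.
WEIGHTS = {"CH": 0.20, "AS": 0.20, "CQ": 0.20, "GV": 0.20, "II": 0.20}

@dataclass
class RubricScores:
    """One 0-100 score per rubric dimension from Appendix B."""
    CH: float  # Community Health
    AS: float  # Adoption Signal
    CQ: float  # Code Quality
    GV: float  # Governance
    II: float  # Innovation Impact

def tosi(scores: RubricScores, weights: dict = WEIGHTS) -> float:
    """Weighted composite of the five dimension scores, on a 0-100 scale."""
    for name, value in vars(scores).items():
        if not 0 <= value <= 100:
            raise ValueError(f"{name} must be in 0-100, got {value}")
    return sum(weights[k] * v for k, v in vars(scores).items())

# Example: a project strong on adoption, weaker on governance.
print(round(tosi(RubricScores(CH=85, AS=95, CQ=80, GV=60, II=75)), 1))
```

Because each dimension is already normalized to 0–100, any convex combination of weights keeps the composite on the same scale, which makes alternative weightings directly comparable.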
