Decision Readiness Index (DRI): Measuring Information Sufficiency for Portfolio Decisions

Posted on March 3, 2026 (updated March 11, 2026)
HPF-P Framework Research · Article 2 of 6
By Oleh Ivchenko · HPF-P is a proprietary methodology under active research development.

📚 Academic Citation: Ivchenko, O. (2026). Decision Readiness Index (DRI): Measuring Information Sufficiency for Portfolio Decisions. ONPU. DOI: 10.5281/zenodo.18845429

Author: Ivchenko, Oleh · Affiliation: Odessa National Polytechnic University · Series: AI Portfolio Optimisation · Year: 2025

Abstract

Effective pharmaceutical portfolio optimization requires not only capable algorithms but also information of sufficient quality to support those algorithms. This paper provides a formal specification of the Decision Readiness Index (DRI), the core diagnostic component of the Holistic Portfolio Framework (HPF). DRI quantifies information sufficiency across five dimensions — data completeness (R1), demand signal quality (R2), risk observability (R3), regulatory clarity (R4), and temporal stability (R5) — producing a composite score between 0 and 1 that governs strategy selection in the HPF framework. We present the formal definitions, measurement procedures, aggregation formula, and threshold calibration for each dimension. Practical scoring examples drawn from pharmaceutical portfolio contexts illustrate how DRI responds to real-world information conditions, including data gaps, demand shocks, and regulatory disruptions.

1. Introduction

The Decision Readiness Index (DRI) addresses a question that portfolio optimization frameworks typically bypass: Is the available information good enough to justify a given optimization approach?

Most optimization literature assumes that data quality is a pre-processing concern — clean the data, then optimize. DRI treats data quality as a first-class modeling input, one that directly determines what kind of optimization is appropriate. A portfolio segment with DRI = 0.85 supports multi-objective AI optimization; the same segment with DRI = 0.25 (due to a recent supply chain disruption that has invalidated historical demand patterns) supports only conservative rebalancing.

This paper provides the complete technical specification of DRI, including formal definitions for each dimension, measurement procedures, and the aggregation formula. We also provide practical examples of DRI scoring in pharmaceutical contexts to illustrate how the index responds to real-world information conditions.

2. DRI Formal Definition

The DRI for a portfolio segment $s$ at time $t$ is defined as:

$$DRI_{s,t} = \sum_{i=1}^{5} w_i \cdot R_i(s, t)$$

where $R_i(s,t) \in [0,1]$ are the five dimension scores and $w_i \geq 0$ are dimension weights with $\sum_{i=1}^{5} w_i = 1$. The default configuration uses equal weights $w_i = 0.2$.

Each dimension score is itself a composite of multiple indicators, detailed below.

graph TD
    A[Portfolio Segment s at time t] --> B[R1: Data Completeness]
    A --> C[R2: Demand Signal Quality]
    A --> D[R3: Risk Observability]
    A --> E[R4: Regulatory Clarity]
    A --> F[R5: Temporal Stability]
    B -->|w1=0.2| G["DRI = Σ wi·Ri ∈ [0,1]"]
    C -->|w2=0.2| G
    D -->|w3=0.2| G
    E -->|w4=0.2| G
    F -->|w5=0.2| G
    style G fill:#1a1a2e,color:#fff
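The aggregation above can be sketched in a few lines of Python. The function name and list-based interface are illustrative, not part of the HPF specification; only the weighted-sum formula and the equal-weight default come from the text.

```python
def dri(scores, weights=None):
    """Weighted sum of the five dimension scores R1..R5, each in [0, 1]."""
    if weights is None:
        weights = [0.2] * 5  # default equal-weight configuration
    if len(scores) != 5 or len(weights) != 5:
        raise ValueError("exactly five dimension scores and weights required")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("dimension weights must sum to 1")
    return sum(w * r for w, r in zip(weights, scores))

# Illustrative segment: strong data and regulatory clarity, low temporal stability
print(round(dri([0.92, 0.78, 0.71, 0.88, 0.14]), 3))  # 0.686
```

Organizations that recalibrate weights (Section 8.1) would pass a custom `weights` list; the sum-to-one check guards against miscalibrated configurations.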

3. Dimension R1: Data Completeness

3.1 Definition

R1 measures the proportion of required data fields that are populated with values meeting quality thresholds. It captures the most basic form of information deficiency: simply not having data.

3.2 Formula

$$R1(s,t) = \frac{1}{|F|} \sum_{f \in F} \mathbb{1}[q_f(s,t) \geq \theta_f]$$

where $F$ is the set of required data fields for segment $s$, $q_f(s,t)$ is the quality score for field $f$, and $\theta_f$ is the quality threshold for field $f$.

Field quality $q_f$ accounts for:

  • Presence: Is the field populated? (binary, weight 0.5)
  • Recency: How old is the data? Fields older than 90 days are penalized exponentially.
  • Range validity: Does the value fall within expected bounds? Out-of-range values degrade quality.
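A minimal sketch of the R1 computation follows. Only the 0.5 presence weight is stated in the text; the 0.25/0.25 split between recency and range validity, and the exponential decay rate beyond 90 days, are assumptions made here for illustration.

```python
import math
from datetime import date

def field_quality(value, last_updated, today, lo, hi,
                  w_presence=0.5, w_recency=0.25, w_range=0.25):
    """Composite quality score q_f for a single field (weights partly assumed)."""
    if value is None:
        return 0.0  # absent field scores zero outright
    age_days = (today - last_updated).days
    # exponential penalty beyond 90 days (decay rate 0.02/day is an assumption)
    recency = 1.0 if age_days <= 90 else math.exp(-0.02 * (age_days - 90))
    in_range = 1.0 if lo <= value <= hi else 0.0
    return w_presence + w_recency * recency + w_range * in_range

def r1(fields, today):
    """fields: list of (value, last_updated, lo, hi, theta_f) tuples."""
    passed = [field_quality(v, ts, today, lo, hi) >= theta
              for v, ts, lo, hi, theta in fields]
    return sum(passed) / len(passed)

today = date(2026, 3, 1)
fields = [
    (1200, date(2026, 2, 20), 0, 10_000, 0.9),  # sales volume: fresh, in range
    (None, date(2025, 6, 1), 0, 5_000, 0.9),    # inventory: missing
]
print(r1(fields, today))  # 0.5
```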

3.3 Critical Fields

For pharmaceutical portfolios, critical fields (with high $\theta_f$) include: sales volume (last 90 days), current inventory level, unit cost, selling price, and product registration status. Non-critical fields (with lower $\theta_f$) include: market share estimates, competitor pricing, and qualitative category descriptors.

3.4 Practical Examples

  • Example A (R1 = 0.92): A mature product with full sales history, current inventory data, up-to-date registration, and recent cost data. Only minor non-critical fields are missing.
  • Example B (R1 = 0.41): A product with sales data interrupted by a 4-month logistics disruption, missing current inventory (warehouse inaccessible), and a pending re-registration. Critical fields are compromised.

4. Dimension R2: Demand Signal Quality

4.1 Definition

R2 measures the reliability of demand forecasts. It is not sufficient to have sales data (captured by R1); that data must produce stable, trustworthy demand signals. R2 captures signal-to-noise dynamics and forecast reliability.

4.2 Formula

R2 is composed of three sub-indices:

$$R2(s,t) = \alpha \cdot SNR(s,t) + \beta \cdot FC_{rel}(s,t) + \gamma \cdot SB_{pen}(s,t)$$

where:

  • $SNR(s,t)$ is the normalized signal-to-noise ratio of the demand series (the inverse of the coefficient of variation, bounded to [0,1])
  • $FC_{rel}(s,t)$ is forecast reliability: the proportion of recent forecasts that fell within ±20% of actual values
  • $SB_{pen}(s,t)$ is a structural break penalty: 1 minus 0.25 for each structural break detected in the past 12 months, floored at 0 (see Section 4.3)
  • Default weights: $\alpha = 0.35$, $\beta = 0.40$, $\gamma = 0.25$
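The three sub-indices can be composed as below. The exact normalization of the inverse coefficient of variation is not specified in the text; a simple clip to [0, 1] is assumed here, and function names are illustrative.

```python
import statistics

def r2(demand, forecast_pct_errors, n_breaks,
       alpha=0.35, beta=0.40, gamma=0.25):
    """R2 from a demand series, recent forecast percentage errors,
    and a structural-break count (default weights per Section 4.2)."""
    cv = statistics.pstdev(demand) / statistics.mean(demand)
    snr = min(1.0, 1.0 / cv) if cv > 0 else 1.0  # clip to [0,1] (assumption)
    # share of recent forecasts within ±20% of actuals
    fc_rel = sum(abs(e) <= 0.20 for e in forecast_pct_errors) / len(forecast_pct_errors)
    sb_pen = max(0.0, 1.0 - 0.25 * n_breaks)  # 0.25 per break, floored at 0
    return alpha * snr + beta * fc_rel + gamma * sb_pen
```

With perfectly stable demand, perfect forecasts, and no breaks, all three sub-indices reach 1.0 and R2 = α + β + γ = 1.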

4.3 Structural Break Detection

Structural breaks in demand series are detected using the Chow test applied to rolling 6-month windows. A break is registered when the F-statistic exceeds the 5% critical value. Each detected break reduces $SB_{pen}$ by 0.25, with a floor of 0.
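The break test above can be sketched as follows. The intercept-plus-trend regression (so k = 2 parameters) is an assumption about the underlying demand model, and the critical value is supplied by the caller (e.g., from published F-tables or `scipy.stats.f.ppf`); only the Chow statistic itself and the 5% level come from the text.

```python
import numpy as np

def chow_f(y, split, k=2):
    """Chow F-statistic for a candidate break at index `split`,
    using an intercept + linear-trend regression (k = 2 parameters)."""
    y = np.asarray(y, dtype=float)

    def rss(segment):
        t = np.arange(len(segment), dtype=float)
        X = np.column_stack([np.ones_like(t), t])
        beta, *_ = np.linalg.lstsq(X, segment, rcond=None)
        resid = segment - X @ beta
        return float(resid @ resid)

    n = len(y)
    rss_pooled = rss(y)                         # single regression over all data
    rss_split = rss(y[:split]) + rss(y[split:]) # separate regressions per regime
    return ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))

# A break is registered when the statistic exceeds the 5% critical value
# of F(k, n - 2k); for n = 12, k = 2 that is F(2, 8) ≈ 4.46.
series = [10, 11, 10, 11, 10, 11, 20, 21, 20, 21, 20, 21]  # level shift at index 6
print(chow_f(series, split=6) > 4.46)  # True
```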

4.4 Practical Examples

  • Example A (R2 = 0.78): Stable demand for a chronic disease medication with consistent monthly sales variance below 15%. Forecast reliability of 82% over the past year. No structural breaks detected.
  • Example B (R2 = 0.29): An OTC product experiencing pandemic-related demand spikes followed by normalization. CV of 68%, forecast reliability of 31%, two structural breaks detected.

5. Dimension R3: Risk Observability

5.1 Definition

R3 measures the degree to which risks affecting the portfolio segment are visible and quantifiable. Unobservable risks — those that cannot be detected from available data — are more dangerous than observable ones, because they cannot be modeled or hedged.

5.2 Formula

$$R3(s,t) = \frac{1}{3}\left[R3_{supply}(s,t) + R3_{competitive}(s,t) + R3_{financial}(s,t)\right]$$

Each sub-index measures the availability of data needed to assess that risk category:

  • $R3_{supply}$: Availability of supplier reliability data, lead time variability, and alternative supplier counts
  • $R3_{competitive}$: Availability of competitor pricing data, market share estimates, and new entrant tracking
  • $R3_{financial}$: Availability of exchange rate data, credit risk indicators, and payment history
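Since each sub-index measures data availability, a simple sketch is to score each risk category as the fraction of its required indicators that are observable, then average. The indicator names below are illustrative, drawn loosely from the lists above.

```python
def availability(indicators):
    """Fraction of required risk indicators that are observable."""
    return sum(indicators.values()) / len(indicators)

def r3(supply, competitive, financial):
    return (availability(supply) + availability(competitive)
            + availability(financial)) / 3

# Loosely following Example B: single-supplier dependence, opaque market
supply = {"supplier_reliability": True, "lead_time_variability": False,
          "alternative_suppliers": False}
competitive = {"competitor_pricing": False, "market_share": False,
               "new_entrant_tracking": False}
financial = {"exchange_rates": True, "credit_risk": False,
             "payment_history": True}
print(round(r3(supply, competitive, financial), 2))  # 0.33
```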

5.3 Practical Examples

  • Example A (R3 = 0.71): A product with three known suppliers (all monitored), stable competitive landscape, and reliable financial settlement history. Most risks are visible.
  • Example B (R3 = 0.22): A product dependent on a single foreign supplier with no alternative sourcing, in a market where competitor data is unavailable and payment delays are systematic.

6. Dimension R4: Regulatory Clarity

6.1 Definition

R4 measures the stability and predictability of the regulatory environment for the portfolio segment. Regulatory uncertainty creates decision-relevant risk that standard financial models cannot capture.

6.2 Formula

$$R4(s,t) = \prod_{j \in J} \left(1 - \lambda_j \cdot u_j(s,t)\right)$$

where $J$ is the set of regulatory risk factors, $u_j(s,t) \in [0,1]$ is the uncertainty level for factor $j$, and $\lambda_j$ is the impact weight for factor $j$.

Regulatory risk factors include: registration status stability, pricing regulation changes, import/export restriction risk, and pharmacovigilance requirement changes.
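The multiplicative form means any single high-impact, high-uncertainty factor can collapse R4 on its own. A minimal sketch, with factor weights and uncertainty levels that are purely illustrative (they are not calibrated values from the paper):

```python
def r4(factors):
    """R4 as a product of (1 - lambda_j * u_j) over regulatory risk factors.
    factors: iterable of (lambda_j impact weight, u_j uncertainty) pairs."""
    score = 1.0
    for lam, u in factors:
        score *= 1.0 - lam * u
    return score

# Illustrative inputs: pending re-registration, recent price-ceiling change,
# and a pharmacovigilance flag
print(round(r4([(0.6, 0.9), (0.4, 0.7), (0.3, 0.5)]), 3))  # 0.282
```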

6.3 Practical Examples

  • Example A (R4 = 0.88): A product with stable multi-year registration, no pending regulatory changes, and a well-established pricing regime.
  • Example B (R4 = 0.31): A product with a registration renewal pending (outcome uncertain), subject to a recent price ceiling change, and flagged for enhanced pharmacovigilance review.

7. Dimension R5: Temporal Stability

7.1 Definition

R5 measures the consistency of the data-generating process over time. Even if data is available and risks are observable, the information may not be decision-relevant if the underlying system has changed structurally. R5 captures this “stationarity” of the information environment.

7.2 Formula

$$R5(s,t) = \exp\left(-\kappa \cdot H(s,t)\right)$$

where $H(s,t)$ is an environmental entropy index (see Article 4 for detailed treatment) and $\kappa$ is a decay parameter (default 1.5). $H(s,t)$ integrates geopolitical disruption signals, macroeconomic instability indicators, and market-specific structural change measures.

This exponential decay ensures that R5 approaches zero rapidly under high-entropy conditions, aggressively penalizing the overall DRI during periods of systemic disruption.
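The decay itself is a one-liner; computing the entropy index $H(s,t)$ is the hard part (deferred to Article 4) and is taken as a given input here.

```python
import math

def r5(entropy, kappa=1.5):
    """Temporal stability as exponential decay in environmental entropy H(s,t).
    kappa defaults to 1.5 per Section 7.2."""
    return math.exp(-kappa * entropy)

print(round(r5(0.0), 2), round(r5(1.0), 2), round(r5(2.0), 2))  # 1.0 0.22 0.05
```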

7.3 Practical Examples

  • Example A (R5 = 0.82): A product in a stable market with consistent economic conditions, no territorial disruption, and predictable seasonal patterns.
  • Example B (R5 = 0.14): A product in a market experiencing active military conflict, with supply routes disrupted, population displacement reducing local demand, and historical patterns entirely invalidated.
xychart-beta
    title "R5 Temporal Stability: Exponential Decay by Environmental Entropy"
    x-axis "Environmental Entropy H(s,t)" [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
    y-axis "R5 Score" 0 --> 1
    line [1.0, 0.74, 0.55, 0.41, 0.30, 0.22, 0.17, 0.12, 0.09, 0.07, 0.05]

8. Aggregation and Thresholds

8.1 Default Aggregation

With equal weights, the DRI is the arithmetic mean of the five dimension scores. Organizations may calibrate weights based on their specific context — for example, upweighting R4 in markets with high regulatory volatility, or upweighting R5 in post-conflict reconstruction contexts.

8.2 DRI Threshold Calibration

The Decision Readiness Level (DRL) thresholds (0.20, 0.40, 0.60, 0.80) are grounded in the optimization literature. Below 0.20, uncertainty is so high that any optimization is likely to produce misleading results. Between 0.20 and 0.40, simple proportional rules can improve on the status quo without requiring reliable forecasts. Between 0.40 and 0.60, linear programming can extract value from partially reliable data with appropriate constraints. Between 0.60 and 0.80, CVaR optimization can manage tail risks that are now observable. Above 0.80, full multi-objective optimization is justified.

These thresholds have been validated against historical portfolio decisions in the Ukrainian pharmaceutical context, where post-hoc analysis shows that optimization decisions made at inappropriate DRI levels (i.e., using a more sophisticated strategy than DRI supports) systematically underperformed their DRL-appropriate alternatives.

graph LR
    A[DRI Score] --> B{"DRI < 0.20?"}
    B -->|Yes| C["DRL 1: Conservative Hold<br/>No Optimization"]
    B -->|No| D{"DRI < 0.40?"}
    D -->|Yes| E["DRL 2: Proportional Rules<br/>Simple Rebalancing"]
    D -->|No| F{"DRI < 0.60?"}
    F -->|Yes| G["DRL 3: Linear Programming<br/>Partially Reliable Data"]
    F -->|No| H{"DRI < 0.80?"}
    H -->|Yes| I["DRL 4: CVaR Optimization<br/>Tail Risk Management"]
    H -->|No| J["DRL 5: Multi-Objective<br/>Full Optimization"]
    style J fill:#10b981,color:#fff
    style C fill:#dc2626,color:#fff
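The threshold mapping above reduces to a small lookup. The strategy labels follow Section 8.2; boundary values are assigned to the higher level, consistent with the "between X and Y" reading of the text.

```python
def drl(dri_score):
    """Map a DRI score in [0, 1] to a Decision Readiness Level (1-5)."""
    for cutoff, level in [(0.20, 1), (0.40, 2), (0.60, 3), (0.80, 4)]:
        if dri_score < cutoff:
            return level
    return 5

STRATEGY = {
    1: "conservative hold (no optimization)",
    2: "proportional rules (simple rebalancing)",
    3: "linear programming (partially reliable data)",
    4: "CVaR optimization (tail risk management)",
    5: "multi-objective optimization (full optimization)",
}

print(drl(0.686), "->", STRATEGY[drl(0.686)])
```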

9. Conclusion

The Decision Readiness Index provides a principled, quantitative foundation for the HPF decision-readiness framework. By decomposing information sufficiency into five orthogonal dimensions and aggregating them into a single actionable score, DRI enables portfolio managers to make explicit the information assumptions underlying their optimization decisions — and to select strategies that are appropriate to current information conditions rather than idealized ones.

References

  • Chow, G. C. (1960). Tests of equality between sets of coefficients in two linear regressions. Econometrica, 28(3), 591–605.
  • Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B, 39(1), 1–22.
  • Little, R. J. A., & Rubin, D. B. (2002). Statistical Analysis with Missing Data (2nd ed.). Wiley.
  • Rockafellar, R. T., & Uryasev, S. (2000). Optimization of conditional value-at-risk. Journal of Risk, 2(3), 21–41.