🔭 OTel AI Inspector
Paste your OpenTelemetry trace JSON to get your AI observability coverage score across layers L1–L4
📊 4-Layer AI Observability Model
L1 · Infrastructure
Classic OTel spans — service.name, http.*, deployment.environment
L2 · Model Behavior
GenAI conventions — gen_ai.system, request.model, usage.tokens
L3 · Semantic Quality
Quality signals — eval.factuality, hallucination_rate, rag.faithfulness
L4 · Business Impact
Business metrics — task_completed, cost_usd, user_satisfaction
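The layer guide above can be sketched in code: classify a span's attribute keys into the four layers and report per-layer coverage. This is a minimal illustrative sketch in Python; the prefix-to-layer mapping and the all-or-nothing scoring rule are assumptions for illustration, not the inspector's exact algorithm.

```python
# Illustrative sketch: map span attribute keys to the four observability
# layers (L1-L4) and compute per-layer coverage. The prefix lists and the
# scoring rule are assumptions, not the tool's exact implementation.

LAYER_PREFIXES = {
    "L1": ["service.", "http.", "deployment."],
    "L2": ["gen_ai."],
    "L3": ["eval.", "hallucination", "rag."],
    "L4": ["task_", "cost_", "user_"],
}

def layer_coverage(attributes):
    """Return {layer: percent}: 100 if any attribute key matches a layer prefix, else 0."""
    keys = list(attributes)
    coverage = {}
    for layer, prefixes in LAYER_PREFIXES.items():
        hit = any(k.startswith(p) for k in keys for p in prefixes)
        coverage[layer] = 100 if hit else 0
    return coverage

span_attrs = {
    "service.name": "chat-api",        # L1 · Infrastructure
    "gen_ai.system": "openai",         # L2 · Model Behavior
    "gen_ai.request.model": "gpt-4",   # L2 · Model Behavior
}
print(layer_coverage(span_attrs))
# → {'L1': 100, 'L2': 100, 'L3': 0, 'L4': 0}
```

A trace with only L1 and L2 attributes, as above, would score 50% overall under a simple mean of the four layers.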
0%
Overall AI Observability Coverage
L1 · Infrastructure
0%
Classic OTel spans, service attributes
L2 · Model Behavior
0%
GenAI semantic conventions
L3 · Semantic Quality
0%
Quality signals, evaluation scores
L4 · Business Impact
0%
Business metrics, user outcomes
📡 API Dependencies
No external APIs required — purely client-side analysis.
OTel trace parsing runs entirely in-browser JavaScript.
No data is sent to external servers.
📋 Release Notes
v1.1.0 (2026-03-09)
• Removed all rounded corners for consistent design
• Added 4-layer model guide at top
• Added example preset buttons (Minimal, Partial, Full)
• Added API Dependencies section
• Added Release Notes section
• Improved mobile responsiveness
v1.0.0 (2026-03-04)
• Initial release
• 4-layer scoring model (L1–L4)
• Support for OTLP JSON, Jaeger, flat span arrays
• Python and Node.js code snippet generation
• Shareable URL with base64-encoded trace
• Dark terminal theme
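The shareable-URL feature listed above (base64-encoded trace) can be sketched as follows. This is a hypothetical illustration in Python; the `#trace=` fragment name and the URL-safe encoding variant are assumptions, not necessarily the tool's actual parameter or encoding choice.

```python
import base64
import json

# Illustrative sketch of a shareable URL carrying a base64-encoded trace.
# The "#trace=" fragment name is an assumption for illustration only.

def encode_trace_url(trace: dict, base_url: str) -> str:
    payload = json.dumps(trace, separators=(",", ":")).encode("utf-8")
    encoded = base64.urlsafe_b64encode(payload).decode("ascii")
    return f"{base_url}#trace={encoded}"

def decode_trace_url(url: str) -> dict:
    encoded = url.split("#trace=", 1)[1]
    return json.loads(base64.urlsafe_b64decode(encoded))

url = encode_trace_url({"spans": []}, "https://example.com/inspector")
assert decode_trace_url(url) == {"spans": []}
```

Carrying the payload in the URL fragment (after `#`) keeps the trace data client-side, since fragments are not sent to the server, which matches the tool's no-external-servers design.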