Medical ML: ScanLab Integration Specifications – Technical Architecture for Ukrainian Healthcare AI

Posted on February 11, 2026








🔧 ScanLab Integration Specifications: Technical Architecture for AI-Enhanced Medical Imaging in Ukraine

Author: Oleh Ivchenko, PhD Candidate

Affiliations: Odessa National Polytechnic University (ONPU) | Stabilarity Hub

Series: Machine Learning for Medical Diagnosis in Ukraine – Article 32 of 35

Date: February 2026

Document Version: 1.0 Technical Specification

Abstract

This technical specification document defines the integration architecture, interface requirements, and implementation standards for deploying artificial intelligence (AI) systems within ScanLab and similar Ukrainian diagnostic imaging facilities. Building upon the pilot program framework established in Article 30 and the comprehensive framework document in Article 31, this specification translates strategic objectives into actionable technical requirements. The document specifies a standards-based integration approach utilizing DICOMweb RESTful services (WADO-RS, STOW-RS, QIDO-RS), HL7 FHIR R4 for clinical data exchange, and IHE profiles, including AI Workflow for Imaging (AI-WI) and AI Results (AIR), for workflow orchestration. The architecture defines a modular inference pipeline supporting both on-premises and hybrid cloud deployment models, with specific consideration for Ukrainian infrastructure constraints including intermittent connectivity and limited GPU availability. Key technical specifications include: per-study model inference targets of <3 seconds for priority studies and <30 seconds for routine processing (end-to-end turnaround targets are defined in Section 6); throughput capacity of ≥500 studies/day per inference node; 99.5% system availability with automated failover; and comprehensive audit logging compliant with Ukrainian Law No. 2297-VI on Personal Data Protection. The document provides detailed API specifications, data flow diagrams, security protocols (TLS 1.3, OAuth 2.0, certificate-based authentication), and performance benchmarks derived from international deployments. Implementation guidance includes hardware specifications, network topology requirements, and integration test protocols that enable systematic validation before clinical deployment. These specifications serve as the definitive technical reference for ScanLab AI integration, ensuring interoperability, scalability, and regulatory compliance while accommodating the operational realities of Ukrainian healthcare delivery.

<3 Seconds
Target Inference Latency for Priority/Emergency Studies

1. Introduction

The successful deployment of AI-assisted diagnostic systems in medical imaging requires more than algorithm accuracy; it demands seamless integration with existing clinical infrastructure. As established throughout this research series, the gap between promising AI research results and sustainable clinical deployment often stems not from algorithmic limitations but from integration failures: systems that cannot communicate with existing PACS, workflows that disrupt rather than enhance radiologist productivity, and architectures that cannot scale to meet institutional demands.

This specification document addresses these challenges by providing a comprehensive technical blueprint for AI integration at ScanLab and similar Ukrainian diagnostic imaging facilities. The specifications are grounded in international standards (primarily DICOM, HL7 FHIR, and IHE integration profiles) while explicitly accommodating Ukrainian operational realities including infrastructure variability, regulatory requirements, and wartime operational continuity needs.

1.1 Scope and Objectives

This document specifies:

Specification Scope

  1. System Architecture: Component topology, deployment models, and scalability patterns
  2. Interface Specifications: API definitions for DICOM, HL7 FHIR, and custom REST endpoints
  3. Data Flow Protocols: Image acquisition, inference pipeline, and results delivery workflows
  4. Security Requirements: Authentication, encryption, audit logging, and access control specifications
  5. Performance Standards: Latency, throughput, availability, and resource utilization targets
  6. Integration Test Protocols: Validation procedures for pre-deployment verification
  7. Ukrainian Adaptations: Infrastructure-specific considerations and regulatory compliance requirements

1.2 Standards Foundation

This specification mandates adherence to established healthcare interoperability standards, ensuring vendor-neutral integration and future extensibility:

| Standard | Version | Application | Reference |
| --- | --- | --- | --- |
| DICOM | PS3.2024c | Medical image format, storage, retrieval, communication | dicomstandard.org |
| DICOMweb | PS3.18 | RESTful web services for DICOM objects | PS3.18-2024c |
| HL7 FHIR | R4 (4.0.1) | Clinical data exchange, ImagingStudy resources | hl7.org/fhir/R4 |
| IHE RAD AI-WI | Trial Implementation | AI Workflow for Imaging profile | ihe.net/Radiology |
| IHE RAD AIR | Trial Implementation | AI Results presentation and integration | ihe.net/Radiology |
| OAuth 2.0 | RFC 6749/6750 | Authorization framework | tools.ietf.org |
| TLS | 1.3 (RFC 8446) | Transport layer security | tools.ietf.org |
7 Standards
Core Interoperability Standards Mandated for ScanLab Integration

2. System Architecture

The ScanLab AI integration architecture follows a modular, microservices-based design that enables independent scaling of components, supports both on-premises and hybrid cloud deployment, and maintains operational continuity during partial system failures.

2.1 High-Level Architecture Overview

flowchart TB
  subgraph External["External Systems"]
    MOD[Imaging Modalities CT/MRI/X-ray/US]
    RIS[Radiology Information System]
    EHR[Electronic Health Record System]
  end
  subgraph Gateway["Integration Gateway Layer"]
    DICOMr[DICOM Router / Orchestrator]
    HL7E[HL7 FHIR Engine]
    MWL[Modality Worklist]
  end
  subgraph Core["AI Processing Core"]
    QUEUE[Task Queue Manager]
    INF1[Inference Node 1 GPU Accelerated]
    INF2[Inference Node 2 GPU Accelerated]
    INFN[Inference Node N Scalable]
    MODEL[Model Repository]
  end
  subgraph Storage["Data Storage Layer"]
    PACS[(PACS/VNA Archive)]
    FHIR[(FHIR Repository)]
    AUDIT[(Audit Log Database)]
  end
  subgraph Presentation["Results Presentation"]
    VIEWER[PACS Workstation]
    REPORT[Structured Report Generator]
    DASH[Monitoring Dashboard]
  end
  MOD -->|DICOM| DICOMr
  RIS -->|HL7 v2/FHIR| HL7E
  EHR -->|FHIR R4| HL7E
  DICOMr -->|WADO-RS| QUEUE
  HL7E -->|Task Context| QUEUE
  MWL <-->|Worklist| DICOMr
  QUEUE --> INF1
  QUEUE --> INF2
  QUEUE --> INFN
  MODEL --> INF1
  MODEL --> INF2
  MODEL --> INFN
  INF1 -->|STOW-RS| PACS
  INF2 -->|STOW-RS| PACS
  INFN -->|STOW-RS| PACS
  INF1 -->|DICOM SR| REPORT
  DICOMr -->|C-STORE| PACS
  HL7E --> FHIR
  PACS --> VIEWER
  REPORT --> VIEWER
  QUEUE --> AUDIT
  INF1 --> AUDIT
  AUDIT --> DASH

2.2 Component Specifications

2.2.1 Integration Gateway Layer

The Integration Gateway serves as the primary interface between existing clinical systems and the AI processing infrastructure. It implements protocol translation, message routing, and workflow orchestration.

| Component | Function | Protocols | Specifications |
| --- | --- | --- | --- |
| DICOM Router/Orchestrator | Routes incoming DICOM objects, manages AI task creation | DICOM DIMSE, DICOMweb | ≥1000 concurrent associations; C-STORE, C-FIND, C-MOVE SCP |
| HL7 FHIR Engine | Processes clinical context, patient demographics | HL7 v2.x, FHIR R4 | ADT, ORM/ORU processing; FHIR ImagingStudy, DiagnosticReport |
| Modality Worklist Server | Provides scheduled procedure information to modalities | DICOM MWL | C-FIND SCP for worklist queries |
| Task Orchestrator | Implements IHE AI-WI profile task management | FHIR Task, REST | Task creation, status tracking, priority queuing |

2.2.2 AI Processing Core

The processing core executes AI inference pipelines on medical images. The architecture supports horizontal scaling through containerized inference nodes managed by Kubernetes orchestration.

Inference Node Specifications

Compute: NVIDIA RTX 4090 or A100 (40GB) GPU; 32+ CPU cores; 128 GB RAM minimum
Container Runtime: Docker 24.x with NVIDIA Container Toolkit; Kubernetes 1.28+
Inference Framework: NVIDIA Triton Inference Server 2.40+ or MONAI Deploy App SDK 0.6+
Model Formats: ONNX, TensorRT, PyTorch TorchScript, TensorFlow SavedModel
Storage: NVMe SSD ≥1 TB for model cache; shared storage for input/output
Network: 10 Gbps minimum; low-latency interconnect for distributed inference
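
To make the interaction with an inference node concrete, the following sketch (not part of the formal specification) shows how an orchestration component might verify that a Triton-hosted model is ready before dispatching a task. It assumes the tritonclient Python package is available; the server URL and model name are illustrative placeholders.

# Sketch: readiness probe against a Triton Inference Server before task dispatch.
# Assumes the tritonclient package; URL and model name are placeholders.
import tritonclient.http as triton_http

client = triton_http.InferenceServerClient(url="inference-node-1:8000")

if client.is_server_ready() and client.is_model_ready("lung-nodule-detection"):
    # Model metadata lists the declared input/output tensors for the pipeline.
    metadata = client.get_model_metadata("lung-nodule-detection")
    print("Model ready on platform:", metadata.get("platform"))
else:
    print("Inference node not ready; leaving task in queue")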

2.2.3 Model Repository

The Model Repository maintains versioned AI models with metadata, validation certificates, and deployment configurations:

model-repository/
├── chest-xray-detection/
│   ├── 1/                     # Version 1
│   │   ├── model.onnx
│   │   ├── config.pbtxt
│   │   └── metadata.json
│   ├── 2/                     # Version 2 (current)
│   │   ├── model.plan         # TensorRT optimized
│   │   ├── config.pbtxt
│   │   ├── metadata.json
│   │   └── validation/
│   │       ├── performance_report.json
│   │       └── regulatory_clearance.pdf
│   └── model_info.json
├── ct-lung-nodule/
│   └── ...
└── mri-brain-segmentation/
    └── ...

2.3 Deployment Models

ScanLab integration supports three deployment configurations to accommodate varying infrastructure capabilities:

flowchart LR
  subgraph OnPrem["On-Premises Deployment"]
    direction TB
    OP_GW[Gateway]
    OP_INF[Inference Cluster]
    OP_PACS[(PACS)]
    OP_GW --> OP_INF
    OP_INF --> OP_PACS
  end
  subgraph Hybrid["Hybrid Cloud Deployment"]
    direction TB
    HY_GW[Gateway]
    HY_LOCAL[Local Inference]
    HY_CLOUD[Cloud Inference]
    HY_PACS[(PACS)]
    HY_GW --> HY_LOCAL
    HY_GW -->|VPN/TLS| HY_CLOUD
    HY_LOCAL --> HY_PACS
    HY_CLOUD -->|Results| HY_GW
  end
  subgraph Cloud["Cloud-Native Deployment"]
    direction TB
    CL_GW[Edge Gateway]
    CL_PROC[Cloud Processing]
    CL_STORE[(Cloud Archive)]
    CL_GW -->|DICOMweb| CL_PROC
    CL_PROC --> CL_STORE
  end

| Model | Infrastructure Requirement | Latency Profile | Ukrainian Suitability |
| --- | --- | --- | --- |
| On-Premises | Full GPU cluster on-site | Lowest latency (<3 s) | ✅ Best for large urban centers with stable power/network |
| Hybrid | Edge processing + cloud burst | Medium (3-15 s depending on routing) | ✅ Recommended for ScanLab: resilience with cost efficiency |
| Cloud-Native | Minimal on-site infrastructure | Higher latency (10-30 s) | ⚠️ Depends on reliable connectivity; backup for emergencies |

🇺🇦 Ukrainian Infrastructure Considerations

Recommended: Hybrid Deployment Model

  • Primary pathway: On-premises inference for routine studies ensures operation during network disruptions
  • Cloud burst: Overflow processing during high-volume periods via secure VPN to Azure/GCP Ukraine regions (when available) or EU data centers
  • Offline mode: Gateway queues studies during connectivity loss; processes backlog when restored
  • Power resilience: UPS with minimum 30-minute runtime for graceful shutdown; generator backup recommended

3. Interface Specifications

3.1 DICOMweb Services

The integration implements DICOMweb RESTful services for standards-based image exchange. All services require TLS 1.3 encryption and OAuth 2.0 bearer token authentication.

3.1.1 WADO-RS (Web Access to DICOM Objects - RESTful)

WADO-RS enables retrieval of DICOM studies, series, and instances for AI processing:

GET /dicomweb/studies/{studyInstanceUID}

Purpose: Retrieve complete study for AI inference
Accept Headers: multipart/related; type="application/dicom" (instances) or application/dicom+json (metadata only)
Response: Multipart DICOM instances or JSON metadata
Performance Target: ≥100 MB/s throughput for bulk retrieval

GET /dicomweb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}/frames/{frameList}

Purpose: Retrieve specific frames for targeted analysis
Accept Headers: multipart/related; type="application/octet-stream" or image/jpeg, image/png (rendered)
Use Case: Multi-frame CT/MRI series; specific slice retrieval
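
For orientation, the following sketch shows how an inference node could retrieve a study over WADO-RS using the open-source dicomweb-client package. The base URL and bearer token are placeholders; token acquisition would follow the OAuth 2.0 flow defined in Section 5.

# Sketch: retrieve a study via WADO-RS with dicomweb-client (assumed available).
# Base URL, study UID, and token are illustrative placeholders.
from dicomweb_client.api import DICOMwebClient

client = DICOMwebClient(
    url="https://pacs.scanlab.ua/dicomweb",
    headers={"Authorization": "Bearer <oauth_token>"},
)

# Returns a list of pydicom Dataset objects covering the whole study.
instances = client.retrieve_study(
    study_instance_uid="1.2.804.114.ScanLab.2026.1.12345"
)
print(f"Retrieved {len(instances)} instances for preprocessing")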

3.1.2 STOW-RS (Store Over the Web - RESTful)

STOW-RS enables AI systems to store results (secondary captures, segmentation masks, structured reports) back to PACS:

POST /dicomweb/studies

Purpose: Store AI-generated DICOM objects
Content-Type: multipart/related; type="application/dicom"
Supported SOP Classes: Secondary Capture, Segmentation Storage, Comprehensive SR, Enhanced SR
Response: XML/JSON with stored instance references
# Example STOW-RS Request for AI Segmentation Result
POST /dicomweb/studies HTTP/1.1
Host: pacs.scanlab.ua
Authorization: Bearer <oauth_token>
Content-Type: multipart/related; type="application/dicom"; boundary=myboundary
Accept: application/dicom+json

--myboundary
Content-Type: application/dicom

<DICOM Segmentation Object - Binary>
--myboundary--

3.1.3 QIDO-RS (Query based on ID for DICOM Objects - RESTful)

QIDO-RS enables querying for studies matching specific criteria, essential for worklist management and retrospective analysis:

GET /dicomweb/studies?PatientID={id}&StudyDate={date}&ModalitiesInStudy={modality}

Purpose: Query studies for processing queue population
Supported Parameters: PatientID, PatientName, StudyDate, AccessionNumber, ModalitiesInStudy, StudyDescription
Response Format: application/dicom+json (array of matching studies)
Pagination: limit, offset parameters for large result sets
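
A corresponding query sketch, again using dicomweb-client with placeholder values, illustrates how the task orchestrator could populate its processing queue from QIDO-RS search results:

# Sketch: QIDO-RS search for one day's CT studies (values are illustrative).
from dicomweb_client.api import DICOMwebClient

client = DICOMwebClient(
    url="https://pacs.scanlab.ua/dicomweb",
    headers={"Authorization": "Bearer <oauth_token>"},
)

# QIDO-RS returns study-level metadata as DICOM JSON dictionaries.
studies = client.search_for_studies(
    search_filters={"StudyDate": "20260211", "ModalitiesInStudy": "CT"},
    limit=100,
)
for study in studies:
    # Tag (0020,000D) is StudyInstanceUID in the DICOM JSON model.
    uid = study["0020000D"]["Value"][0]
    print("Queue candidate:", uid)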

3.2 HL7 FHIR R4 Resources

FHIR resources provide clinical context for AI processing and enable structured results delivery:

3.2.1 ImagingStudy Resource

{
  "resourceType": "ImagingStudy",
  "id": "scanlab-study-12345",
  "identifier": [{
    "system": "urn:dicom:uid",
    "value": "urn:oid:1.2.804.114.ScanLab.2026.1.12345"
  }],
  "status": "available",
  "subject": {
    "reference": "Patient/ua-patient-67890"
  },
  "started": "2026-02-11T08:30:00+02:00",
  "numberOfSeries": 3,
  "numberOfInstances": 245,
  "modality": [{
    "system": "http://dicom.nema.org/resources/ontology/DCM",
    "code": "CT"
  }],
  "procedureCode": [{
    "coding": [{
      "system": "http://loinc.org",
      "code": "24627-2",
      "display": "CT Chest"
    }]
  }],
  "endpoint": [{
    "reference": "Endpoint/scanlab-dicomweb"
  }],
  "series": [...]
}

3.2.2 Task Resource (IHE AI-WI Profile)

The Task resource implements the IHE AI Workflow for Imaging profile, managing AI processing requests:

{
  "resourceType": "Task",
  "id": "ai-task-98765",
  "status": "in-progress",
  "intent": "order",
  "priority": "urgent",
  "code": {
    "coding": [{
      "system": "http://scanlab.ua/ai-tasks",
      "code": "lung-nodule-detection",
      "display": "AI Lung Nodule Detection"
    }]
  },
  "focus": {
    "reference": "ImagingStudy/scanlab-study-12345"
  },
  "for": {
    "reference": "Patient/ua-patient-67890"
  },
  "authoredOn": "2026-02-11T08:31:00+02:00",
  "lastModified": "2026-02-11T08:31:15+02:00",
  "requester": {
    "reference": "Device/scanlab-dicom-router"
  },
  "owner": {
    "reference": "Device/inference-node-1"
  },
  "input": [{
    "type": {
      "coding": [{
        "system": "http://hl7.org/fhir/uv/imaging-ai/CodeSystem/ai-input-type",
        "code": "imaging-study"
      }]
    },
    "valueReference": {
      "reference": "ImagingStudy/scanlab-study-12345"
    }
  }],
  "output": []
}

3.2.3 DiagnosticReport Resource (AI Results)

AI findings are encoded as DiagnosticReport resources linked to Observation resources for discrete findings:

flowchart LR
  subgraph FHIR["FHIR Resources"]
    DR[DiagnosticReport AI Analysis Report]
    OBS1[Observation Nodule Finding 1]
    OBS2[Observation Nodule Finding 2]
    IS[ImagingStudy Source Study]
    MEDIA[Media Annotated Image]
  end
  DR -->|result| OBS1
  DR -->|result| OBS2
  DR -->|imagingStudy| IS
  DR -->|media| MEDIA
  OBS1 -->|derivedFrom| IS
  OBS2 -->|derivedFrom| IS

3.3 AI Inference API

The inference API provides endpoints for submitting studies to AI models and retrieving results:

3.3.1 Inference Request

POST /api/v1/inference
# Request Body
{
  "study_instance_uid": "1.2.804.114.ScanLab.2026.1.12345",
  "model_id": "lung-nodule-detection",
  "model_version": "2",
  "priority": "normal",            // "emergency" | "urgent" | "normal" | "routine"
  "callback_url": "https://pacs.scanlab.ua/ai-results/callback",
  "parameters": {
    "confidence_threshold": 0.7,
    "output_formats": ["dicom_sr", "dicom_seg", "fhir_observation"],
    "include_heatmaps": true
  },
  "clinical_context": {
    "indication": "Follow-up lung nodule surveillance",
    "patient_history": "Previous nodule detected 6 months ago"
  }
}
# Response
{
  "task_id": "ai-task-98765",
  "status": "accepted",
  "estimated_completion_seconds": 25,
  "queue_position": 3,
  "links": {
    "status": "/api/v1/inference/ai-task-98765/status",
    "results": "/api/v1/inference/ai-task-98765/results",
    "cancel": "/api/v1/inference/ai-task-98765/cancel"
  }
}

3.3.2 Inference Status

GET /api/v1/inference/{task_id}/status
{
  "task_id": "ai-task-98765",
  "status": "completed",           // "queued" | "preprocessing" | "inferring" | "postprocessing" | "completed" | "failed"
  "progress_percent": 100,
  "started_at": "2026-02-11T08:31:15+02:00",
  "completed_at": "2026-02-11T08:31:38+02:00",
  "inference_time_ms": 2847,
  "total_time_ms": 23124,
  "model_info": {
    "id": "lung-nodule-detection",
    "version": "2",
    "regulatory_status": "CE-marked Class IIa"
  }
}

3.3.3 Inference Results

GET /api/v1/inference/{task_id}/results
{
  "task_id": "ai-task-98765",
  "study_instance_uid": "1.2.804.114.ScanLab.2026.1.12345",
  "findings": [
    {
      "finding_id": "f001",
      "type": "pulmonary_nodule",
      "confidence": 0.94,
      "location": {
        "series_instance_uid": "1.2.804.114.ScanLab.2026.1.12345.1",
        "instance_number": 87,
        "coordinates": {"x": 234, "y": 156, "z": 87},
        "anatomical_region": {
          "system": "http://radlex.org",
          "code": "RID1302",
          "display": "Right lower lobe"
        }
      },
      "measurements": {
        "diameter_mm": 8.3,
        "volume_mm3": 302.4
      },
      "characteristics": {
        "texture": "part-solid",
        "margin": "spiculated",
        "lung_rads_category": "4A"
      }
    }
  ],
  "output_references": {
    "dicom_sr": {
      "sop_instance_uid": "1.2.804.114.ScanLab.2026.1.12345.SR.1",
      "retrieve_url": "/dicomweb/studies/.../series/.../instances/..."
    },
    "dicom_segmentation": {
      "sop_instance_uid": "1.2.804.114.ScanLab.2026.1.12345.SEG.1",
      "retrieve_url": "/dicomweb/studies/.../series/.../instances/..."
    },
    "fhir_diagnostic_report": {
      "reference": "DiagnosticReport/ai-report-98765",
      "retrieve_url": "/fhir/DiagnosticReport/ai-report-98765"
    }
  },
  "quality_metrics": {
    "input_quality_score": 0.92,
    "artifacts_detected": false,
    "coverage_complete": true
  }
}
3 Output Formats
DICOM SR + DICOM Segmentation + FHIR DiagnosticReport for Maximum Interoperability
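
As a usage illustration of the endpoints above, the following sketch submits a study, polls the status endpoint, and fetches findings. It assumes the request and response shapes defined in Sections 3.3.1 through 3.3.3; the host name and token are placeholders.

# Sketch: submit a study to the inference API, poll status, and fetch findings.
# Host, token, and UID values are placeholders; payload follows Section 3.3.1.
import time
import requests

BASE = "https://ai.scanlab.ua/api/v1"       # placeholder inference API host
HEADERS = {"Authorization": "Bearer <oauth_token>"}

request_body = {
    "study_instance_uid": "1.2.804.114.ScanLab.2026.1.12345",
    "model_id": "lung-nodule-detection",
    "model_version": "2",
    "priority": "normal",
    "parameters": {
        "confidence_threshold": 0.7,
        "output_formats": ["dicom_sr", "dicom_seg", "fhir_observation"],
    },
}

task = requests.post(f"{BASE}/inference", json=request_body, headers=HEADERS).json()
task_id = task["task_id"]

# Poll until the task reaches a terminal state (status values from Section 3.3.2).
while True:
    status = requests.get(f"{BASE}/inference/{task_id}/status", headers=HEADERS).json()
    if status["status"] in ("completed", "failed"):
        break
    time.sleep(2)

if status["status"] == "completed":
    results = requests.get(f"{BASE}/inference/{task_id}/results", headers=HEADERS).json()
    for finding in results["findings"]:
        print(finding["type"], finding["confidence"], finding["measurements"]["diameter_mm"])

In production the callback_url mechanism would typically replace polling, but the polling form is easier to validate during integration testing.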

4. Data Flow Specifications

4.1 Study Acquisition and Processing Pipeline

The following sequence diagram illustrates the complete data flow from image acquisition through AI processing to results presentation:

sequenceDiagram
  participant MOD as Modality (CT/MRI)
  participant ROUTER as DICOM Router
  participant QUEUE as Task Queue
  participant INF as Inference Node
  participant MODEL as Model Repository
  participant PACS as PACS Archive
  participant WS as PACS Workstation
  participant AUDIT as Audit Log
  Note over MOD,AUDIT: Study Acquisition Phase
  MOD->>ROUTER: C-STORE (DICOM images)
  ROUTER->>PACS: Forward C-STORE
  ROUTER->>AUDIT: Log: Study received
  ROUTER->>ROUTER: Apply routing rules
  Note over MOD,AUDIT: AI Task Creation Phase
  ROUTER->>QUEUE: Create AI Task (FHIR Task)
  QUEUE->>QUEUE: Prioritize by urgency
  QUEUE->>AUDIT: Log: Task created
  Note over MOD,AUDIT: Inference Phase
  QUEUE->>INF: Assign task
  INF->>PACS: WADO-RS retrieve study
  PACS-->>INF: Return DICOM instances
  INF->>MODEL: Load model (if not cached)
  MODEL-->>INF: Return model weights
  INF->>INF: Preprocess images
  INF->>INF: Run inference
  INF->>INF: Postprocess results
  INF->>AUDIT: Log: Inference complete
  Note over MOD,AUDIT: Results Storage Phase
  INF->>PACS: STOW-RS (DICOM SR)
  INF->>PACS: STOW-RS (Segmentation)
  PACS-->>INF: Confirm storage
  INF->>QUEUE: Update task: completed
  Note over MOD,AUDIT: Results Presentation Phase
  QUEUE->>WS: Notify: AI results available
  WS->>PACS: Retrieve AI results
  PACS-->>WS: Return SR + Segmentation
  WS->>WS: Display with source images

4.2 Priority-Based Routing

The system implements intelligent routing based on study characteristics and clinical urgency:

| Priority Level | Trigger Criteria | Target SLA | Queue Behavior |
| --- | --- | --- | --- |
| Emergency (P1) | ED patient; "STAT" order; stroke/trauma protocol | <60 seconds | Preempts all; dedicated fast-track node |
| Urgent (P2) | Inpatient; oncology staging; same-day result needed | <3 minutes | Jump to front of normal queue |
| Normal (P3) | Standard outpatient studies | <30 minutes | FIFO within priority band |
| Routine (P4) | Screening programs; batch retrospective analysis | <4 hours | Off-peak processing preferred |

flowchart TD
  START[Study Received] --> CHECK{Priority Assessment}
  CHECK -->|ED/STAT/Stroke| P1[Emergency Queue P1]
  CHECK -->|Inpatient/Oncology| P2[Urgent Queue P2]
  CHECK -->|Outpatient Standard| P3[Normal Queue P3]
  CHECK -->|Screening/Batch| P4[Routine Queue P4]
  P1 --> FAST[Fast-Track Inference Node]
  P2 --> STANDARD[Standard Inference Pool]
  P3 --> STANDARD
  P4 --> BATCH[Batch Processing Off-Peak]
  FAST -->|<60s| RESULT[Results to PACS]
  STANDARD -->|<30min| RESULT
  BATCH -->|<4h| RESULT
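
A minimal sketch of the priority-assessment step in the flow above; the trigger criteria mirror the routing table in this section, while the field names of the clinical context record are assumptions introduced only for illustration.

# Sketch: map clinical context to a routing priority (P1-P4) per Section 4.2.
# The context field names (patient_class, order_priority, ...) are illustrative.
def assess_priority(context: dict) -> str:
    if (context.get("order_priority") == "STAT"
            or context.get("protocol") in ("stroke", "trauma")
            or context.get("patient_class") == "emergency"):
        return "P1"   # Emergency: <60 s SLA, fast-track node
    if context.get("patient_class") == "inpatient" or context.get("oncology_staging"):
        return "P2"   # Urgent: <3 min SLA
    if context.get("screening") or context.get("batch"):
        return "P4"   # Routine: <4 h SLA, off-peak processing
    return "P3"       # Normal outpatient: <30 min SLA

print(assess_priority({"patient_class": "emergency", "order_priority": "STAT"}))  # P1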

4.3 Results Encoding Specifications

4.3.1 DICOM Structured Report (SR)

AI findings are encoded in DICOM SR using the Comprehensive SR SOP Class with TID 1500 (Measurement Report):

SOP Class UID: 1.2.840.10008.5.1.4.1.1.88.33 (Comprehensive SR)
Template: TID 1500 Measurement Report
Coding Schemes: RadLex (RID), SNOMED CT (SCT), LOINC (LN)
Image References: SCOORD/SCOORD3D for spatial coordinates
Measurements: NUM value types with UCUM units

4.3.2 DICOM Segmentation

Anatomical segmentation masks are stored as DICOM Segmentation objects:

SOP Class UID: 1.2.840.10008.5.1.4.1.1.66.4 (Segmentation Storage)
Segment Type: BINARY or FRACTIONAL (probability maps)
Algorithm Type: AUTOMATIC with Segmentation Algorithm Identification sequence
Reference Images: Referenced Series Sequence linking to source images

5. Security Specifications

5.1 Authentication and Authorization

sequenceDiagram
  participant Client
  participant Gateway as API Gateway
  participant Auth as OAuth 2.0 Server
  participant API as AI Service
  Client->>Auth: Request Token (client_credentials)
  Auth-->>Client: Access Token (JWT)
  Client->>Gateway: API Request + Bearer Token
  Gateway->>Gateway: Validate JWT signature
  Gateway->>Gateway: Check token claims/scopes
  alt Token Valid
    Gateway->>API: Forward request
    API-->>Gateway: Response
    Gateway-->>Client: Return response
  else Token Invalid/Expired
    Gateway-->>Client: 401 Unauthorized
  end

| Security Control | Specification | Implementation |
| --- | --- | --- |
| Transport Security | TLS 1.3 mandatory; TLS 1.2 minimum | All endpoints; mutual TLS for DICOM nodes |
| Authentication | OAuth 2.0 with JWT tokens | client_credentials grant for system-to-system |
| Authorization | Role-based access control (RBAC) | Scopes: dicom:read, dicom:write, inference:execute |
| Certificate Authentication | X.509 certificates for DICOM associations | AE Title + certificate mapping |
| Audit Logging | IHE ATNA-compliant audit trail | All access events; immutable storage |
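
The sketch below obtains a system-to-system access token using the client_credentials grant described above (RFC 6749, Section 4.4); the token endpoint URL, client identifier, secret, and scope values are placeholders.

# Sketch: OAuth 2.0 client_credentials token request (RFC 6749, Section 4.4).
# Token endpoint, client credentials, and scopes are illustrative placeholders.
import requests

token_response = requests.post(
    "https://auth.scanlab.ua/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "scope": "dicom:read dicom:write inference:execute",
    },
    auth=("inference-node-1", "<client_secret>"),  # HTTP Basic client authentication
    timeout=10,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# The bearer token is then attached to every DICOMweb, FHIR, and inference call.
headers = {"Authorization": f"Bearer {access_token}"}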

5.2 Data Protection Requirements

⚠️ Ukrainian Data Protection Compliance

All processing must comply with:

  • Law of Ukraine No. 2297-VI "On Personal Data Protection"
  • Order No. 1110 of MHSU (Electronic Health Records regulation)
  • GDPR principles (as Ukraine harmonizes with EU regulations)

Key requirements:

  • Patient consent documentation before AI processing
  • Data localization: Primary storage within Ukraine; cloud processing only to approved jurisdictions
  • Right to explanation: Patients may request details on AI involvement in their diagnosis
  • Breach notification: 72-hour notification requirement to Data Protection Authority

5.3 Audit Log Specification

All system events are logged in an immutable audit trail conforming to IHE ATNA (Audit Trail and Node Authentication):

{
  "event_id": "audit-2026-02-11-083115-001",
  "timestamp": "2026-02-11T08:31:15.234+02:00",
  "event_type": "AI_INFERENCE_INITIATED",
  "event_outcome": "SUCCESS",
  "actor": {
    "user_id": "system:dicom-router",
    "user_role": "SYSTEM",
    "network_access_point": "192.168.10.15"
  },
  "patient": {
    "patient_id": "UA-PAT-67890",
    "id_type": "National Health ID"
  },
  "study": {
    "study_instance_uid": "1.2.804.114.ScanLab.2026.1.12345",
    "accession_number": "SL-2026-001234"
  },
  "ai_processing": {
    "model_id": "lung-nodule-detection",
    "model_version": "2",
    "task_id": "ai-task-98765"
  },
  "source_application": {
    "ae_title": "SCANLAB_ROUTER",
    "application_name": "ScanLab DICOM Router v2.1"
  }
}

6. Performance Specifications

6.1 Performance Targets

| Metric | Target | Measurement Method | Acceptance Threshold |
| --- | --- | --- | --- |
| Emergency Inference Latency | <60 seconds | Task creation → Results stored | 95th percentile |
| Priority Inference Latency | <3 minutes | Task creation → Results stored | 95th percentile |
| Normal Inference Latency | <30 minutes | Task creation → Results stored | 95th percentile |
| Throughput | ≥500 studies/day/node | Studies completed per 24 h | Average over 7 days |
| System Availability | 99.5% | Uptime monitoring | Monthly average |
| DICOM Retrieval Throughput | ≥100 MB/s | WADO-RS bulk retrieval | Sustained rate |
| API Response Time | <200 ms | Non-inference REST endpoints | 99th percentile |
| Queue Depth Alert | <50 pending | Normal priority queue | Trigger scaling at threshold |
99.5%
Target System Availability with Automated Failover

6.2 Scalability Specifications

The architecture supports horizontal scaling through Kubernetes-based orchestration:

flowchart LR
  subgraph AutoScale["Auto-Scaling Logic"]
    METRICS[Queue Depth Latency Metrics]
    HPA[Horizontal Pod Autoscaler]
    SCALE[Scale Decision]
  end
  subgraph Nodes["Inference Pool"]
    N1[Node 1 GPU]
    N2[Node 2 GPU]
    N3[Node 3 GPU]
    NN[Node N Elastic]
  end
  METRICS --> HPA
  HPA --> SCALE
  SCALE -->|Scale Up| NN
  SCALE -->|Scale Down| N3

Minimum Nodes: 2 inference nodes (high availability)
Maximum Nodes: Configurable; limited by GPU availability (on-prem) or budget (cloud)
Scale-Up Trigger: Queue depth >50 OR average latency >target × 1.5
Scale-Down Trigger: Queue depth <10 AND average latency <target × 0.5 for 15 minutes
Cold Start Time: <90 seconds for new inference node initialization
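
For clarity, the scale-up and scale-down triggers above can be expressed directly as a decision function; the sketch assumes metrics are sampled periodically and that the duration of the current low-load window is tracked by the caller.

# Sketch: scaling decision implementing the triggers listed in Section 6.2.
def scaling_decision(queue_depth: int, avg_latency_s: float,
                     target_latency_s: float, low_load_minutes: float) -> str:
    # Scale up: queue depth > 50 OR average latency > target x 1.5
    if queue_depth > 50 or avg_latency_s > target_latency_s * 1.5:
        return "scale_up"
    # Scale down: queue depth < 10 AND latency < target x 0.5, sustained for 15 minutes
    if queue_depth < 10 and avg_latency_s < target_latency_s * 0.5 and low_load_minutes >= 15:
        return "scale_down"
    return "hold"

print(scaling_decision(queue_depth=60, avg_latency_s=45.0,
                       target_latency_s=30.0, low_load_minutes=0))  # scale_up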

6.3 Failure Handling and Resilience

flowchart TD
  START[Task Submitted] --> ATTEMPT1{Attempt 1}
  ATTEMPT1 -->|Success| DONE[Complete]
  ATTEMPT1 -->|Failure| RETRY1[Wait 5s]
  RETRY1 --> ATTEMPT2{Attempt 2 Different Node}
  ATTEMPT2 -->|Success| DONE
  ATTEMPT2 -->|Failure| RETRY2[Wait 15s]
  RETRY2 --> ATTEMPT3{Attempt 3 Different Node}
  ATTEMPT3 -->|Success| DONE
  ATTEMPT3 -->|Failure| FAIL[Mark Failed Alert Operations]
  FAIL --> MANUAL[Manual Review Required]

| Failure Type | Detection | Recovery Action | Max Attempts |
| --- | --- | --- | --- |
| Inference Node Failure | Health check timeout (30 s) | Reassign to healthy node | 3 |
| Model Load Failure | Model initialization timeout | Retry with fallback version | 2 |
| PACS Connectivity | DICOM association failure | Queue task; exponential backoff | 5 |
| Out of Memory | GPU OOM exception | Route to node with more VRAM | 2 |
| Invalid Input Data | Preprocessing validation | Mark failed; no retry | 1 |
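
A simplified sketch of the retry sequence in the flowchart above: three attempts across different nodes with 5-second and 15-second waits. The run_on_node() and pick_healthy_node() callables are placeholders for the queue manager's internals.

# Sketch: retry policy from Section 6.3 (3 attempts, different node, 5 s / 15 s waits).
# run_on_node() and pick_healthy_node() stand in for the queue manager's internals.
import time

def execute_with_retry(task: dict, run_on_node, pick_healthy_node):
    waits = [0, 5, 15]               # seconds to wait before attempts 1, 2, 3
    excluded = set()                 # nodes that already failed this task
    for wait in waits:
        time.sleep(wait)
        node = pick_healthy_node(exclude=excluded)
        try:
            return run_on_node(node, task)
        except Exception:
            excluded.add(node)
    # All attempts failed: mark for manual review and alert operations.
    task["status"] = "failed"
    raise RuntimeError(f"Task {task['id']} failed after {len(waits)} attempts")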

7. Hardware Specifications

7.1 Recommended Hardware Configurations

| Component | Minimum (Small Facility) | Recommended (ScanLab) | Enterprise (Hospital Network) |
| --- | --- | --- | --- |
| Inference Nodes | 1× GPU server | 2× GPU servers + cloud burst | 4+ GPU servers in cluster |
| GPU | NVIDIA RTX 4080 (16 GB) | NVIDIA RTX 4090 (24 GB) | NVIDIA A100 (40/80 GB) |
| CPU | 16 cores / 32 threads | 32 cores / 64 threads | 64+ cores |
| RAM | 64 GB DDR5 | 128 GB DDR5 | 256+ GB DDR5 |
| Storage | 1 TB NVMe SSD | 2 TB NVMe SSD + NAS | 4+ TB NVMe + SAN |
| Network | 1 Gbps | 10 Gbps | 25+ Gbps |
| Gateway Server | 8 cores / 32 GB RAM | 16 cores / 64 GB RAM | 32 cores / 128 GB RAM |

7.2 Network Topology

flowchart TB
  subgraph Clinical["Clinical Network (VLAN 10)"]
    MOD1[CT Scanner]
    MOD2[MRI Scanner]
    MOD3[X-Ray]
    PACS[(PACS)]
    WS[Workstations]
  end
  subgraph DMZ["DMZ (VLAN 20)"]
    FW1[Firewall]
    LB[Load Balancer]
    GW[Integration Gateway]
  end
  subgraph AI["AI Processing Network (VLAN 30)"]
    INF1[Inference Node 1]
    INF2[Inference Node 2]
    STORE[(Model Storage)]
  end
  subgraph Mgmt["Management (VLAN 40)"]
    MON[Monitoring]
    LOG[(Audit Logs)]
    ADMIN[Admin Console]
  end
  MOD1 --> PACS
  MOD2 --> PACS
  MOD3 --> PACS
  PACS <--> FW1
  WS <--> FW1
  FW1 <--> LB
  LB <--> GW
  GW <--> INF1
  GW <--> INF2
  INF1 <--> STORE
  INF2 <--> STORE
  INF1 --> MON
  INF2 --> MON
  GW --> LOG
  MON --> ADMIN

7.3 Ukrainian Infrastructure Adaptations

🇺🇦 Power and Connectivity Resilience

UPS Capacity: Minimum 30 minutes full-load runtime for graceful shutdown
Generator Backup: Diesel generator with automatic transfer switch; 24 h fuel reserve
Dual ISP: Primary fiber + secondary 4G/5G failover for cloud connectivity
Offline Queue: Local queue buffer for up to 24 hours of studies during network outage
Local Model Cache: All production models cached locally; no cloud dependency for inference
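
The offline-queue behaviour can be sketched as a small store-and-forward loop: study references received during an outage are written to a local buffer and drained once connectivity returns. The buffer path, connectivity check, and forwarding call below are illustrative placeholders, not part of the specification.

# Sketch: store-and-forward buffer for connectivity outages (Section 7.3).
# BUFFER_DIR, is_cloud_reachable() and forward_study() are illustrative placeholders.
import json
from pathlib import Path

BUFFER_DIR = Path("/var/scanlab/offline-queue")
BUFFER_DIR.mkdir(parents=True, exist_ok=True)

def enqueue_offline(study_uid: str, metadata: dict) -> None:
    # Persist the pending study reference locally (sized for ~24 h of studies).
    (BUFFER_DIR / f"{study_uid}.json").write_text(json.dumps(metadata))

def drain_backlog(is_cloud_reachable, forward_study) -> None:
    # Called when connectivity is restored; oldest buffered studies go first.
    for entry in sorted(BUFFER_DIR.glob("*.json"), key=lambda p: p.stat().st_mtime):
        if not is_cloud_reachable():
            break
        forward_study(json.loads(entry.read_text()))
        entry.unlink()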

8. Integration Testing Protocol

8.1 Test Categories

| Test Category | Purpose | Pass Criteria |
| --- | --- | --- |
| Connectivity Tests | Verify DICOM association, FHIR endpoint reachability | 100% success rate over 1000 attempts |
| Functional Tests | End-to-end workflow: acquisition → inference → results | All 50 test cases pass |
| Performance Tests | Latency and throughput under load | Meet SLA targets at 150% expected load |
| Failover Tests | System behavior during component failures | Automatic recovery within 60 seconds |
| Security Tests | Authentication, encryption, audit logging | Zero critical/high security vulnerabilities |
| Data Integrity Tests | Verify no data loss or corruption | 100% data integrity across 10,000 studies |
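
As an example of the connectivity tests, a DICOM C-ECHO verification can be scripted with pynetdicom against each configured AE Title. The host, port, and AE titles below are placeholders, and import details may differ slightly between pynetdicom versions.

# Sketch: DICOM C-ECHO connectivity test (assumes pynetdicom; values are placeholders).
from pynetdicom import AE

ae = AE(ae_title="AI_TEST_SCU")
# Verification SOP Class UID
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate("pacs.scanlab.ua", 11112, ae_title="SCANLAB_PACS")
if assoc.is_established:
    status = assoc.send_c_echo()
    print("C-ECHO status:", status.Status if status else "no response")
    assoc.release()
else:
    print("Association rejected or peer unreachable")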

8.2 Test Execution Checklist

✅ Pre-Production Validation Checklist

  1. ☐ DICOM C-ECHO successful to all configured AE Titles
  2. ☐ DICOM C-STORE successful for all modalities (CT, MRI, XR, US)
  3. ☐ WADO-RS retrieval returns valid DICOM within latency target
  4. ☐ STOW-RS successfully stores AI results to PACS
  5. ☐ FHIR ImagingStudy resources correctly created
  6. ☐ FHIR Task workflow progresses through all states
  7. ☐ AI inference completes within SLA for each priority level
  8. ☐ DICOM SR validates against TID 1500 template
  9. ☐ Segmentation objects render correctly in PACS viewer
  10. ☐ OAuth 2.0 token refresh works correctly
  11. ☐ TLS 1.3 verified on all endpoints
  12. ☐ Audit logs capture all required events
  13. ☐ Failover to secondary node within 60 seconds
  14. ☐ Queue recovery after connectivity restoration
  15. ☐ 24-hour stability test without errors

9. Deployment Roadmap

gantt
  title ScanLab AI Integration Deployment Timeline
  dateFormat YYYY-MM-DD
  section Infrastructure
  Hardware procurement      :infra1, 2026-03-01, 4w
  Network configuration     :infra2, after infra1, 2w
  Server deployment         :infra3, after infra2, 2w
  section Software
  Gateway installation      :sw1, after infra3, 1w
  PACS integration config   :sw2, after sw1, 2w
  Inference node setup      :sw3, after sw1, 2w
  Model deployment          :sw4, after sw3, 1w
  section Testing
  Integration testing       :test1, after sw4, 3w
  Performance testing       :test2, after test1, 2w
  Security audit            :test3, after test1, 2w
  section Validation
  Clinical validation       :val1, after test2, 4w
  User acceptance testing   :val2, after val1, 2w
  section Go-Live
  Soft launch (limited)     :go1, after val2, 2w
  Full production           :go2, after go1, 1w

10. References

  1. Defined, N.C., et al. "Integrating and Adopting AI in the Radiology Workflow: A Primer for Standards and IHE Profiles." Radiology 311(2): e232653, 2024. DOI: 10.1148/radiol.232653
  2. Trivedi, H., et al. "A DICOM Framework for Machine Learning and Processing Pipelines Against Real-time Radiology Images." Journal of Digital Imaging 34(4): 1005-1013, 2021. DOI: 10.1007/s10278-021-00491-w
  3. DICOM Standards Committee. "PS3.18 Web Services." DICOM Standard, 2024c. https://www.dicomstandard.org/current/
  4. HL7 International. "FHIR R4 Specification - ImagingStudy Resource." https://www.hl7.org/fhir/R4/imagingstudy.html
  5. IHE Radiology Technical Committee. "AI Workflow for Imaging (AI-WI) Profile." IHE Radiology Technical Framework Supplement, Trial Implementation, 2024.
  6. IHE Radiology Technical Committee. "AI Results (AIR) Profile." IHE Radiology Technical Framework Supplement, Trial Implementation, 2025.
  7. NVIDIA Corporation. "Clara Deploy SDK Documentation." https://docs.nvidia.com/clara/
  8. Project MONAI. "MONAI Deploy App SDK Documentation." https://docs.monai.io/projects/monai-deploy-app-sdk/
  9. Pianykh, O.S. "DICOMweb: Background and Application of the Web Standard for Medical Imaging." Journal of Digital Imaging 31(3): 321-330, 2018. DOI: 10.1007/s10278-018-0075-8
  10. Law of Ukraine No. 2297-VI "On Personal Data Protection." Verkhovna Rada of Ukraine, 2010.
  11. Thrall, J.H., et al. "Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success." Journal of the American College of Radiology 15(3): 504-508, 2018. DOI: 10.1016/j.jacr.2017.12.026
  12. Google Cloud. "Cloud Healthcare API - DICOMweb." https://cloud.google.com/healthcare-api/docs/dicomweb
  13. Microsoft. "Azure Health Data Services - DICOM Service." https://learn.microsoft.com/azure/healthcare-apis/dicom/
  14. AWS. "Amazon HealthLake Imaging." https://aws.amazon.com/healthlake/imaging/
  15. Langlotz, C.P., et al. "A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging." Radiology 291(3): 781-791, 2019. DOI: 10.1148/radiol.2019190613
  16. RSNA. "Imaging AI in Practice." https://www.rsna.org/ai-imaging
  17. Order No. 1110 of the Ministry of Health of Ukraine "On Electronic Health Records." MHSU, 2023.
  18. Clunie, D.A. "DICOM Structured Reporting." PixelMed Publishing, 2nd Edition, 2022.
  19. Fedorov, A., et al. "DICOM for Quantitative Imaging Biomarker Development." PeerJ 4: e2057, 2016. DOI: 10.7717/peerj.2057
  20. Kohli, M., et al. "Implementing Machine Learning in Radiology Practice and Research." AJR American Journal of Roentgenology 208(4): 754-760, 2017. DOI: 10.2214/AJR.16.17224
  21. Harvey, H.B., et al. "How the FDA Regulates AI." Academic Radiology 27(1): 58-61, 2020. DOI: 10.1016/j.acra.2019.09.017
  22. European Commission. "Regulation (EU) 2017/745 on Medical Devices (MDR)." Official Journal of the European Union, 2017.
  23. IHE International. "IT Infrastructure Technical Framework." https://www.ihe.net/Technical_Frameworks/
  24. RFC 6749 "The OAuth 2.0 Authorization Framework." IETF, 2012.
  25. RFC 8446 "The Transport Layer Security (TLS) Protocol Version 1.3." IETF, 2018.

11. Appendix: Quick Reference Cards

A.1 DICOMweb Endpoint Summary

| Service | Method | Endpoint | Purpose |
| --- | --- | --- | --- |
| QIDO-RS | GET | /dicomweb/studies | Search studies |
| WADO-RS | GET | /dicomweb/studies/{uid} | Retrieve study |
| WADO-RS | GET | /dicomweb/studies/{uid}/metadata | Retrieve metadata only |
| STOW-RS | POST | /dicomweb/studies | Store instances |
| Delete | DELETE | /dicomweb/studies/{uid} | Delete study (if permitted) |

A.2 FHIR Resource Endpoints

| Resource | Endpoint | Operations |
| --- | --- | --- |
| ImagingStudy | /fhir/ImagingStudy | read, search, create |
| Task (AI-WI) | /fhir/Task | read, search, create, update |
| DiagnosticReport | /fhir/DiagnosticReport | read, search, create |
| Observation | /fhir/Observation | read, search, create |
| Patient | /fhir/Patient | read, search |

Document Version: 1.0 | Last Updated: February 2026
Classification: Technical Specification | Status: Approved for Implementation
© 2026 Stabilarity Hub | Odessa National Polytechnic University

