Jellyfin AI Upscaler v1.6.1.16

Architecture

How the plugin and the AI service are separated, what talks to what, and why it's designed this way.

Two processes, one contract

┌─────────────────────────────────────┐          ┌───────────────────────────────────┐
│  Jellyfin (dotnet / net9.0)         │          │  Docker AI Service (python/fastapi)│
│                                     │          │                                    │
│  ┌───────────────────────────────┐  │   HTTP   │  ┌──────────────────────────────┐ │
│  │ JellyfinUpscalerPlugin.dll    │──┼─────────→┼──│ /Upscaler/ controller-like   │ │
│  │   ├─ VideoProcessor           │  │  :5000   │  │ /models, /hardware, /metrics │ │
│  │   ├─ VideoAnalyzer (ffprobe)  │  │          │  │ /openapi.json, /health       │ │
│  │   ├─ VideoFrameProcessor      │  │          │  └──────────────────────────────┘ │
│  │   ├─ ProcessingMethodExecutor │  │          │  ┌──────────────────────────────┐ │
│  │   ├─ HttpUpscalerService ─────┼──┘          │  │ ONNX Runtime (CUDA/OV/ROCm)  │ │
│  │   └─ 3 ScheduledTasks         │             │  │ + TensorRT FP16 path         │ │
│  └───────────────────────────────┘             │  └──────────────────────────────┘ │
│  ┌───────────────────────────────┐             │  ┌──────────────────────────────┐ │
│  │ Jellyfin MediaEncoder         │             │  │ /app/models/*.onnx volume    │ │
│  │  ffmpeg / ffprobe binaries    │             │  │ (pulled on demand)           │ │
│  └───────────────────────────────┘             │  └──────────────────────────────┘ │
└─────────────────────────────────────┘          └───────────────────────────────────┘

Why two processes?

Jellyfin plugins run in-process as .NET assemblies, while the GPU inference stacks (CUDA, OpenVINO, ROCm, TensorRT) ship most easily as a self-contained container. Splitting them keeps the plugin a thin HTTP client: a crashed or upgrading AI service never takes Jellyfin down, Python and driver dependencies stay off the Jellyfin host, and the service can run on a different machine entirely. The HTTP contract on port 5000 is the only coupling point.

Jellyfin-side components

VideoProcessor (core orchestrator)

The plugin singleton. Holds references to MediaEncoder (for ffmpeg/ffprobe paths), HttpUpscalerService (HTTP client), and three sub-services described below. Processes jobs via ProcessVideoAsync.
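
As a rough Python analogue (the real orchestrator is C#; the method names and job flow below are illustrative, not the plugin's actual API), one job moves through the sub-services like this:

```python
import asyncio

class VideoProcessor:
    """Illustrative orchestrator holding the three helpers and the HTTP client."""

    def __init__(self, analyzer, frame_processor, method_executor, http_service):
        self.analyzer = analyzer                # VideoAnalyzer (ffprobe)
        self.frame_processor = frame_processor  # VideoFrameProcessor
        self.method_executor = method_executor  # ProcessingMethodExecutor
        self.http_service = http_service        # HttpUpscalerService

    async def process_video(self, path: str) -> str:
        info = await self.analyzer.analyze(path)           # resolution, fps, codec, ...
        method = self.method_executor.pick(info)           # choose a pipeline
        frames = await self.frame_processor.extract(path)  # PNGs in a temp dir
        upscaled = [await self.http_service.upscale_frame(f, method) for f in frames]
        return await self.frame_processor.reassemble(path, upscaled)
```

Each dependency is injected once at construction, which is why the cold-start path bug described below could leave all helpers holding stale state.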

VideoAnalyzer · VideoFrameProcessor · ProcessingMethodExecutor

Three specialised helpers that cache ffmpeg/ffprobe paths. All three shared a cold-start bug (fixed in v1.6.1.16): the captured paths were empty if Jellyfin constructed the plugin before MediaEncoder.EncoderPath/ProbePath had finished resolving. The fix: a new EnsureFFmpegReady() in VideoProcessor late-resolves the paths and propagates them via new Update*Path() methods.
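
The fix amounts to a guard that every entry point calls first. A standalone Python sketch of the same logic (the real code is C# inside VideoProcessor; class and attribute names here are illustrative):

```python
class FfmpegPaths:
    """Standalone sketch of EnsureFFmpegReady() (the real code is C#)."""

    def __init__(self, media_encoder, helpers):
        self.media_encoder = media_encoder  # paths may not be resolved yet
        self.helpers = helpers              # the three path-caching helpers
        self.ffmpeg_path = ""
        self.ffprobe_path = ""

    def ensure_ffmpeg_ready(self) -> bool:
        """Late-resolve ffmpeg/ffprobe and push fresh paths to every helper."""
        if self.ffmpeg_path and self.ffprobe_path:
            return True
        self.ffmpeg_path = self.media_encoder.encoder_path or ""
        self.ffprobe_path = self.media_encoder.probe_path or ""
        if not (self.ffmpeg_path and self.ffprobe_path):
            return False  # caller logs a warning and skips the job
        for helper in self.helpers:
            helper.update_ffmpeg_path(self.ffmpeg_path)
            helper.update_ffprobe_path(self.ffprobe_path)
        return True
```

The key property: a failed resolution leaves the cached paths empty, so the next entry point retries instead of caching the bad state forever.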

HttpUpscalerService

Thin wrapper around IHttpClientFactory. Attaches an X-Api-Token header when configured, and handles retries, transparent decompression, and per-call timeouts.

ScheduledTasks

Three tasks registered with Jellyfin's task scheduler; the nightly LibraryUpscaleScanTask is traced step by step under "Data flow: nightly library scan" below.

Service-side components

FastAPI application (/app/main.py)

Routes: /health, /status, /metrics, /features, /hardware, /gpus, /models, /models/{id}, /models/{id}/download, /models/{id}/load, /models/{id}/unload, /upscale-frame, /benchmark-frame, /face-restore/*, /logs-stream (SSE).

ONNX Runtime wrapper

A single Python process wrapping onnxruntime-{gpu,openvino,rocm,cpu}. Hardware detection at startup picks the execution provider. Each loaded model holds a session plus preallocated input/output tensors.
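
Provider selection can be sketched as a preference walk over whatever onnxruntime reports as installed. The exact preference order below is an assumption, though the provider identifiers are ONNX Runtime's real names:

```python
def pick_provider(available: list[str]) -> str:
    """Walk a preference list over the providers onnxruntime reports as usable."""
    preference = [
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "ROCMExecutionProvider",
        "OpenVINOExecutionProvider",
        "CPUExecutionProvider",
    ]
    for provider in preference:
        if provider in available:
            return provider
    return "CPUExecutionProvider"  # always-available fallback
```

In the real service, `available` would come from `onnxruntime.get_available_providers()`.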

TensorRT path (CUDA only)

On first load of a given (model, resolution) combo on an Ampere+ GPU, the service emits a .plan file for TensorRT FP16. Subsequent sessions pick this up for ~2× speedup.
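
A minimal sketch of how such a per-(model, resolution) cache key might be derived; the actual naming scheme is not documented here, so everything below is illustrative:

```python
import hashlib
from pathlib import Path

def plan_path(cache_dir: str, model_id: str, width: int, height: int) -> Path:
    """Deterministic .plan location for one (model, resolution) combination."""
    key = hashlib.sha256(f"{model_id}:{width}x{height}".encode()).hexdigest()[:16]
    return Path(cache_dir) / f"{model_id}_{key}.plan"
```

The point of keying on resolution is that TensorRT engines are shape-specialised: a plan built for 1080p input cannot serve a 4K frame.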

Model catalog

A declarative manifest at /app/models/catalog.json lists each model's download URL, SHA256, scale factor, and provider compatibility; it is hot-reloadable at runtime.
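
A hypothetical catalog entry and an integrity check might look like this (field names, model id, and URL are illustrative, not the actual manifest schema):

```python
import hashlib
import json

# Hypothetical catalog.json entry; fields follow the description above.
CATALOG = json.loads("""
{
  "models": [
    {
      "id": "realesrgan-x4",
      "url": "https://example.com/models/realesrgan-x4.onnx",
      "sha256": "<hex digest>",
      "scale": 4,
      "providers": ["CUDAExecutionProvider", "CPUExecutionProvider"]
    }
  ]
}
""")

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Reject a downloaded model whose SHA256 doesn't match the catalog entry."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

The checksum gate is what lets "pulled on demand" be safe: a truncated or tampered download is refused before it can be loaded into a session.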

Data flow: nightly library scan

  1. Jellyfin scheduler fires LibraryUpscaleScanTask.ExecuteAsync.
  2. Plugin calls HttpUpscalerService.GetHealthAsync — aborts if service is offline.
  3. Plugin enumerates Jellyfin virtual folders, filters to those in EnabledLibraryIds (or all if empty).
  4. For each item: VideoAnalyzer.AnalyzeVideoAsync → resolution, fps, codec, HDR, interlaced.
  5. If below threshold: VideoProcessor.ProcessVideoAsync → ProcessingMethodExecutor picks the pipeline → VideoFrameProcessor.ExtractFramesAsync dumps PNGs to a temp dir.
  6. Plugin POSTs each frame to /upscale-frame → receives upscaled PNG → caches to disk.
  7. VideoFrameProcessor.ReassembleAsync pipes frames + original audio back through ffmpeg with the selected codec.
  8. Output lands alongside the source as <basename>_upscaled.mp4. Jellyfin library watcher picks it up naturally.
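
Steps 3 and 5 boil down to two filters, sketched here with assumed field names and an assumed 2160p threshold (the actual configuration key is not shown in this section):

```python
def items_to_upscale(items, enabled_library_ids, threshold_height=2160):
    """Step 3: filter to enabled libraries (all if none configured);
    step 5: keep only videos whose height is below the upscale threshold."""
    selected = [i for i in items
                if not enabled_library_ids or i["library_id"] in enabled_library_ids]
    return [i for i in selected if i["height"] < threshold_height]
```

Everything that survives this filter goes through the frame-extract/upscale/reassemble loop; everything else is skipped without touching ffmpeg or the AI service.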

Data flow: live quick-menu

  1. User opens a video; Jellyfin web client loads player-integration.js from the plugin.
  2. Script injects the Upscaler button into the player toolbar.
  3. On click, the tabbed overlay renders — fetches /Upscaler/models and /Upscaler/filter-presets in parallel.
  4. Filters tab sliders write directly to <video>.style.filter — 60fps, no round-trip.
  5. Models tab calls POST /Upscaler/models/{id}/load which forwards to the AI service's /models/{id}/load. Service warms the model in GPU memory.
  6. Subsequent frames rendered by Jellyfin go through the loaded model when playback seeks (requires Real-Time AI processing mode).
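
Step 4's zero-round-trip behaviour comes down to composing a CSS filter value from the slider positions. The real code is JavaScript writing <video>.style.filter; the composition can be sketched in Python (slider names are illustrative, the CSS filter functions are standard):

```python
def css_filter(brightness: float = 1.0, contrast: float = 1.0,
               saturation: float = 1.0) -> str:
    """Compose the string the Filters tab writes to <video>.style.filter."""
    return f"brightness({brightness}) contrast({contrast}) saturate({saturation})"
```

Because the browser's compositor applies this per frame, the sliders work at full playback frame rate with no call to the plugin or the AI service.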

Failure modes & recovery

Failure → plugin behaviour:

  Service offline: Test Connection shows red; the scheduled task logs the outage and skips cleanly; the quick-menu shows a "Standalone" badge.
  Model download fails: the model row flips to an error state with the HTTP status message; a Retry button re-issues the download.
  GPU OOM during load: the service returns 507 Insufficient Storage; the plugin surfaces the service-provided hint verbatim.
  ffmpeg/ffprobe paths unresolved: v1.6.1.16's EnsureFFmpegReady() re-queries MediaEncoder on every entry point and logs a warning if the paths are still unresolved instead of failing silently.
  Plugin DLL missing a transitive NuGet dependency: Jellyfin tombstones the plugin. Recovery: add the missing DLL, then call POST /Plugins/{id}/{version}/Enable.