A production-grade plugin that pairs Jellyfin with a containerised AI service: 30+ upscalers (Real-ESRGAN, SwinIR, EDVR, AnimeSR, RIFE), camera-style filters, GFPGAN/CodeFormer face restoration, and a live operator console. Runs on NVIDIA CUDA, Intel OpenVINO, AMD ROCm, or CPU.
The AI runtime runs in its own container, isolated from Jellyfin's .NET process. No native DLL conflicts, no GPU driver coupling, update either side independently.
Not a wrapper around one model — 30+ ONNX upscalers across Real-ESRGAN / SwinIR / HAT / EDVR / AnimeSR / RIFE families, with live status + hot-reloadable loader.
Tabbed quick-menu over the Jellyfin video player: Models, Filters (15 presets + 3 live CSS sliders), Realtime. No re-transcode for look tweaks.
Auto-benchmark on first run, pick the best provider (CUDA → DirectML → OpenVINO → ROCm → CPU), log per-model throughput, route jobs accordingly.
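The fallback order above can be sketched as a simple preference scan. This is a minimal illustration only, assuming the provider names as listed; the plugin's actual benchmark and routing logic is more involved:

```python
# Preference order from the docs: CUDA -> DirectML -> OpenVINO -> ROCm -> CPU.
PREFERENCE = ["CUDA", "DirectML", "OpenVINO", "ROCm", "CPU"]

def pick_provider(available):
    """Return the first preferred execution provider that is available.

    `available` is the set of providers the benchmark found working;
    CPU is assumed as a last resort if nothing else is present.
    """
    for provider in PREFERENCE:
        if provider in available:
            return provider
    return "CPU"
```

In the real plugin the choice is also informed by the logged per-model throughput, not just availability.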
Both the plugin and the AI service expose OpenAPI — browse endpoints live, call them from the console, or wire them into your own automation.
No SaaS calls, no telemetry. Optional shared-secret API token between plugin and AI service. Runs fully offline once models are downloaded.
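Calling the service's endpoints with the optional shared-secret token can be sketched as below. The header scheme (`Authorization: Bearer …`) and the `/health` path are assumptions for illustration; check the service's OpenAPI spec for the exact names:

```python
import urllib.request

def build_request(base_url, path, token=None):
    """Build a request to the AI service, attaching the shared-secret
    token as a bearer header when one is configured.

    Only constructs the request; nothing is sent, so this works offline.
    """
    req = urllib.request.Request(base_url.rstrip("/") + path)
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

# Hypothetical endpoint and token values:
req = build_request("http://localhost:5000", "/health", token="s3cret")
```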
Fixes issue #64, where the nightly Scan & Upscale Library task failed on every item: three plugin singletons (VideoAnalyzer, VideoFrameProcessor, ProcessingMethodExecutor) captured empty ffmpeg/ffprobe paths before Jellyfin's MediaEncoder had finished resolving them. A new EnsureFFmpegReady() helper late-resolves the paths and propagates them on demand.
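The plugin itself is C#, but the late-resolution pattern behind EnsureFFmpegReady() can be illustrated in a few lines of Python. All class and attribute names here are invented stand-ins, not the plugin's actual types:

```python
class PathResolver:
    """Stand-in for Jellyfin's MediaEncoder: the real ffmpeg/ffprobe
    paths only become known some time after startup."""
    def __init__(self):
        self.ffmpeg_path = ""  # empty until resolution completes

class FrameProcessor:
    """Sketch of the fix: instead of capturing the path once at
    construction time (the bug), re-read it on demand."""
    def __init__(self, resolver):
        self._resolver = resolver
        self._ffmpeg = ""

    def ensure_ffmpeg_ready(self):
        # Late-resolve: refresh from the resolver if we still hold
        # an empty path from early startup.
        if not self._ffmpeg:
            self._ffmpeg = self._resolver.ffmpeg_path
        return bool(self._ffmpeg)

resolver = PathResolver()
proc = FrameProcessor(resolver)
early = proc.ensure_ffmpeg_ready()   # path not yet resolved
resolver.ffmpeg_path = "/usr/bin/ffmpeg"
late = proc.ensure_ffmpeg_ready()    # resolved on demand
```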
Three steps: add the plugin, run the AI service, point the plugin at it.
# 1. Jellyfin: add repository URL under Dashboard → Plugins → Repositories
https://raw.githubusercontent.com/Kuschel-code/JellyfinUpscalerPlugin/main/repository-jellyfin.json
# 2. Host: run the Docker AI service (CUDA build shown)
docker run -d --name jellyfin-ai-upscaler \
--gpus all -p 5000:5000 \
-v upscaler-models:/app/models \
kuscheltier/jellyfin-ai-upscaler:latest-cuda
# 3. Plugin settings → AI Service URL: http://<host>:5000 → Test Connection
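Step 3's Test Connection can be approximated from the host with a small probe. This is a hedged sketch, not the plugin's own check; the URL and timeout are illustrative:

```python
import urllib.request, urllib.error

def service_reachable(url, timeout=3.0):
    """Return True if the AI service answers HTTP at `url`.

    Any HTTP response (even an error status) proves something is
    listening; only connection-level failures count as unreachable.
    """
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded, just not with 200
    except (urllib.error.URLError, OSError):
        return False

reachable = service_reachable("http://localhost:5000")
```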