Jellyfin AI Upscaler v1.6.1.16
Jellyfin 10.11.x · Opus 4.7 co-engineered

Upscale your Jellyfin library with real AI models — not just bicubic.

A production-grade plugin that pairs Jellyfin with a containerised AI service: 30+ upscalers (Real-ESRGAN, SwinIR, EDVR, AnimeSR, RIFE), camera-style filters, GFPGAN/CodeFormer face restoration, and a live operator console. Runs on NVIDIA CUDA, Intel OpenVINO, AMD ROCm, or CPU.

30+ AI upscale models · 4 hardware backends · 15 camera-style filters · 6 UI languages

What makes it different

Docker microservice architecture

The AI runtime runs in its own container, isolated from Jellyfin's .NET process. No native DLL conflicts, no GPU-driver coupling, and either side can be updated independently.

Real model catalog

Not a wrapper around one model — 30+ ONNX upscalers across Real-ESRGAN / SwinIR / HAT / EDVR / AnimeSR / RIFE families, with live status + hot-reloadable loader.

Live in-player controls

Tabbed quick-menu over the Jellyfin video player: Models, Filters (15 presets + 3 live CSS sliders), Realtime. No re-transcode for look tweaks.

Hardware-aware routing

Auto-benchmark on first run, pick the best provider (CUDA → DirectML → OpenVINO → ROCm → CPU), log per-model throughput, route jobs accordingly.
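The fallback chain above can be sketched as a small selection function. This is a minimal illustration of the routing idea, not the plugin's actual code; the provider strings are standard ONNX Runtime execution-provider names.

```python
# Illustrative sketch of hardware-aware provider routing.
# Preference order follows the fallback chain described above.
PREFERRED = [
    "CUDAExecutionProvider",      # NVIDIA CUDA
    "DmlExecutionProvider",       # DirectML
    "OpenVINOExecutionProvider",  # Intel OpenVINO
    "ROCMExecutionProvider",      # AMD ROCm
    "CPUExecutionProvider",       # CPU fallback
]

def pick_provider(available):
    """Return the first preferred provider that the host actually offers."""
    for provider in PREFERRED:
        if provider in available:
            return provider
    return "CPUExecutionProvider"  # CPU is always a safe last resort
```

With onnxruntime installed, `available` would typically come from `onnxruntime.get_available_providers()`.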

Inspectable REST API

Both the plugin and the AI service expose OpenAPI — browse endpoints live, call them from the console, or wire them into your own automation.

Self-hosted & private

No SaaS calls, no telemetry. Optional shared-secret API token between plugin and AI service. Runs fully offline once models are downloaded.
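Putting the last two points together, calling the service's API with the optional shared-secret token might look like the sketch below. The `/health` path and the `X-API-Token` header name are assumptions for illustration only; check the service's OpenAPI page for the real endpoint names and auth scheme.

```python
import urllib.request

def service_request(base_url, path, token=None):
    """Build a request to the AI service (illustrative sketch)."""
    req = urllib.request.Request(base_url.rstrip("/") + path)
    if token:
        # Hypothetical header name; the actual token scheme is whatever
        # you configure between the plugin and the service.
        req.add_header("X-API-Token", token)
    return req

# Example (hypothetical endpoint):
# req = service_request("http://localhost:5000", "/health", token="s3cret")
# urllib.request.urlopen(req) would then perform the call.
```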

Latest release

v1.6.1.16 · Hotfix: FFmpeg / FFprobe late-resolution

Fixes issue #64, where the nightly Scan & Upscale Library task failed on every item because three plugin singletons (VideoAnalyzer, VideoFrameProcessor, ProcessingMethodExecutor) captured empty ffmpeg / ffprobe paths before Jellyfin's MediaEncoder had finished resolving them. A new EnsureFFmpegReady() helper late-resolves and propagates the paths on demand.
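The plugin itself is C#, but the late-resolution pattern behind the fix is easy to sketch in Python: instead of trusting a tool path captured at construction time (when it may still be empty), resolve it lazily on first use. Class and method names here are illustrative, not the plugin's actual identifiers.

```python
class FrameProcessor:
    """Sketch of the late-resolution pattern: don't trust startup snapshots."""

    def __init__(self, resolver):
        # resolver() may return "" early in startup, before the host's
        # media encoder has finished resolving ffmpeg/ffprobe.
        self._resolver = resolver
        self._ffmpeg_path = ""  # deliberately NOT captured eagerly

    def ensure_ffmpeg_ready(self):
        # Late-resolve on demand instead of failing on the empty snapshot.
        if not self._ffmpeg_path:
            self._ffmpeg_path = self._resolver() or ""
        if not self._ffmpeg_path:
            raise RuntimeError("ffmpeg path not resolved yet")
        return self._ffmpeg_path
```

If the resolver returns "" on the first call (host still starting up), the processor raises instead of caching a bad path, and a later call picks up the real path once the host has resolved it.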

Full changelog → Release on GitHub →

Quick start

Three steps: add the plugin repository, run the AI service, point the plugin at it.

# 1. Jellyfin: add repository URL under Dashboard → Plugins → Repositories
https://raw.githubusercontent.com/Kuschel-code/JellyfinUpscalerPlugin/main/repository-jellyfin.json

# 2. Host: run the Docker AI service (CUDA build shown)
docker run -d --name jellyfin-ai-upscaler \
  --gpus all -p 5000:5000 \
  -v upscaler-models:/app/models \
  kuscheltier/jellyfin-ai-upscaler:latest-cuda

# 3. Plugin settings → AI Service URL: http://<host>:5000 → Test Connection

Full install guide →