Jellyfin AI Upscaler v1.6.1.16

Installation

Two components: the Jellyfin plugin (DLL loaded by the server) and the Docker AI service (containerised ONNX runtime). Both are required.

1 · Requirements

| Component       | Minimum                        | Recommended                                |
|-----------------|--------------------------------|--------------------------------------------|
| Jellyfin server | 10.11.0                        | 10.11.8+ (ABI 10.11.8.0)                   |
| .NET runtime    | net9.0 (shipped with Jellyfin) | –                                          |
| Host OS         | Any Jellyfin-supported         | Linux Docker host                          |
| GPU (optional)  | –                              | NVIDIA ≥ 6GB VRAM / Intel Arc / AMD RDNA2+ |
| Docker          | 20.10+                         | 24.x with NVIDIA Container Toolkit         |
| RAM             | 4 GB free                      | 8 GB+ for heavy models                     |
| Disk            | 5 GB                           | 20 GB (full model catalog)                 |
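A quick pre-flight on the intended Docker host can save a round trip later. A minimal sketch (tool names are generic; `nvidia-smi` only matters if you plan to use the CUDA image, and the Docker data path varies by setup):

```shell
# Pre-flight sketch: check for the tools the later steps assume.
for tool in docker nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done

# Free disk where Docker stores images/volumes (path varies by setup):
df -h /var/lib/docker 2>/dev/null || df -h /
```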

2 · Install the plugin (repository)

Jellyfin's plugin manager can pull from any HTTPS JSON manifest. This is the recommended method because updates install automatically.

  1. Open Jellyfin Dashboard → Plugins → Repositories → +.
  2. Add a repository with:
    • Name: AI Upscaler
    • URL:
      https://raw.githubusercontent.com/Kuschel-code/JellyfinUpscalerPlugin/main/repository-jellyfin.json
  3. Click Save, then go to Plugins → Catalog. The AI Upscaler Plugin entry appears under Video Enhancement.
  4. Click Install, then restart Jellyfin when prompted.
  5. Confirm under Plugins → My Plugins the version reads 1.6.1.16.
Note: the plugin uses MD5 checksums (required by Jellyfin's legacy manifest spec) — don't be alarmed that SHA isn't used for the manifest entry.

3 · Install the plugin (manual ZIP)

Use this if your Jellyfin host can't reach GitHub, or you want to pin a specific version.

  1. Download JellyfinUpscalerPlugin-v1.6.1.16.zip from GitHub Releases.
  2. Extract into Jellyfin's plugin folder: <config>/plugins/AI Upscaler Plugin_1.6.1.4/. The folder name is historical — don't rename it.
  3. chown the extracted files to Jellyfin's UID/GID (typically 568:568 on TrueNAS SCALE, 105:108 on LinuxServer.io images).
  4. Restart Jellyfin. The plugin will load the DLL + shipped transitive NuGet dependencies (CliWrap.dll, FFMpegCore.dll, Instances.dll, SixLabors.ImageSharp.dll).
All DLLs must be present. Jellyfin's plugin loader does not transitively resolve NuGets — if one of the four helper DLLs is missing, the plugin silently tombstones with NotSupported. Recovery: call /Plugins/{id}/{version}/Enable after adding the missing DLL.
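Before restarting, you can confirm all four helper DLLs landed next to the plugin DLL. A small sketch (the default path matches the manual-install step above; adjust <config> for your install, or pass your own path as the first argument):

```shell
# Check that the plugin's four helper dependencies are present.
PLUGIN_DIR="${1:-/config/plugins/AI Upscaler Plugin_1.6.1.4}"
for dll in CliWrap.dll FFMpegCore.dll Instances.dll SixLabors.ImageSharp.dll; do
  if [ -f "$PLUGIN_DIR/$dll" ]; then
    echo "ok      $dll"
  else
    echo "MISSING $dll"
  fi
done
```

If a DLL was missing and you've now restored it, re-enable the plugin via the /Plugins/{id}/{version}/Enable call mentioned above rather than reinstalling.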

4 · Install the Docker AI service

Four image variants are published on Docker Hub — pick the one that matches your GPU.

# NVIDIA CUDA (Turing+, 6GB+ VRAM recommended)
docker run -d --name jellyfin-ai-upscaler \
  --gpus all -p 5000:5000 \
  -v upscaler-models:/app/models \
  -e API_TOKEN=change-me-please \
  kuscheltier/jellyfin-ai-upscaler:latest-cuda

# Intel OpenVINO (Arc, iGPU, N100)
docker run -d --name jellyfin-ai-upscaler \
  --device /dev/dri -p 5000:5000 \
  -v upscaler-models:/app/models \
  -e API_TOKEN=change-me-please \
  kuscheltier/jellyfin-ai-upscaler:latest-openvino

# AMD ROCm (RDNA2+)
docker run -d --name jellyfin-ai-upscaler \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video -p 5000:5000 \
  -v upscaler-models:/app/models \
  -e API_TOKEN=change-me-please \
  kuscheltier/jellyfin-ai-upscaler:latest-rocm

# CPU-only fallback (works everywhere, slow)
docker run -d --name jellyfin-ai-upscaler \
  -p 5000:5000 \
  -v upscaler-models:/app/models \
  -e API_TOKEN=change-me-please \
  kuscheltier/jellyfin-ai-upscaler:latest-cpu
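If you prefer Compose, the NVIDIA variant above translates to roughly the following (a sketch: the service name is arbitrary, the token is a placeholder, and the GPU reservation uses the Compose spec's device-reservation syntax, which requires the NVIDIA Container Toolkit):

```yaml
services:
  jellyfin-ai-upscaler:
    image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
    ports:
      - "5000:5000"
    volumes:
      - upscaler-models:/app/models
    environment:
      - API_TOKEN=change-me-please
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  upscaler-models:
```

For the OpenVINO or ROCm variants, swap the image tag and replace the `deploy` block with the corresponding `devices:` / `group_add:` entries from the docker run examples above.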

5 · Wire plugin → AI service

  1. Open the plugin config: Dashboard → Plugins → AI Upscaler Plugin → Settings.
  2. Under AI Service:
    • URL: http://<docker-host>:5000
    • API Token: the API_TOKEN value from the docker run above (leave blank if you omitted it)
  3. Click Test Connection. You should see a green Online badge and the real auth posture (token-configured vs auth-disabled).
  4. Save. The plugin will immediately call /health, /models, /hardware, and /features to populate the dashboard.
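You can also probe the same endpoints by hand to see what the plugin will receive. A sketch only: the host, port, and token are placeholders from the docker run above, and the Bearer authorization scheme is an assumption; mirror whatever Test Connection reports for your deployment.

```shell
# Manual probe of the endpoints the plugin polls after saving.
# AI_URL/TOKEN are placeholders; the Authorization scheme is an assumption.
AI_URL="http://localhost:5000"
TOKEN="change-me-please"
for ep in health models hardware features; do
  echo "GET $AI_URL/$ep"
  # curl -s -H "Authorization: Bearer $TOKEN" "$AI_URL/$ep"
done
```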

6 · Download models

Models are not shipped in the Docker image — they're pulled on demand to keep the base image small.

  1. Open the plugin config → AI Models tab.
  2. Pick a starter set (the defaults suit most libraries): realesrgan-x4plus (general), animesr-v2-x4 (anime), fsrcnn-x2 (fast, low-VRAM).
  3. Click Download — the service pulls ONNX weights to /app/models/ (persisted on your mount volume).
  4. Once downloaded, click Load to warm the model in GPU memory.

7 · Verify

  1. Open any video in Jellyfin, click the AI Upscaler button that appears next to the subtitle selector.
  2. You should see the tabbed quick-menu (Models / Filters / Realtime) with model icons reflecting their state (downloaded, loaded, busy).
  3. Pick a model → Apply → watch the LIVE indicator pulse green while the filter is active.
Done. Head to Configuration for the complete settings reference, or Troubleshooting if something didn't stick.