Jellyfin AI Upscaler v1.6.1.16

Deployment scenarios

Copy-pasteable recipes for the platforms this plugin is most often deployed on.

TrueNAS SCALE (25.x)

Run Jellyfin as a catalog app; the AI service goes in a second, custom app.

  1. Jellyfin plugin: upload the JellyfinUpscalerPlugin-v1.6.1.16.zip contents to /mnt/<pool>/Jellyfin/Config/plugins/AI Upscaler Plugin_1.6.1.4/ via Datasets → Edit ACL or the Shell, then chown -R 568:568 the folder (see the shell sketch after this list).
  2. AI service app: Apps → Discover Apps → Custom App. Minimal compose:
services:
  ai-upscaler:
    image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
    runtime: nvidia
    environment:
      - API_TOKEN=your-secret-here
      - NVIDIA_VISIBLE_DEVICES=all
    ports:
      - 5000:5000
    volumes:
      - /mnt/<pool>/AI/models:/app/models
    restart: unless-stopped
  3. Jellyfin plugin config: AI Service URL = http://<truenas-ip>:5000; the token must match API_TOKEN.
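A shell sketch for step 1, assuming a pool named tank (substitute your own) and the release zip in the current directory:

mkdir -p "/mnt/tank/Jellyfin/Config/plugins/AI Upscaler Plugin_1.6.1.4"
unzip JellyfinUpscalerPlugin-v1.6.1.16.zip \
  -d "/mnt/tank/Jellyfin/Config/plugins/AI Upscaler Plugin_1.6.1.4/"
chown -R 568:568 "/mnt/tank/Jellyfin/Config/plugins"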
TrueNAS 25.x gotcha: when editing a catalog app (Jellyfin) through the API, the values payload is not merged key-by-key: send only one sub-block and TrueNAS replaces the entire block, dropping everything else in it. Always read the current config first (the app.config call), merge your deltas locally, then write the full values block back (app.update).
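A sketch of that read-merge-write cycle from the TrueNAS shell, assuming midclt is available and that app.config / app.update are the relevant middleware calls:

# dump the complete current values block
midclt call app.config jellyfin > values.json
# edit values.json locally, changing only the keys you need, then
# write the WHOLE block back rather than a partial delta
midclt call app.update jellyfin "{\"values\": $(cat values.json)}"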

Unraid (6.12+)

Install Jellyfin via Community Applications, then the AI service as a separate container.

  1. Apps → Search "AI Upscaler" — or add a manual container template.
  2. Template fields (an equivalent docker run is sketched after this list):
    • Repository: kuscheltier/jellyfin-ai-upscaler:latest-cuda
    • Network: bridge
    • Port: 5000:5000
    • Path: /mnt/user/appdata/ai-upscaler/models → /app/models
    • Variable: API_TOKEN = your secret
    • Extra params: --gpus all (requires the nvidia-driver plugin)
  3. Install the plugin DLL in /mnt/user/appdata/jellyfin/data/plugins/AI Upscaler Plugin_1.6.1.4/ (the LinuxServer.io container keeps plugins under /config/data/plugins).
  4. chown -R 99:100 the plugin folder (Unraid's standard nobody:users UID and GID).
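Those template fields translate to roughly this docker run, useful for sanity-checking the template:

docker run -d --name ai-upscaler \
  --gpus all \
  -p 5000:5000 \
  -v /mnt/user/appdata/ai-upscaler/models:/app/models \
  -e API_TOKEN=your-secret \
  kuscheltier/jellyfin-ai-upscaler:latest-cuda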

Synology DSM 7.x (CPU-only path)

Use Container Manager (formerly Docker), or run the command below over SSH. Either way, use the CPU image; there is no GPU passthrough on consumer NAS units.

docker run -d --name ai-upscaler \
  --restart unless-stopped \
  -p 5000:5000 \
  -v /volume1/docker/ai-upscaler/models:/app/models \
  -e API_TOKEN=your-secret \
  kuscheltier/jellyfin-ai-upscaler:latest-cpu

Bare Docker + Docker Compose

Full stack on a Linux box. Both Jellyfin and the AI service in one compose file.

version: "3.9"

services:
  jellyfin:
    image: jellyfin/jellyfin:10.11.8
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./jellyfin-config:/config
      - ./jellyfin-cache:/cache
      - /path/to/media:/media:ro
    environment:
      - JELLYFIN_PublishedServerUrl=http://192.168.1.10:8096

  ai-upscaler:
    image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
    restart: unless-stopped
    runtime: nvidia
    ports:
      - 5000:5000
    environment:
      - API_TOKEN=${API_TOKEN}
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - ./ai-models:/app/models

After docker compose up -d, copy the plugin DLL into ./jellyfin-config/plugins/AI Upscaler Plugin_1.6.1.4/ and restart the Jellyfin container.
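For example, using the release zip from the TrueNAS section above:

unzip JellyfinUpscalerPlugin-v1.6.1.16.zip \
  -d "./jellyfin-config/plugins/AI Upscaler Plugin_1.6.1.4/"
docker compose restart jellyfin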

Kubernetes

There's no official Helm chart yet (see Roadmap), but the service image runs fine as a standard Deployment once the NVIDIA device plugin is installed (it exposes the nvidia.com/gpu resource the manifest requests). Minimal manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-upscaler
spec:
  replicas: 1
  selector:
    matchLabels: { app: ai-upscaler }
  template:
    metadata:
      labels: { app: ai-upscaler }
    spec:
      containers:
      - name: service
        image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
        ports: [{ containerPort: 5000 }]
        env:
        - name: API_TOKEN
          valueFrom: { secretKeyRef: { name: ai-upscaler, key: token } }
        resources:
          limits: { nvidia.com/gpu: 1 }
        volumeMounts:
        - { name: models, mountPath: /app/models }
      volumes:
      - name: models
        persistentVolumeClaim: { claimName: ai-upscaler-models }
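The Secret and PVC the manifest references must already exist. The Secret is a one-liner; a small Service then gives the Jellyfin plugin a stable in-cluster address (the Service name and URL here are illustrative choices, not fixed by the plugin):

# create the secret the manifest references (key name matches the secretKeyRef)
kubectl create secret generic ai-upscaler --from-literal=token=your-secret-here

apiVersion: v1
kind: Service
metadata:
  name: ai-upscaler
spec:
  selector: { app: ai-upscaler }
  ports: [{ port: 5000, targetPort: 5000 }]

With Jellyfin running in the same cluster, the plugin's AI Service URL becomes http://ai-upscaler.<namespace>.svc.cluster.local:5000.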

Windows host

If Jellyfin itself runs on Windows and you have an NVIDIA GPU locally, the AI service can run in WSL 2 with the CUDA-enabled runtime, or via Docker Desktop with GPU passthrough enabled.
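A sketch from a WSL 2 shell, assuming GPU support is enabled in Docker Desktop's settings; image, port, and token mirror the recipes above, and the models path is an arbitrary local choice:

# assumes Docker Desktop (WSL 2 backend) with GPU support enabled
docker run -d --name ai-upscaler \
  --restart unless-stopped \
  --gpus all \
  -p 5000:5000 \
  -e API_TOKEN=your-secret \
  -v "$HOME/ai-models:/app/models" \
  kuscheltier/jellyfin-ai-upscaler:latest-cuda

Docker Desktop publishes the port on the Windows host, so the plugin's AI Service URL is simply http://localhost:5000.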

Reverse proxy (optional)

Expose the AI service's operator dashboard via Traefik / Caddy / nginx if you want browser access without mapping :5000 on the host. Always enforce the X-Api-Token header at the proxy for internet-facing setups.

# Caddyfile fragment
ai.example.com {
  @has_token header X-Api-Token change-me-please
  handle @has_token {
    reverse_proxy ai-upscaler:5000
  }
  handle {
    respond 401
  }
}
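
An equivalent nginx fragment; the upstream name assumes nginx shares a Docker network with the service:

# nginx fragment, same header gate as above
server {
  server_name ai.example.com;
  location / {
    if ($http_x_api_token != "change-me-please") { return 401; }
    proxy_pass http://ai-upscaler:5000;
  }
}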