Deployment scenarios
Copy-pasteable recipes for the platforms this plugin is most commonly deployed on.
TrueNAS SCALE (25.x)
Runs Jellyfin as a custom app. The AI service goes in a second app.
- Jellyfin plugin: upload the `JellyfinUpscalerPlugin-v1.6.1.16.zip` contents to `/mnt/<pool>/Jellyfin/Config/plugins/AI Upscaler Plugin_1.6.1.4/` via Datasets → Edit ACL or Shell, then `chown -R 568:568` the folder.
- AI service app: Apps → Discover Apps → Custom App. Minimal compose:
```yaml
services:
  ai-upscaler:
    image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
    runtime: nvidia
    environment:
      - API_TOKEN=your-secret-here
      - NVIDIA_VISIBLE_DEVICES=all
    ports:
      - 5000:5000
    volumes:
      - /mnt/Addon/AI/models:/app/models
    restart: unless-stopped
```
- Jellyfin plugin config: AI Service URL = `http://<truenas-ip>:5000`, with the same token as the container.

If you manage the app through the TrueNAS API, note that the `values` block is a destructive merge: if you send only one sub-block, TrueNAS replaces the entire block. Always read the current config (`POST /app/config` with `"jellyfin"`), merge your deltas locally, then PUT the full block back.
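The read-merge-write step above can be sketched in Python. The `deep_merge` helper and the example keys are illustrative, not part of the TrueNAS API; the point is that your delta is applied on top of the full config returned by the API before the whole block is sent back.

```python
# Illustrative read-merge-write pattern for a destructive-merge API.
# deep_merge applies only your deltas on top of the full config,
# so sibling keys you did not touch are preserved.
def deep_merge(base: dict, delta: dict) -> dict:
    """Return base with delta applied recursively (delta wins on conflicts)."""
    merged = dict(base)
    for key, value in delta.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical example: change only the published port without
# dropping sibling keys such as the runtime setting.
current = {"values": {"ai-upscaler": {"ports": ["5000:5000"], "runtime": "nvidia"}}}
delta = {"values": {"ai-upscaler": {"ports": ["5100:5000"]}}}
full = deep_merge(current, delta)
# "runtime": "nvidia" survives the merge; PUT `full`, not `delta`.
```

Sending `delta` alone would be exactly the failure mode described above: the untouched `runtime` key would vanish from the stored config.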
Unraid (6.12+)
Install Jellyfin via Community Applications, then the AI service as a separate container.
- Apps → Search "AI Upscaler" — or add a manual container template.
- Template fields:
  - Repository: `kuscheltier/jellyfin-ai-upscaler:latest-cuda`
  - Network: `bridge`
  - Port: `5000:5000`
  - Path: `/mnt/user/appdata/ai-upscaler/models` → `/app/models`
  - Variable: `API_TOKEN=` your secret
  - Extra params: `--gpus all` (requires the nvidia-driver plugin)
- Install the plugin DLL in `/mnt/user/appdata/jellyfin/plugins/AI Upscaler Plugin_1.6.1.4/` (matches the LinuxServer.io container's plugin path), then `chown -R 99:100` the folder (Unraid's standard `nobody:users` user and group).
Synology DSM 7.x (CPU-only path)
Container Manager (formerly Docker). Use the CPU image — no GPU passthrough on consumer NAS.
```shell
docker run -d --name ai-upscaler \
  --restart unless-stopped \
  -p 5000:5000 \
  -v /volume1/docker/ai-upscaler/models:/app/models \
  -e API_TOKEN=your-secret \
  kuscheltier/jellyfin-ai-upscaler:latest-cpu
```
- Jellyfin plugin folder: `/var/packages/Jellyfin/var/plugins/AI Upscaler Plugin_1.6.1.4/` (location varies by DSM version).
- Schedule the Scan & Upscale Library task for 3 AM only — CPU mode is batch-overnight territory.
- Use the lightest models: `fsrcnn-x2`, `espcn-x4`, `waifu2x-upconv-x2`.
Bare Docker + docker-compose
Full stack on a Linux box. Both Jellyfin and the AI service in one compose file.
```yaml
version: "3.9"
services:
  jellyfin:
    image: jellyfin/jellyfin:10.11.8
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./jellyfin-config:/config
      - ./jellyfin-cache:/cache
      - /path/to/media:/media:ro
    environment:
      - JELLYFIN_PublishedServerUrl=http://192.168.1.10:8096
  ai-upscaler:
    image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
    restart: unless-stopped
    runtime: nvidia
    ports:
      - 5000:5000
    environment:
      - API_TOKEN=${API_TOKEN}
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - ./ai-models:/app/models
    depends_on:
      - jellyfin
```
After `docker compose up -d`, copy the plugin DLL into `./jellyfin-config/plugins/AI Upscaler Plugin_1.6.1.4/` and restart the Jellyfin container.
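The compose file reads `API_TOKEN` from the shell environment or from an `.env` file next to `docker-compose.yml`. One way to generate a strong value (assuming `openssl` is available; any random string works):

```shell
# Write a random 64-character hex token into .env; docker compose
# picks this file up automatically for ${API_TOKEN} substitution.
echo "API_TOKEN=$(openssl rand -hex 32)" > .env
```

Use the same value in the plugin's token field.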
Kubernetes
There's no official Helm chart yet (see Roadmap), but the service image runs fine as a standard Deployment with the nvidia.com/gpu device plugin. Minimal manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-upscaler
spec:
  replicas: 1
  selector:
    matchLabels: { app: ai-upscaler }
  template:
    metadata:
      labels: { app: ai-upscaler }
    spec:
      containers:
        - name: service
          image: kuscheltier/jellyfin-ai-upscaler:latest-cuda
          ports: [{ containerPort: 5000 }]
          env:
            - name: API_TOKEN
              valueFrom: { secretKeyRef: { name: ai-upscaler, key: token } }
          resources:
            limits: { nvidia.com/gpu: 1 }
          volumeMounts:
            - { name: models, mountPath: /app/models }
      volumes:
        - name: models
          persistentVolumeClaim: { claimName: ai-upscaler-models }
```
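The Deployment references a Secret (`ai-upscaler`, key `token`) and a PVC (`ai-upscaler-models`) that you must create yourself, and you will want a Service so Jellyfin can reach port 5000. A minimal sketch (the storage size is a placeholder; adjust to your model set):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ai-upscaler
spec:
  selector: { app: ai-upscaler }
  ports:
    - port: 5000
      targetPort: 5000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ai-upscaler-models
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests: { storage: 10Gi }
```

Create the secret with `kubectl create secret generic ai-upscaler --from-literal=token=your-secret`, then point the plugin at `http://ai-upscaler.<namespace>.svc:5000`.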
Windows host
If Jellyfin itself runs on Windows and you have an NVIDIA GPU locally, the AI service can run in WSL 2 with the CUDA-enabled runtime, or via Docker Desktop with GPU passthrough enabled.
- Enable WSL 2 GPU passthrough (Windows 11 with a current NVIDIA driver enables it automatically).
- `wsl --install -d Ubuntu`, then `docker run ... --gpus all kuscheltier/jellyfin-ai-upscaler:latest-cuda`.
- Point the plugin at `http://localhost:5000`.
- The plugin DLL lives at `%ProgramData%\Jellyfin\Server\plugins\AI Upscaler Plugin_1.6.1.4\`.
Reverse proxy (optional)
Expose the AI service's operator dashboard via Traefik, Caddy, or nginx if you want browser access without mapping `:5000` on the host. For internet-facing setups, always enforce the `X-Api-Token` header at the proxy.
```caddyfile
# Caddyfile fragment
ai.example.com {
    @has_token header X-Api-Token change-me-please
    handle @has_token {
        reverse_proxy ai-upscaler:5000
    }
    respond 401
}
```
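For nginx, a roughly equivalent server block looks like the following sketch. The hostname, upstream address, and token value mirror the Caddy example; TLS and `listen` directives are placeholders for your own setup.

```nginx
# Rough nginx equivalent of the Caddy fragment above.
server {
    listen 443 ssl;
    server_name ai.example.com;

    location / {
        # Reject requests that don't carry the expected token.
        if ($http_x_api_token != "change-me-please") {
            return 401;
        }
        proxy_pass http://ai-upscaler:5000;
        proxy_set_header Host $host;
    }
}
```

As with the Caddy fragment, a static shared token is a coarse gate; for anything beyond a home lab, layer it with proper authentication.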