# Roadmap

Direction, not commitments. These are the items queued for the next few releases and the larger tracks that follow.
## v1.6.2 — Stability & configuration polish

- **queued** Plugin config UI: replace the 6-tab pager with a sticky sidebar so config, filters, and models are visible together.
- **queued** First-run wizard that detects hardware, tests the AI service, and picks a sensible model set automatically.
- **queued** Per-library profile presets (different settings for your anime library vs. your movies library).
- **investigation** Harden the retry policy in `HttpUpscalerService` — some users report gateway timeouts under heavy batch load.
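The retry fix is still under investigation, but the likely shape is capped exponential backoff with jitter on transient gateway errors. A minimal language-agnostic sketch in Python (the real service is C#; the function names, status codes, and defaults here are illustrative, not the plugin's actual API):

```python
import random
import time
import urllib.error
import urllib.request


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Capped exponential backoff: base * 2^attempt, clamped to cap seconds."""
    return min(cap, base * (2 ** attempt))


def fetch_with_retry(url: str, retries: int = 5) -> bytes:
    """Retry transient gateway errors (502/503/504), re-raise everything else."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=60) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in (502, 503, 504) or attempt == retries - 1:
                raise
            # Full jitter spreads the retries out so queued batch jobs
            # don't hammer the gateway in lockstep.
            time.sleep(random.uniform(0, backoff_delay(attempt)))
    raise RuntimeError("unreachable")
```

The jitter matters under heavy batch load: without it, every queued job that hit the same timeout retries at the same instant and the gateway times out again.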
## v1.7 — Pre-processing pipeline expansion

- **planned** VapourSynth-compatible filter graph as an alternative to the current ffmpeg filter chain.
- **planned** Deinterlacing preset (QTGMC-equivalent ONNX model) for DVD/VHS sources.
- **planned** Optional pre-pass grain synthesis on top of restore, so the output doesn't look plasticky.
- **stretch** Multi-resolution output (generate both 1440p and 4K in one pass, sharing extraction).
## v1.8 — Model ecosystem

- **planned** First-class "bring your own ONNX" workflow: drop a file into a well-known folder, fill in a small metadata form in the UI, and the model appears in the catalog.
- **planned** Quantization-on-load (INT8) for older GPUs with 4 GB of VRAM.
- **planned** Model-hosting plan: replace the current CDN pulls with a verifiable mirror on our side plus a checksum repo.
- **stretch** Fine-tuning UI: upload 5-10 reference stills of your content, and the service trains a LoRA for the selected base model.
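The checksum repo above is only a plan; as a sketch of what the verification step could look like, assuming SHA-256 digests pinned per model file (the helper names are hypothetical, not part of the plugin):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large ONNX models never load fully into RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(path: Path, expected: str) -> bool:
    """Compare a downloaded model against the digest pinned in the checksum repo."""
    return sha256_of(path) == expected.lower()
```

A mirror plus pinned digests means a compromised CDN (or a truncated download) is caught before the model ever reaches the catalog.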
## Continuous tracks
| Track | State | Notes |
|---|---|---|
| Kubernetes Helm chart | in-progress | Basic chart exists externally, needs upstreaming with value schemas. |
| Localization (i18n) | stable | EN, DE, FR, ES, IT, JA, ZH, KO shipped. Community PRs welcome for more. |
| Benchmark telemetry opt-in | investigation | Allow optional sharing of anonymised benchmark numbers to seed the recommendation engine. |
| ARM64 images | blocked | ONNX Runtime on ARM lacks the provider coverage we need. Revisit once onnxruntime-arm64-cuda ships. |
| macOS / Metal provider | not planned | No Metal execution provider in ONNX Runtime; would require CoreML translation which is its own stack. |
## How priorities get set
- Crashes and data-loss bugs jump to the front, always.
- Issues with ≥10 thumbs-up reactions on GitHub get pulled into the next minor.
- Sponsor requests weigh heavily but don't override crashes or stability work.
- "Shiny" model-of-the-week features wait until the surrounding plumbing (config UI, catalog) is ready to host them.
## What's not on the roadmap
- Integration with closed-source upscalers (Topaz Video AI, NVIDIA VSR). Licensing and distribution don't work for an open-source plugin.
- Paid tiers or cloud inference. The service is designed to run on the same LAN as Jellyfin; pushing it to a cloud provider removes the privacy and latency guarantees that make this plugin useful.
- Supporting Jellyfin 10.8.x. We follow the current Jellyfin stable (10.11.x) and one LTS back. Older Jellyfin branches don't get new plugin versions.
## Weigh in
Roadmap reviews happen on the GitHub Discussions page. Open a thread under Ideas to propose an item, or comment on an existing one to push it up the stack.