ONNX Olive

ONNX Runtime (experimental support)

SD.Next includes experimental support for ONNX Runtime.

How to

Switch to the olive branch first.
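From the repository root (these are the same commands shown in the Olive section below):

$ git switch olive
$ git pull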

On the System tab, set Execution backend to diffusers and Diffusers pipeline to ONNX Stable Diffusion.

Performance

Performance depends on which execution provider you use.

Execution Providers

|                           | ONNX | Olive | GPU | CPU |
| ------------------------- | ---- | ----- | --- | --- |
| CPUExecutionProvider      | ✅   | ✅    | ❌  | ✅  |
| DmlExecutionProvider      | ✅   | ✅    | ✅  | ❌  |
| CUDAExecutionProvider     | ✅   | ✅    | ✅  | ❌  |
| ROCMExecutionProvider     | ✅   | 🚧    | ✅  | ❌  |
| OpenVINOExecutionProvider | ✅   | ✅    | ✅  | ✅  |

CPUExecutionProvider

Enabled by default, but not recommended.
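The CPU provider ships with the base onnxruntime package, so installing that package is all it takes:

(venv) $ pip install onnxruntime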

DmlExecutionProvider

Recommended for most users who want Olive.

You can select DmlExecutionProvider by installing onnxruntime-directml.
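For example, inside the webui's virtual environment (remove any other onnxruntime package first; see the FAQ below):

(venv) $ pip install onnxruntime-directml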

CUDAExecutionProvider

Not tested with Olive.

You can select CUDAExecutionProvider by installing onnxruntime-gpu.
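For example (again, with other onnxruntime packages removed first):

(venv) $ pip install onnxruntime-gpu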

ROCMExecutionProvider

Olive support for the ROCm execution provider is a work in progress.

You can select ROCMExecutionProvider by installing onnxruntime-training built with ROCm from here.
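The plain PyPI wheel of onnxruntime-training is typically not a ROCm build, so the package has to come from the index behind the link above. As a sketch, with the placeholder standing in for that URL:

(venv) $ pip install onnxruntime-training --extra-index-url <ROCm wheel index linked above>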

OpenVINOExecutionProvider

Not tested.

You can select OpenVINOExecutionProvider by installing openvino and onnxruntime-openvino.
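For example:

(venv) $ pip install openvino onnxruntime-openvino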

TODO

  • ONNX Stable Diffusion
  • ONNX Stable Diffusion Olive optimization
  • ONNX Stable Diffusion Img2Img
  • ONNX Stable Diffusion Inpaint
  • Reduce memory usage on optimization process
  • ONNX Stable Diffusion XL
  • ONNX Stable Diffusion XL Olive optimization
  • ONNX Stable Diffusion XL Img2Img
  • ONNX Stable Diffusion XL with Refiner
  • ONNX Stable Diffusion XL Turbo
  • ONNX Stable Diffusion XL Turbo Olive Optimization
  • ONNX Stable Diffusion XL Turbo Img2Img
  • Test execution providers.
  • Test samplers.
  • Others (LCM?, optimization on GPU?)
  • Merge into dev.


Olive (experimental support)

Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation. (from PyPI)

This feature is EXPERIMENTAL. Running it may break your existing installation. Use a fresh installation or a new virtual environment.
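One way to isolate it, as a generic sketch (SD.Next's launcher can also create its own venv):

$ python -m venv venv
$ source venv/bin/activate  # on Windows: venv\Scripts\activate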

How to

Switch to the olive branch:

$ git switch olive
$ git pull

Go to the System tab → Diffusers Settings and enable Optimize ONNX pipeline using Olive before every generation.

A video guide is available on YouTube.

From checkpoint

Model optimization occurs automatically before generation.

Source models can be in .safetensors, .ckpt, or Diffusers format. Optimization takes time, depending on your system and execution provider.

Optimized models are cached automatically and reused for later generations at the same image size (height and width).

From Huggingface

If you don't have enough system memory to optimize a model, or don't want to spend the time optimizing it yourself, you can download an already-optimized model from Huggingface.

Go to the Models → Huggingface tab and download an optimized model.

An optimized version of runwayml/stable-diffusion-v1-5 is available (lshqqytiger/stable-diffusion-v1-5-olive).
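As an alternative to the UI, the same repository can be fetched with huggingface_hub's command-line tool (assuming huggingface_hub 0.17 or newer is installed):

(venv) $ pip install huggingface_hub
(venv) $ huggingface-cli download lshqqytiger/stable-diffusion-v1-5-olive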

A video guide is available on YouTube.

Performance

| Property | Value |
| --- | --- |
| Prompt | a castle, best quality |
| Negative Prompt | worst quality |
| Sampler | Euler |
| Sampling Steps | 20 |
| Device | RX 7900 XTX 24GB |
| Versions | olive-ai 0.3.3, onnxruntime-directml 1.16.1, ROCm 5.6, torch (Olive: 1.13.1, ROCm: 2.1.0) |
| Model | runwayml/stable-diffusion-v1-5 (ROCm), lshqqytiger/stable-diffusion-v1-5-olive (Olive) |
| Precision | fp16 |
| Token Merging | Olive: 0 (not supported); ROCm: 0.5 |
[Side-by-side result images: Olive with DmlExecutionProvider vs. ROCm]

Pros and Cons

Pros

  • Generation is faster.
  • Uses less graphics memory.

Cons

  • Optimization is required for every model and image size.
  • Some features are unavailable.

FAQ

My execution provider does not show up in my settings.

Run this command, replacing onnxruntime-... with whichever onnxruntime variant you have installed (e.g. onnxruntime-directml), then try again:

(venv) $ pip uninstall onnxruntime onnxruntime-... -y

I get an error about scaled_dot_product_attention.

Downgrade torch to 1.13.1 and prevent it from auto-upgrading:

(venv) $ pip uninstall torch torchvision torch-directml
(venv) $ pip install torch==1.13.1 torchvision==0.14.1 torch-directml==0.1.13.1.dev230413
(venv) $ ./webui.(bat or sh) --use-directml --skip-requirements --safe