# Stable Diffusion
- Prerequisites
- Make conda environment
- Download & Installation
- Operation Confirmation
- Installing stable-diffusion-webui
- Corresponding Windows System Environments
- Troubleshooting
## Prerequisites

Official site | Installation Guide
---|---
Git for Windows | Git for Windows Installation
TortoiseGit (optional) | TortoiseGit Installation
> [!WARNING]
> Python 3.10.6 is recommended; nevertheless, as of 2023/09/03, operation has also been confirmed with Python 3.11.4.
## Make conda environment

> [!IMPORTANT]
> To activate a conda environment, you must use the Anaconda Prompt.

Here we use the environment name 'diffusion_env'.
E:\> conda create --name diffusion_env python==3.10.6
The environments are located under

C:\Users\%USERNAME%\anaconda3\envs

Activate the environment:
E:\> conda activate diffusion_env
For reference:
(diffusion_env) E:\> conda list
# packages in environment at C:\Users\kan\anaconda3\envs\diffusion_env:
#
# Name Version Build Channel
bzip2 1.0.8 he774522_0
ca-certificates 2023.05.30 haa95532_0
libffi 3.4.4 hd77b12b_0
openssl 1.1.1v h2bbff1b_0
pip 23.2.1 py310haa95532_0
python 3.10.0 h96c0403_3
setuptools 68.0.0 py310haa95532_0
sqlite 3.41.2 h2bbff1b_0
tk 8.6.12 h2bbff1b_0
tzdata 2023c h04d1e81_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wheel 0.38.4 py310haa95532_0
xz 5.4.2 h8cc25b3_0
zlib 1.2.13 h8cc25b3_0
(diffusion_env) E:\stable_diffusion>pip list
Package Version
---------- -------
pip 23.2.1
setuptools 68.0.0
wheel 0.38.4
The following channels may also need to be added:

conda config --append channels conda-forge
conda config --append channels nvidia
## Download & Installation

(diffusion_env) E:\> mkdir stable_diffusion
(diffusion_env) E:\> cd stable_diffusion
(diffusion_env) E:\stable_diffusion> git clone https://github.com/Stability-AI/stablediffusion.git
(diffusion_env) E:\stable_diffusion> cd stablediffusion
(diffusion_env) E:\stable_diffusion\stablediffusion> pip install -r requirements.txt
> [!WARNING]
> If you get the following error
> ```
> ...
> running build_rust
> error: can't find Rust compiler
> ...
> ```
> you need to install Rust from
> [rust-lang.org](https://www.rust-lang.org/tools/install)
CUDA 11.8 is probably the recommended version. Execute the following command:
(diffusion_env) E:\stable_diffusion\stablediffusion> conda install cudatoolkit
There are two methods of installing PyTorch:

- Method 1: Installing PyTorch with Conda
- Method 2: Installing PyTorch with Pip

Both methods are straightforward. pytorch.org says Anaconda is their recommended package manager since it installs all dependencies, but here we choose pip.

The site suggests

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

but we pin the versions explicitly:
(diffusion_env) E:\stable_diffusion\stablediffusion> pip install --pre torch==2.1.0.dev20230817+cu121 torchvision==0.16.0.dev20230818+cu121 torchaudio==2.1.0.dev20230818+cu121 --index-url https://download.pytorch.org/whl/nightly/cu121
> [!IMPORTANT]
> This is one of the most important checkpoints: at this point torch.cuda should be available. Check it!
(diffusion_env) E:\stable_diffusion\stablediffusion> python
>>> import torch
>>> print(torch.__version__)
2.1.0.dev20230817+cu121
>>> torch.cuda.is_available()
True
>>> exit()
For reference:
(diffusion_env) E:\stable_diffusion\stablediffusion> conda list | findstr torch
open-clip-torch 2.7.0 pypi_0 pypi
pytorch-lightning 1.4.2 pypi_0 pypi
torch 2.1.0.dev20230817+cu121 pypi_0 pypi
torchaudio 2.1.0.dev20230818+cu121 pypi_0 pypi
torchmetrics 0.6.0 pypi_0 pypi
torchvision 0.16.0.dev20230818+cu121 pypi_0 pypi
(diffusion_env) E:\stable_diffusion\stablediffusion> pip install git+https://github.com/facebookresearch/xformers
Successfully installed xformers-0.0.22+cfad52d.d2023090
Upon successful installation, the code will automatically default to memory efficient attention for the self- and cross-attention layers in the U-Net and autoencoder.
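As a quick sanity check (a minimal sketch, assuming the CUDA-enabled nightly torch build above is installed), you can call the memory-efficient attention op directly from Python:

```python
# Sanity check: run xformers' memory-efficient attention on a small
# random batch. Assumes the CUDA nightly torch build installed above.
import torch
import xformers
import xformers.ops as xops

print("xformers version:", xformers.__version__)

# Query/key/value in (batch, seq_len, heads, head_dim) layout, fp16 on the GPU.
q = torch.randn(1, 64, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 64, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 64, 8, 64, device="cuda", dtype=torch.float16)

out = xops.memory_efficient_attention(q, k, v)
print("output shape:", tuple(out.shape))  # expected: (1, 64, 8, 64)
```

Running `python -m xformers.info` should also print which backends and kernels xformers detected.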
(diffusion_env) E:\stable_diffusion\stablediffusion> pip install git+https://github.com/huggingface/transformers
Successfully uninstalled transformers-4.19.2
Successfully installed safetensors-0.3.3 transformers-4.33.0.dev0
Set the Windows environment variable XFORMERS_FORCE_DISABLE_TRITON=1.
Triton was installed from vcpkg, but xformers could not recognize it, so we disable it explicitly.
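To double-check that the variable is visible and whether a triton package is present at all, a small standard-library sketch like the following can help (open a new Anaconda Prompt after changing system environment variables so the change takes effect):

```python
# Check whether XFORMERS_FORCE_DISABLE_TRITON is visible to this process
# and whether a 'triton' package could be imported at all.
import importlib.util
import os

print("XFORMERS_FORCE_DISABLE_TRITON =", os.environ.get("XFORMERS_FORCE_DISABLE_TRITON", "<not set>"))
print("triton importable:", importlib.util.find_spec("triton") is not None)
```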
## Operation Confirmation

Download v2-1_768-ema-pruned.ckpt from stabilityai/stable-diffusion-2-1.
(diffusion_env) E:\stable_diffusion\stablediffusion> python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt "E:\difwork\SD2.1-v\v2-1_768-ema-pruned.ckpt" --config "E:\stable_diffusion\stablediffusion\configs\stable-diffusion\v2-inference-v.yaml" --H 768 --W 768 --device cuda
> [!NOTE]
> txt2img.py generates 9 images, so it takes a long time and a lot of memory. See the Troubleshooting section for out-of-memory errors. If you cannot fix the error, skip this step.

For img2img we also need to download SD2.1-base\v2-1_512-ema-pruned.ckpt.
(diffusion_env) E:\stable_diffusion\stablediffusion> python scripts/img2img.py --prompt "To create a commentary on modern technology and classical art, use an art style like Pop Art that often addresses cultural themes. Clearly mention the juxtaposition of elements. Pop Art painting of a modern smartphone with classic art pieces appearing on the screen." --init-img "E:\difwork\SD2.1-base\sample.png" --strength 0.8 --ckpt "E:\difwork\SD2.1-base\v2-1_512-ema-pruned.ckpt"
The results are saved under outputs/img2img-samples. Enjoy.
## Installing stable-diffusion-webui

(diffusion_env) E:\stable_diffusion\stablediffusion> cd ..
(diffusion_env) E:\stable_diffusion> git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
(diffusion_env) E:\stable_diffusion> cd stable-diffusion-webui
Edit webui-user.bat to include --xformers (or --reinstall-xformers, to force a reinstall) in the command line arguments, as well as the desired package version. Then run webui-user.bat as normal.
webui-user.bat:
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set XFORMERS_PACKAGE=xformers==0.0.20
set COMMANDLINE_ARGS=--xformers
call webui.bat
> [!IMPORTANT]
> We need to install xformers for the webui, and under the current conditions it should be version 0.0.20, not 0.0.22.

Then execute webui-user.bat:
(diffusion_env) E:\stable_diffusion\stable-diffusion-webui> webui-user.bat
Running on local URL: http://127.0.0.1:7860
Applying attention optimization: xformers... done.
...
** Gooooal!!! **
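If you want to confirm from a script that the server is really serving, a minimal sketch using only the standard library (the URL is the default one printed above) is:

```python
# Ping the local webui URL printed by webui-user.bat.
import urllib.request

url = "http://127.0.0.1:7860"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(url, "->", resp.status)  # 200 means the UI is reachable
except Exception as exc:
    print("webui not reachable:", exc)
```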
For reference:
(diffusion_env) E:\webui\stable-diffusion-webui>python
>>> import torch
>>> print(torch.__version__)
2.1.0.dev20230817+cu121
>>> torch.cuda.is_available()
True
>>> exit()
(diffusion_env) E:\webui\stable-diffusion-webui>conda list | findstr torch
open-clip-torch 2.7.0 pypi_0 pypi
pytorch-lightning 1.4.2 pypi_0 pypi
torch 2.1.0.dev20230817+cu121 pypi_0 pypi
torchaudio 2.1.0.dev20230818+cu121 pypi_0 pypi
torchmetrics 0.6.0 pypi_0 pypi
torchvision 0.16.0.dev20230818+cu121 pypi_0 pypi
## Corresponding Windows System Environments

The following Windows environment variables are checked by the programs (a small script to print them is shown after this list):
CUDA_HOME = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2
CUDA_PATH = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
PYTORCH_CUDA_ALLOC_CONF = garbage_collection_threshold:0.6,max_split_size_mb:4096
PYTORCH_NO_CUDA_MEMORY_CACHING = 1
XFORMERS_FORCE_DISABLE_TRITON = 1
These are probably not referenced:
CUDA_PATH_V11_8 = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
CUDA_PATH_V12_2 = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2
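A minimal sketch for printing these variables as seen from the current Anaconda Prompt session:

```python
# Print the CUDA- and PyTorch-related environment variables listed above.
import os

VARS = [
    "CUDA_HOME",
    "CUDA_PATH",
    "CUDA_PATH_V11_8",
    "CUDA_PATH_V12_2",
    "PYTORCH_CUDA_ALLOC_CONF",
    "PYTORCH_NO_CUDA_MEMORY_CACHING",
    "XFORMERS_FORCE_DISABLE_TRITON",
]

for name in VARS:
    print(f"{name} = {os.environ.get(name, '<not set>')}")
```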
## Troubleshooting

If you get an error like the following,

torch.cuda.OutOfMemoryError: CUDA out of memory.

set the Windows system environment variables, for example:
set PYTORCH_NO_CUDA_MEMORY_CACHING=1
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:32
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:1024
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:4096
> [!NOTE]
> About max_split_size_mb:
> max_split_size_mb prevents the native allocator from splitting blocks larger than this size (in MB). The default value is unlimited, which easily causes out-of-memory errors. So set max_split_size_mb to the largest integer smaller than the minimum size of the video-memory request that triggered the out-of-memory error. That keeps large images feasible while maximizing performance.
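The out-of-memory message itself typically reports how large the failed request was ("Tried to allocate ... MiB"). To see how much VRAM the card has and what the allocator is currently holding, a minimal torch sketch (run in the same environment that hit the error) is:

```python
# Inspect GPU memory from the environment that hit the OOM error.
import torch

props = torch.cuda.get_device_properties(0)
print("GPU:", props.name, f"({props.total_memory / 1024**2:.0f} MB VRAM)")
print("allocated:", round(torch.cuda.memory_allocated(0) / 1024**2), "MB")
print("reserved: ", round(torch.cuda.memory_reserved(0) / 1024**2), "MB")

# Detailed breakdown of the caching allocator state.
print(torch.cuda.memory_summary(0))
```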
If you change the environment but keep using the same directory, delete the venv folder completely:

stable-diffusion-webui\venv

The webui will create a new venv automatically.
The webui works with many types of model files: .ckpt, .safetensors, .pt, .bin, .pth.
You must place these files in the correct locations; otherwise they will cause many warnings or even errors.
See Webui-Tips.
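As a rough helper (a sketch only; the root path is the clone location used above, and the correct subdirectories, e.g. models/Stable-diffusion for checkpoints, should be checked against Webui-Tips), you can list where model files currently sit:

```python
# Rough helper: list model-like files under the webui folder and the
# subdirectory each one currently sits in.
from pathlib import Path

WEBUI_ROOT = Path(r"E:\stable_diffusion\stable-diffusion-webui")  # adjust to your install
MODEL_EXTS = {".ckpt", ".safetensors", ".pt", ".bin", ".pth"}

for path in sorted(WEBUI_ROOT.rglob("*")):
    if path.suffix.lower() in MODEL_EXTS:
        size_mb = path.stat().st_size / 1024**2
        print(f"{path.relative_to(WEBUI_ROOT)}  ({size_mb:.0f} MB)")
```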