Installation on Apple Silicon - vince-io-onsite/stable-diffusion-webui GitHub Wiki
The original source of this wiki is the developer of stable-diffusion-webui: AUTOMATIC1111.
If you need additional assistance or resources, you can always refer to the official documentation on AUTOMATIC1111's GitHub repository: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki
Vince has added some elaborations to this page.
Mac users: Please provide feedback on whether these instructions work for you, and whether anything is unclear or you are still having problems with your install that are not mentioned here.
Important notes
Currently most functionality in the web UI works correctly on macOS, with the most notable exceptions being CLIP interrogator and training. Although training does seem to work, it is incredibly slow and consumes an excessive amount of memory. CLIP interrogator can be used but it doesn't work correctly with the GPU acceleration macOS uses so the default configuration will run it entirely via CPU (which is slow).
Most samplers are known to work with the only exception being the PLMS sampler when using the Stable Diffusion 2.0 model. Generated images with GPU acceleration on macOS should usually match or almost match generated images on CPU with the same settings and seed.
Requirements
Hardware requirements
To check whether your Mac has at least an M1 chip, follow these steps:
- Click on the Apple logo in the top left corner of your screen.
- Click on "About This Mac".
- In the window that appears, look for the "Chip" entry. It should say "Apple M1" or something similar. (Intel Macs show a "Processor" entry instead.)
If your Mac has at least an M1 chip, you are ready to proceed with the installation of Automatic1111. If not, this tutorial will not work for you.
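If you prefer the terminal, the same check can be done with `uname`, which reports the machine architecture. A small sketch:

```shell
# Report whether this machine is Apple Silicon.
# Apple Silicon Macs report "arm64" from `uname -m`; Intel Macs report "x86_64".
check_apple_silicon() {
    arch="$(uname -m)"
    if [ "$arch" = "arm64" ]; then
        echo "Apple Silicon detected ($arch) - ready to proceed"
    else
        echo "Not Apple Silicon (architecture: $arch)"
    fi
}

check_apple_silicon
```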
Software requirements
Homebrew
To install Homebrew, you will need to open a terminal window on your mac.
- Open Spotlight by clicking on the magnifying glass icon in the top right corner of your screen, or by pressing the `Command + Space` keyboard shortcut.
- Type `Terminal` and press enter. This will open the Terminal application.
Once you have the terminal open, you can proceed with the installation of Homebrew by running the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
This command will download and run the Homebrew installation script.
Next, to make sure that Homebrew is in your shell's PATH, you need to run the following two commands:
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
To test your Homebrew installation, run the following command in your terminal:
brew doctor
This command will check your system for any potential issues that may affect the functionality of Homebrew. If there are any issues, it will give you suggestions on how to fix them. If the command returns "Your system is ready to brew", that means your Homebrew installation is successful and ready to use.
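As an extra sanity check, you can confirm that `brew` is actually on your PATH. A small sketch that prints the installed version, or a hint if the shell profile has not been reloaded yet:

```shell
# Check that brew is on PATH and print its version; otherwise print a hint.
if command -v brew >/dev/null 2>&1; then
    brew --version | head -n 1
else
    echo 'brew not found on PATH - try: eval "$(/opt/homebrew/bin/brew shellenv)"'
fi
```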
Installation
New install:
- Download the 2 models first because they are the largest files and will take the longest to download.
- Open a new terminal window and run:
brew install cmake protobuf rust python@3.10 git wget
- Go to your home directory, create a new directory called AI and navigate to it:
cd ~
mkdir AI
cd AI
- Clone the web UI repository by running
git clone https://github.com/vince-io-onsite/stable-diffusion-webui
- Place the models we downloaded in step 1 into `stable-diffusion-webui/models/Stable-diffusion`.
- Run the program by entering the command:
~/AI/stable-diffusion-webui/webui.sh
A Python virtual environment will be created and activated using venv, and any remaining missing dependencies will be automatically downloaded and installed.
- Once you see this text in the terminal:
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
you can go to your browser and enter the URL http://127.0.0.1:7860, and you should see the web UI.
- The web UI still needs to download some extension data, which happens automatically after generating an image. Go ahead and enter a simple prompt like `A red car` and press `Generate`. A blue loading bar will appear under the button; it will get stuck at ~95%, which is when the final extension data is being downloaded. Once it is done, you can generate another image and it should be much faster.
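The new-install steps above can be collected into one script. This is a sketch, not an official installer: the package list, paths, and repository URL are taken from the steps above, and `DRY_RUN` defaults to 1 so the script only prints what it would do. Set `DRY_RUN=0` to actually execute the commands.

```shell
#!/bin/sh
# Sketch of the new-install steps above. With DRY_RUN=1 (the default)
# the commands are only printed, not executed.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run brew install cmake protobuf rust python@3.10 git wget
run mkdir -p "$HOME/AI"
run git clone https://github.com/vince-io-onsite/stable-diffusion-webui "$HOME/AI/stable-diffusion-webui"
# Put your downloaded models into models/Stable-diffusion before this step:
run "$HOME/AI/stable-diffusion-webui/webui.sh"
```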
Existing Install:
If you have an existing install of web UI that was created with `setup_mac.sh`, delete the `run_webui_mac.sh` file and `repositories` folder from your `stable-diffusion-webui` folder. Then run `git pull` to update web UI and then `./webui.sh` to run it.
Downloading Stable Diffusion Models
If you don't have any models to use, Stable Diffusion models can be downloaded from Hugging Face. To download, click on a model and then click on the `Files and versions` header. Look for files listed with the `.ckpt` or `.safetensors` extensions, and then click the down arrow to the right of the file size to download them.
Some popular official Stable Diffusion models are:
- Stable Diffusion 1.4 (sd-v1-4.ckpt)
- Stable Diffusion 1.5 (v1-5-pruned-emaonly.ckpt)
- Stable Diffusion 1.5 Inpainting (sd-v1-5-inpainting.ckpt)
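Command-line downloads also work with the `wget` installed earlier. The URL below is only an illustration (the Stable Diffusion 1.4 checkpoint on Hugging Face); some models require accepting a license on the website first, in which case downloading through the browser is simpler. The final line only prints the command; remove the leading `echo` to actually download (these files are several GB).

```shell
#!/bin/sh
# Sketch: download a checkpoint straight into the web UI's model folder.
# MODEL_URL is an example; adjust it to the model you actually want.
MODEL_URL="https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt"
MODEL_DIR="$HOME/AI/stable-diffusion-webui/models/Stable-diffusion"

mkdir -p "$MODEL_DIR"
# -c resumes an interrupted download; remove the leading `echo` to run it.
echo wget -c "$MODEL_URL" -P "$MODEL_DIR"
```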
Stable Diffusion 2.0 and 2.1 require both a model and a configuration file, and image width & height will need to be set to 768 or higher when generating images:
For the configuration file, hold down the option key on the keyboard and click here to download `v2-inference-v.yaml` (it may download as `v2-inference-v.yaml.yml`). In Finder select that file, then go to the menu and select `File` > `Get Info`. In the window that appears, select the filename and change it to the filename of the model, except with the file extension `.yaml` instead of `.ckpt`. Press return on the keyboard (confirm changing the file extension if prompted), and place it in the same folder as the model (e.g. if you downloaded the `768-v-ema.ckpt` model, rename the configuration file to `768-v-ema.yaml` and put it in `stable-diffusion-webui/models/Stable-diffusion` along with the model).
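The Finder rename above can also be done from the terminal. A small sketch, with the copy wrapped in a helper so the filenames (which are examples from the text) are easy to adjust:

```shell
# Copy a downloaded SD 2.x config next to its model, renamed to <model>.yaml.
#   $1 = downloaded config file, $2 = model name without extension, $3 = models dir
install_sd2_config() {
    mkdir -p "$3"
    cp "$1" "$3/$2.yaml"
    echo "$3/$2.yaml"
}

# Example usage (filenames from the text; adjust to your own download):
# install_sd2_config ~/Downloads/v2-inference-v.yaml.yml 768-v-ema \
#     ~/AI/stable-diffusion-webui/models/Stable-diffusion
```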
Also available is a Stable Diffusion 2.0 depth model (`512-depth-ema.ckpt`). Download the `v2-midas-inference.yaml` configuration file by holding down option on the keyboard and clicking here, then rename it with the `.yaml` extension in the same way as mentioned above and put it in `stable-diffusion-webui/models/Stable-diffusion` along with the model. Note that this model works at image dimensions of 512 width/height or higher instead of 768.
Troubleshooting
Web UI Won't Start:
If you encounter errors when trying to start the web UI with `./webui.sh`, try deleting the `repositories` and `venv` folders from your `stable-diffusion-webui` folder, then update web UI with `git pull` before running `./webui.sh` again.
Poor Performance:
Currently GPU acceleration on macOS uses a lot of memory. If performance is poor (it takes more than a minute to generate a 512x512 image with 20 steps with any sampler), first try starting with the `--opt-split-attention-v1` command line option (i.e. `./webui.sh --opt-split-attention-v1`) and see if that helps. If that doesn't make much difference, open the Activity Monitor application located in /Applications/Utilities and check the memory pressure graph under the Memory tab. If memory pressure is displayed in red when an image is generated, close the web UI process and then add the `--medvram` command line option (i.e. `./webui.sh --opt-split-attention-v1 --medvram`). If performance is still poor and memory pressure is still red with that option, then instead try `--lowvram` (i.e. `./webui.sh --opt-split-attention-v1 --lowvram`). If it still takes more than a few minutes to generate a 512x512 image with 20 steps with any sampler, then you may need to turn off GPU acceleration. Open `webui-user.sh` in a text editor (e.g. Xcode) and change `#export COMMANDLINE_ARGS=""` to `export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all"`.
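If you prefer to make that edit from the terminal instead of a text editor, `sed` can rewrite the line in place. A sketch, assuming `webui-user.sh` still contains its default (possibly commented-out) `COMMANDLINE_ARGS` line:

```shell
# Set COMMANDLINE_ARGS in webui-user.sh, uncommenting the line if needed.
# A .bak backup of the original file is kept next to it.
#   $1 = path to webui-user.sh, $2 = new arguments string
set_cmdline_args() {
    sed -i.bak "s|^#*export COMMANDLINE_ARGS=.*|export COMMANDLINE_ARGS=\"$2\"|" "$1"
}

# Example (CPU fallback from the text):
# set_cmdline_args ~/AI/stable-diffusion-webui/webui-user.sh \
#     "--skip-torch-cuda-test --no-half --use-cpu all"
```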
Discussions/Feedback here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/5461