Custom Scripts - openvinotoolkit/stable-diffusion-webui GitHub Wiki

Installing and Using Custom Scripts

To install custom scripts, place them in the scripts directory and click the Reload custom script button at the bottom of the Settings tab. Installed custom scripts appear in the lower-left dropdown menu on the txt2img and img2img tabs. Below are some notable custom scripts created by Web UI users:

Accelerate with OpenVINO

Launch the OpenVINO custom script by selecting "Accelerate with OpenVINO" in the dropdown menu.


With the OpenVINO custom script, the following options can be configured:

  • Config files: A model checkpoint needs to be associated with a corresponding configuration file. Typically, the config files are downloaded automatically by the diffusers library. However, if a checkpoint comes from a different source, or is a custom checkpoint, the corresponding config file (in '.yaml' format) needs to be placed in the 'configs' directory in the WebUI root.
  • Device: With OpenVINO, the Stable Diffusion WebUI can be accelerated on Intel CPUs or GPUs. All OpenVINO-supported devices are listed in the dropdown, and users can select the device of their choice. After selecting a new device, it is recommended to exclude the first warm-up iteration from performance measurements, because the model is recompiled for the target hardware.
  • Sampling method: OpenVINO acceleration uses the Hugging Face Diffusers library. We use the equivalent sampling methods from diffusers, and a few popular sampling methods are validated. We recommend checking "Override the sampling selection from the main UI", as some of the sampling methods from the main UI are not supported yet and will fall back to the 'Euler a' sampling method.
  • Caching: OpenVINO supports caching compiled models for subsequent uses of the same model, which reduces the warm-up time significantly. It is recommended to enable this setting. The compiled models are saved in the "cache" directory in the WebUI root. Users can delete files that are no longer needed in order to save disk space; we do not delete them automatically, to avoid removing files that may still be needed later.

Note: OpenVINO applies optimizations and compiles the model for optimized performance. Compilation may take 1-2 minutes during the warm-up iteration, depending on the system configuration. The model may be recompiled when the resolution, batch size, or device changes, and may also be recompiled by PyTorch when certain samplers such as DPM++ or Karras are selected. It is recommended to exclude the warm-up time from any performance measurement. In the future, we will add support for dynamic shapes, so a warm-up run will no longer be required after resolution or batch size changes.


txt2img2img

https://github.com/ThereforeGames/txt2img2img

Greatly improves the editability of any character or subject while retaining their likeness. The main motivation for this script is improving the editability of embeddings created through Textual Inversion.

(Be careful when cloning: the repository has part of a venv checked in.)


txt2mask

https://github.com/ThereforeGames/txt2mask

Allows you to specify an inpainting mask with text, as opposed to the brush.


Mask drawing UI

https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script

Provides a local popup window powered by CV2 that allows addition of a mask before processing.


Img2img Video

https://github.com/memes-forever/Stable-diffusion-webui-video

Generates pictures one after another using img2img.

Advanced Seed Blending

https://github.com/amotile/stable-diffusion-backend/tree/master/src/process/implementations/automatic1111_scripts

This script allows you to base the initial noise on multiple weighted seeds.

Ex. seed1:2, seed2:1, seed3:1

The weights are normalized, so you can use larger numbers as above, or floating-point numbers:

Ex. seed1:0.5, seed2:0.25, seed3:0.25
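The parsing and normalization described above can be sketched in a few lines (the function name and format handling here are illustrative, not the script's actual code):

```python
def parse_seed_weights(spec):
    """Parse 'seed:weight' pairs and normalize the weights to sum to 1."""
    pairs = []
    for part in spec.split(","):
        seed, _, weight = part.strip().partition(":")
        # A missing weight defaults to 1
        pairs.append((int(seed), float(weight) if weight else 1.0))
    total = sum(w for _, w in pairs)
    return [(s, w / total) for s, w in pairs]

parse_seed_weights("10:2, 20:1, 30:1")  # → [(10, 0.5), (20, 0.25), (30, 0.25)]
```

Both examples above normalize to the same weights, which is why integer and floating-point forms are interchangeable.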

Prompt Blending

https://github.com/amotile/stable-diffusion-backend/tree/master/src/process/implementations/automatic1111_scripts

This script allows you to combine multiple weighted prompts together by mathematically combining their textual embeddings before generating the image.

Ex.

Crystal containing elemental {fire|ice}

It supports nested definitions so you can do this as well:

Crystal containing elemental {{fire:5|ice}|earth}
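Conceptually, blending combines the text embeddings as a weighted average before they are fed to the model. A minimal sketch, with plain lists standing in for the real CLIP embedding tensors (names and representation are illustrative):

```python
def blend_embeddings(weighted):
    """Weighted average of equal-length embedding vectors.

    `weighted` is a list of (vector, weight) pairs.
    """
    total = sum(w for _, w in weighted)
    dim = len(weighted[0][0])
    return [sum(vec[i] * w for vec, w in weighted) / total for i in range(dim)]

# {fire:5|ice} weights the "fire" embedding 5x as heavily as "ice":
blend_embeddings([([1.0, 0.0], 5.0), ([0.0, 1.0], 1.0)])  # → [0.8333..., 0.1666...]
```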

Animator

https://github.com/Animator-Anon/Animator

A basic img2img script that will dump frames and build a video file. Suitable for creating interesting zoom in warping movies, but not too much else at this time.

Parameter Sequencer

https://github.com/rewbs/sd-parseq

Generate videos with tight control and flexible interpolation over many Stable Diffusion parameters (such as seed, scale, prompt weights, denoising strength...), as well as input processing parameters (such as zoom, pan, 3D rotation...).

Alternate Noise Schedules

https://gist.github.com/dfaker/f88aa62e3a14b559fe4e5f6b345db664

Uses alternate generators for the sampler's sigma schedule.

Allows access to Karras, Exponential and Variance Preserving schedules from crowsonkb/k-diffusion along with their parameters.
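The Karras schedule, for example, interpolates between sigma_max and sigma_min in sigma^(1/rho) space. A pure-Python sketch mirroring the formula used in crowsonkb/k-diffusion:

```python
def get_sigmas_karras(n, sigma_min, sigma_max, rho=7.0):
    """Karras et al. (2022) schedule: interpolate in sigma**(1/rho) space, append 0."""
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    ramp = [i / (n - 1) for i in range(n)]
    return [(max_inv_rho + t * (min_inv_rho - max_inv_rho)) ** rho for t in ramp] + [0.0]
```

The result decreases monotonically from sigma_max to sigma_min, with a trailing 0.0 for the final denoising step; larger rho values concentrate more steps at low sigmas.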

Vid2Vid

https://github.com/Filarius/stable-diffusion-webui/blob/master/scripts/vid2vid.py

Takes a real video, runs img2img on its frames, and stitches them back together. Does not unpack frames to the hard drive.

Txt2VectorGraphics

https://github.com/GeorgLegato/Txt2Vectorgraphics

Create custom, scalable icons from your prompts as SVG or PDF.

Example prompts rendered as PNG and SVG: "Happy Einstein", "Mountainbike Downhill", "coffe mug in shape of a heart", "Headphones".

Loopback and Superimpose

https://github.com/DiceOwl/StableDiffusionStuff

https://github.com/DiceOwl/StableDiffusionStuff/blob/main/loopback_superimpose.py

Mixes output of img2img with original input image at strength alpha. The result is fed into img2img again (at loop>=2), and this procedure repeats. Tends to sharpen the image, improve consistency, reduce creativity and reduce fine detail.
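The superimpose step itself is a plain alpha blend; per pixel it amounts to the following (a sketch using flat pixel lists, whereas the actual script operates on images):

```python
def superimpose(original, generated, alpha):
    """Per-pixel blend: alpha weights the img2img output, (1 - alpha) the original."""
    return [alpha * g + (1 - alpha) * o for o, g in zip(original, generated)]

superimpose([0, 100], [100, 200], 0.25)  # → [25.0, 125.0]
```

At low alpha the loop stays close to the input image, which is why the procedure tends to improve consistency at the cost of creativity.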

Interpolate

https://github.com/DiceOwl/StableDiffusionStuff

https://github.com/DiceOwl/StableDiffusionStuff/blob/main/interpolate.py

An img2img script to produce in-between images. Allows two input images for interpolation. More features shown in the readme.

Run n times

https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f

Run n times with random seed.

Advanced Loopback

https://github.com/Extraltodeus/advanced-loopback-for-sd-webui

Dynamic zoom loopback with parameter variations and prompt switching, amongst other features!

prompt-morph

https://github.com/feffy380/prompt-morph

Generate morph sequences with Stable Diffusion. Interpolate between two or more prompts and create an image at each step.

Uses the new AND keyword and can optionally export the sequence as a video.

prompt interpolation

https://github.com/EugeoSynthesisThirtyTwo/prompt-interpolation-script-for-sd-webui

With this script, you can interpolate between two prompts (using the "AND" keyword) and generate as many intermediate images as you want. You can also generate a gif from the result. Works for both txt2img and img2img.


Asymmetric Tiling

https://github.com/tjm35/asymmetric-tiling-sd-webui/

Control horizontal/vertical seamless tiling independently of each other.


Force Symmetry

https://gist.github.com/missionfloyd/69e5a5264ad09ccaab52355b45e7c08f

see https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2441

Applies symmetry to the image every n steps and sends the result on to img2img.
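Horizontal symmetry of this kind can be sketched as reflecting the left half of each pixel row onto the right (an illustrative helper, not the gist's code):

```python
def mirror_rows(image):
    """Mirror the left half of each row onto the right.

    Keeps the middle column untouched for odd widths.
    """
    out = []
    for row in image:
        half = row[: len(row) // 2]
        mid = row[len(half) : len(row) - len(half)]
        out.append(half + mid + half[::-1])
    return out

mirror_rows([[1, 2, 3, 4]])  # → [[1, 2, 2, 1]]
```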


txt2palette

https://github.com/1ort/txt2palette

Generate palettes by text description. This script takes the generated images and converts them into color palettes.
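A naive way to reduce an image's pixels to a palette is to keep the most frequent colors (the actual script likely quantizes more carefully; this is an illustrative sketch):

```python
from collections import Counter

def extract_palette(pixels, n_colors):
    """Return the n most common (R, G, B) tuples as a simple palette."""
    return [color for color, _ in Counter(pixels).most_common(n_colors)]

extract_palette([(255, 0, 0)] * 3 + [(0, 0, 255)] * 2 + [(0, 255, 0)], 2)
# → [(255, 0, 0), (0, 0, 255)]
```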


XYZ Plot Script

https://github.com/xrpgame/xyz_plot_script

Generates an .html file to interactively browse the imageset. Use the scroll wheel or arrow keys to move in the Z dimension.


Expanded-XY-grid

https://github.com/0xALIVEBEEF/Expanded-XY-grid

Custom script for AUTOMATIC1111's stable-diffusion-webui that adds more features to the standard xy grid:

  • Multitool: Allows multiple parameters in one axis, theoretically allows unlimited parameters to be adjusted in one xy grid

  • Customizable prompt matrix

  • Group files in a directory

  • S/R Placeholder - replace a placeholder value (the first value in the list of parameters) with desired values.

  • Add PNGinfo to grid image


Example images: Prompt: "darth vader riding a bicycle, modifier"; X: Multitool: "Prompt S/R: bicycle, motorcycle | CFG scale: 7.5, 10 | Prompt S/R Placeholder: modifier, 4k, artstation"; Y: Multitool: "Sampler: Euler, Euler a | Steps: 20, 50"

Embedding to PNG

https://github.com/dfaker/embedding-to-png-script

Converts existing embeddings to the shareable image versions.


Alpha Canvas

https://github.com/TKoestlerx/sdexperiments

Outpaint a region. An infinite-outpainting concept that uses the two existing outpainting scripts from the AUTOMATIC1111 repo as a basis.


Random grid

https://github.com/lilly1987/AI-WEBUI-scripts-Random

Randomly enters x/y grid values.


The basic logic is the same as the x/y plot, except that internally the x type is fixed to Steps and the y type is fixed to CFG scale. It generates as many x values as the step count (10) within the range of the step1|2 values (10-30), and as many y values as the cfg count (10) within the range of the cfg1|2 values (6-15). If you enter a range with the bounds reversed, they are swapped automatically. CFG values are treated as integers, so any decimal part is ignored.
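The described behaviour (random values in a possibly reversed range, integers for CFG) can be sketched with a hypothetical helper:

```python
import random

def random_axis_values(lo, hi, count, as_int=True):
    """Draw `count` random values in [lo, hi], swapping the bounds if reversed."""
    if lo > hi:
        lo, hi = hi, lo
    if as_int:
        return [random.randint(int(lo), int(hi)) for _ in range(count)]
    return [random.uniform(lo, hi) for _ in range(count)]

steps = random_axis_values(10, 30, 10)  # 10 step values in [10, 30]
cfgs = random_axis_values(15, 6, 10)    # reversed bounds are swapped automatically
```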

Random

https://github.com/lilly1987/AI-WEBUI-scripts-Random

Simply repeats generation a number of times, without a grid.


Stable Diffusion Aesthetic Scorer

https://github.com/grexzen/SD-Chad

Rates your images.

img2tiles

https://github.com/arcanite24/img2tiles

Generates tiles from a base image. Based on the SD upscale script.


img2mosaic

https://github.com/1ort/img2mosaic

Generate mosaics from images. The script cuts the image into tiles and processes each tile separately. The size of each tile is chosen randomly.
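Randomly sized tiling can be sketched by cutting the image into rows of random height and each row into tiles of random width (illustrative, not the script's code):

```python
import random

def random_tile_boxes(width, height, min_size, max_size, seed=None):
    """Cover a width x height image with (left, top, right, bottom) boxes of random size."""
    rng = random.Random(seed)
    boxes, top = [], 0
    while top < height:
        # Clamp each tile so it never extends past the image border
        row_h = min(rng.randint(min_size, max_size), height - top)
        left = 0
        while left < width:
            tile_w = min(rng.randint(min_size, max_size), width - left)
            boxes.append((left, top, left + tile_w, top + row_h))
            left += tile_w
        top += row_h
    return boxes
```

Each box could then be cropped out and processed separately, e.g. with its own img2img pass.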


Test my prompt

https://github.com/Extraltodeus/test_my_prompt

Have you ever used a very long prompt full of words that you are not sure have any actual impact on your image? Did you lose the courage to try to remove them one by one to test if their effects are worthy of your pwescious GPU?

WELL now you don't need any courage as this script has been MADE FOR YOU!

It generates as many images as there are words in your prompt (you can select the separator of course).
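Generating the variants amounts to dropping one term at a time (a hypothetical helper mirroring the described behaviour):

```python
def prompt_ablations(prompt, separator=","):
    """Return one prompt variant per term, each with that term removed."""
    terms = [t.strip() for t in prompt.split(separator)]
    return [(separator + " ").join(terms[:i] + terms[i + 1:]) for i in range(len(terms))]

prompt_ablations("banana, on fire, snow")
# → ['on fire, snow', 'banana, snow', 'banana, on fire']
```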


Here the prompt is simply "banana, on fire, snow", and as you can see, each image is generated with one of the terms removed.

You can also test your negative prompt.

Pixel Art

https://github.com/C10udburst/stable-diffusion-webui-scripts

A simple script that resizes images by a variable amount and converts them to use a color palette of a given size.
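The two operations can be sketched on grayscale pixel grids: nearest-neighbour downscaling plus snapping each pixel to the nearest palette entry (the actual script works on full RGB images):

```python
def pixelate(image, factor, palette=None):
    """Downscale by an integer factor (keep every factor-th pixel), then
    optionally snap each value to the nearest palette entry."""
    small = [row[::factor] for row in image[::factor]]
    if palette is None:
        return small
    return [[min(palette, key=lambda c: abs(c - px)) for px in row] for row in small]

pixelate([[10, 200], [90, 250]], 1, palette=[0, 128, 255])  # → [[0, 255], [128, 255]]
```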

Example settings compared: Disabled; Enabled x8, no resize back, no color palette; Enabled x8, no color palette; Enabled x8, 16-color palette.

Prompt: japanese pagoda with blossoming cherry trees, full body game asset, in pixelsprite style
Steps: 20, Sampler: DDIM, CFG scale: 7, Seed: 4288895889, Size: 512x512, Model hash: 916ea38c, Batch size: 4

Scripts by FartyPants

https://github.com/FartyPants/sd_web_ui_scripts

Hallucinate

  • swaps negative and positive prompts

Mr. Negativity

  • more advanced script that swaps negative and positive tokens depending on Mr. negativity rage

gif2gif

https://github.com/LonicaMewinsky/gif2gif

The purpose of this script is to accept an animated gif as input, process frames as img2img typically would, and recombine them back into an animated gif. Not intended to have extensive functionality. Referenced code from prompts_from_file.

Post-Face-Restore-Again

https://github.com/butaixianran/Stable-Diffusion-Webui-Post-Face-Restore-Again

Run face restore twice in one go, from extras tab.

Infinite Zoom

https://github.com/coolzilj/infinite-zoom

Generate Zoom in/out videos, with outpainting, as a custom script for inpaint mode in img2img tab.

ImageReward Scorer

https://github.com/THUDM/ImageReward#integration-into-stable-diffusion-web-ui

An image scorer based on ImageReward, the first general-purpose text-to-image human preference reward model (RM), trained on a total of 137k pairs of expert comparisons.

Features developed to date (2023-04-24) include:

1. Score generated images and append the score to the image information.
2. Automatically filter out images with low scores.

For details including installing and feature-specific usage, check the script introduction.

Saving steps of the sampling process

(Example Script)
This script will save steps of the sampling process to a directory.

import os.path

import modules.scripts as scripts
import gradio as gr

from modules import shared, sd_samplers_common
from modules.processing import Processed, process_images

class Script(scripts.Script):
    def title(self):
        return "Save steps of the sampling process to files"

    def ui(self, is_img2img):
        path = gr.Textbox(label="Save images to path", placeholder="Enter folder path here. Defaults to webui's root folder")
        return [path]

    def run(self, p, path):
        # Default to the webui root (the current working directory) if no path is given
        path = path or "."
        os.makedirs(path, exist_ok=True)
        index = [0]

        def store_latent(x):
            # Decode the latent to an image, save it, then call the original hook
            image = shared.state.current_image = sd_samplers_common.sample_to_image(x)
            image.save(os.path.join(path, f"sample-{index[0]:05}.png"))
            index[0] += 1
            fun(x)

        # Temporarily replace the sampler's store_latent hook with the saving version
        fun = sd_samplers_common.store_latent
        sd_samplers_common.store_latent = store_latent

        try:
            proc = process_images(p)
        finally:
            sd_samplers_common.store_latent = fun

        return Processed(p, proc.images, p.seed, "")