Custom inpainting model

Diffusion models are capable of inpainting by default, but this naive approach suffers from exposure bias: the output sometimes ignores the surrounding image entirely and is driven purely by the text prompt.
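To make that failure mode concrete, here is a minimal sketch of the naive mask-blend approach with an unmodified diffusion model, assuming a diffusers-style scheduler with `step` and `add_noise` methods. All names here are illustrative, not this repo's actual code:

```python
import torch

def naive_inpaint_step(model, scheduler, x_t, t, cond, source_latent, mask):
    """One denoising step of mask-blend inpainting with an unmodified model.

    mask is 1 where content is regenerated, 0 where the source is kept.
    The model never sees the mask or the source image, so inside the
    masked region it can ignore image context and follow only the text
    conditioning: the exposure-bias failure described above.
    """
    eps = model(x_t, t, cond)                         # predict noise as usual
    x_prev = scheduler.step(eps, t, x_t).prev_sample  # ordinary sampler step

    # re-noise the source to the current timestep and paste it back into
    # the known region; this blend is the only source of image context
    noise = torch.randn_like(source_latent)
    source_t = scheduler.add_noise(source_latent, noise, t)
    return mask * x_prev + (1 - mask) * source_t
```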

This custom-trained inpainting/outpainting model is based on SD 1.4, fine-tuned for an additional 100k steps at a batch size of 256.
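The wiki does not spell out the conditioning scheme, but a common way to train a dedicated inpainting model (the approach later used by the official SD inpainting checkpoints) is to widen the UNet's input convolution so the mask and masked-image latent are concatenated with the noisy latent; whether this repo does exactly that is an assumption. A sketch of the weight surgery under that assumption:

```python
import torch
import torch.nn as nn

def widen_conv_in(old_conv: nn.Conv2d, extra_channels: int = 5) -> nn.Conv2d:
    """Extend a pretrained 4-channel latent input conv to 9 channels
    (4 noisy latent + 4 masked-image latent + 1 mask).

    The new channels are zero-initialized so the widened model initially
    behaves exactly like the base model, then learns to use the mask.
    """
    new_conv = nn.Conv2d(
        old_conv.in_channels + extra_channels,
        old_conv.out_channels,
        kernel_size=old_conv.kernel_size,
        stride=old_conv.stride,
        padding=old_conv.padding,
    )
    with torch.no_grad():
        new_conv.weight.zero_()
        new_conv.weight[:, : old_conv.in_channels] = old_conv.weight
        new_conv.bias.copy_(old_conv.bias)
    return new_conv
```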

Code and example CLI commands for inpainting can be found in the main readme: https://github.com/Jack000/glid-3-xl-stable

source image:

[image: guy]

photo by Thái An via Unsplash: https://unsplash.com/photos/zTaHFYuQPZM

A comparison between the custom-trained model and untrained (base) inpainting; results are not cherry-picked:

prompt: "a cybernetic cyberpunk man"

automatic1111 inpainting (base SD 1.4, no additional training; 0.75 denoising strength):

[image: inpaint-auto]

automatic1111 inpainting (base SD 1.4, no additional training; 1.0 denoising strength):

[image: inpaint-auto-1]
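For reference, denoising strength sets how much of the noise schedule is re-run on top of the init image: at 0.75 the source is re-noised three quarters of the way and keeps some of its structure, while at 1.0 the masked region starts from pure noise. A minimal sketch of the usual arithmetic (exact rounding varies by implementation):

```python
def steps_to_run(num_steps: int, strength: float) -> int:
    # the sampler skips the first (1 - strength) fraction of the schedule
    # and denoises the partially-noised init image for the remainder
    return round(num_steps * strength)

print(steps_to_run(50, 0.75))  # 38: source structure partly preserved
print(steps_to_run(50, 1.0))   # 50: full schedule, init fully noised
```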

glid-3 inpainting (custom-trained model, 1.0 denoising strength):

[image: inpaint]

outpainting prompt: "a man wearing a sharp suit"

automatic1111 outpainting (0.75 denoising strength):

[image: outpaint-auto]

glid-3 outpainting (1.0 denoising strength):

[image: outpaint]

note: for best outpainting results, erase hard edges with the brush tool
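One way to do this programmatically is to feather the mask, so the transition between kept and generated pixels is gradual rather than a hard seam. A small sketch using Pillow (file names and blur radius are placeholders):

```python
from PIL import Image, ImageFilter

# soften the mask edge so generated content blends into the source
# image instead of meeting it at a hard seam
mask = Image.open("mask.png").convert("L")
soft = mask.filter(ImageFilter.GaussianBlur(radius=16))
soft.save("mask_feathered.png")
```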