# UI Model
## EnhanceMotion

Improves the motion speed of the animation. For use with FastMix.
## SelectModel

Switches between the EasyWan22 standard FastMix model and the Base models.
Base models do not have CFG fixed at `1`, so you need to select something other than `Nag` in the adjacent `TextEncoder`.
## TextEncoder

- `Kijai`: Can be used stably.
- `Nag`: Allows negative prompts even with `Cfg 1` high-speed generation.
  - However, this produces different results from the other TextEncoders, for better or worse.
- `Native`: More precise than `Kijai` with `fp8_scaled`, but VRAM-related issues are common.
  - Use it with ample VRAM headroom.
## TorchCompile

- Disable this when an older GPU, such as the GeForce RTX 20x0 series or earlier, cannot properly use compilation optimization.
  - In such cases, you may also need to set `attention_mode` to `sdpa` in `ModelLoader`; see the check below.
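If you are unsure whether your GPU falls into this category, one quick way to decide is to query its CUDA compute capability. This is only a minimal sketch, assuming SageAttention needs Ampere (compute capability 8.0) or newer, which this wiki does not state explicitly; RTX 20x0 cards are Turing, compute capability 7.5.

```python
import torch

# Minimal sketch: pick TorchCompile / attention_mode based on the GPU's
# CUDA compute capability. Assumption: SageAttention needs Ampere
# (compute capability 8.0) or newer; RTX 20x0 (Turing) reports 7.5.
major, minor = torch.cuda.get_device_capability(0)
if (major, minor) >= (8, 0):
    print("Ampere or newer: keep TorchCompile on and attention_mode=sageattn")
else:
    print("Turing or older: disable TorchCompile and set attention_mode=sdpa")
```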
## ModelLoader

- `Lora(1~5)-(High|Low)`, `Lora(1~5)-(High|Low)Strength`
  - Set the LoRAs to load manually and their strengths.
  - Enter the so-called trigger words in `PositiveInput`.
- `BlocksToSwap`
  - If you don't understand VRAM usage, leave it at `40`.
  - Reducing this value allows faster video generation.
  - Choose whether to use excess VRAM for higher-precision models, increased video resolution/duration, or faster generation (see the sketch after this list).
  - `offload_img_emb`, `offload_txt_emb`, and `prefetch_blocks` are also swap-related settings.
    - Some environments may become faster with `prefetch_blocks` set to `1`. It was slower on my setup.
- `base_precision`
  - Changing from `fp16_fast` to `fp16` slows down generation but improves accuracy (see the note after this list).
- `quantization`
  - When using fp8 models instead of GGUF, match this setting to the model's quantization.
- `load_device`
  - If you have ample VRAM, loading to `main_device` might provide a speed improvement.
- `attention_mode`
  - For older GPUs, such as the GeForce RTX 20x0 series or earlier, that cannot use SageAttention, change this from `sageattn` to `sdpa`.
- `FastMix-(High|Low)`
  - Download and use FastMix models of different precisions via `Download/diffusion_models/FastMix/Wan22-I2V-FastMix_*.bat`.
- `Base-(High|Low)`
  - Download and use Base models of different precisions via `Download/diffusion_models/Base/Wan2.2-I2V-A14B-*.bat`.
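To get a feel for the `BlocksToSwap` trade-off mentioned above, the sketch below estimates how much weight data is kept in system RAM instead of VRAM. The numbers are illustrative assumptions only (roughly 14e9 DiT parameters spread over 40 transformer blocks, fp8 weights at about 1 byte per parameter); they are not taken from this wiki.

```python
# Rough, illustrative estimate of the BlocksToSwap trade-off.
# Assumptions (not from this wiki): ~14e9 DiT parameters spread evenly
# over 40 transformer blocks, fp8 weights (~1 byte per parameter).
params = 14e9
num_blocks = 40
bytes_per_param = 1  # roughly 2 for fp16 models
per_block_gb = params * bytes_per_param / num_blocks / 1e9

for blocks_to_swap in (40, 20, 0):
    offloaded_gb = blocks_to_swap * per_block_gb
    print(f"BlocksToSwap={blocks_to_swap}: ~{offloaded_gb:.1f} GB of weights "
          f"held in system RAM instead of VRAM")
```

Swapped blocks have to be copied back to the GPU as they are needed, which is why lowering the value speeds up generation when VRAM allows.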
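On `base_precision`: the gap between `fp16_fast` and `fp16` generally comes from allowing reduced-precision accumulation in fp16 matrix multiplies. The snippet below shows the PyTorch flag that controls this behaviour; treating it as what `fp16_fast` toggles is an assumption, not something stated in this wiki.

```python
import torch

# Assumption: "fast" fp16 modes allow reduced-precision accumulation in
# fp16 matmuls, trading a little accuracy for speed. This is the PyTorch
# switch for that behaviour, shown for illustration only.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = True  # faster, less accurate
# Set it to False for the slower, more accurate behaviour.
```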
## FAQ

### I want to use Base models instead of FastMix

Select `Base` in `SelectModel` under `Model`, and choose `Kijai` in the adjacent `TextEncoder`.
More detailed settings are available in the `Base-Sampler` subgraph at the bottom left.
I haven't fully tuned the parameters locally, and compatibility with EndImage hasn't been improved yet. If you have recommended settings that work well with any combination of features, regardless of whether LoRAs or an EndImage are used, please let me know.