# UI Model - Zuntan03/EasyWan22 GitHub Wiki

## EnhanceMotion

Improves animation speed. For FastMix.

## Boost1stStep

Enhances motion. For FastMix.

## SelectModel

Switch between the EasyWan22 standard FastMix model and the base model.
## TextEncoder

- `Kijai`: Can be used stably.
- `Nag`: Can use negative prompts even with `Cfg 1` high-speed generation.
  - While in use, `CFG` must be `1`, so results differ from other TextEncoders, for better or worse.
- `Native`: Higher precision than `Kijai` with `fp8_scaled`, but prone to VRAM-related trouble.
  - Please use it only with a sufficient VRAM margin.
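The `CFG` = `1` constraint can be seen in the standard classifier-free guidance (CFG) formula: at a guidance scale of 1 the unconditional (negative-prompt) prediction cancels out entirely, which is why NAG-style guidance is needed for negative prompts to have any effect. A minimal sketch with made-up values, not EasyWan22 code:

```python
# Illustrative sketch of classifier-free guidance (CFG), not EasyWan22 code.
# At cfg = 1 the unconditional (negative-prompt) term cancels out completely,
# which is why plain negative prompts do nothing there and NAG is needed.

def cfg_mix(uncond: float, cond: float, cfg: float) -> float:
    """Standard CFG combination of unconditional and conditional predictions."""
    return uncond + cfg * (cond - uncond)

print(cfg_mix(0.25, 0.75, 1.0))  # cfg = 1: equals cond exactly, uncond is ignored
print(cfg_mix(0.25, 0.75, 5.0))  # cfg > 1: uncond (negative prompt) has an effect
```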
## TorchCompile

- Disable this when older GPUs such as the GeForce RTX 20x0 series and earlier cannot properly use compilation optimization.
  - In that case, you may also need to set `attention_mode` in `ModelLoader` to `sdpa`.
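One defensive pattern for this situation, sketched in plain Python (the helper below is hypothetical, not EasyWan22 code; `torch.compile` is the real PyTorch API such compilation settings rely on): attempt the compile step and fall back to the uncompiled model when the GPU or backend does not support it.

```python
# Hypothetical helper (not EasyWan22 code): attempt an optional optimization
# such as torch.compile and fall back to the original model on failure,
# e.g. on older GPUs where compilation optimization is not properly supported.

def compile_with_fallback(model, compile_fn):
    try:
        return compile_fn(model)
    except Exception:
        return model  # unsupported GPU/backend: run the uncompiled model


# Usage sketch with a stand-in "compiler" that always fails:
def failing_compiler(model):
    raise RuntimeError("backend not supported on this GPU")

model = {"name": "demo"}
print(compile_with_fallback(model, failing_compiler) is model)  # True: fell back
```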
## ModelLoader

- `Lora(1~5)-(High|Low)`, `Lora(1~5)-(High|Low)Strength`
  - Set manually loaded LoRAs and their strengths.
  - Please enter any so-called trigger words in `PositiveInput`.
- `BlocksToSwap`
  - If you don't understand VRAM consumption, keep it at `40`.
  - You can generate videos faster by reducing this value.
  - Whether to spend surplus VRAM on higher-precision models, higher video resolution/duration, or faster generation is up to your preference.
  - `offload_img_emb`, `offload_txt_emb`, and `prefetch_blocks` are also swap-related settings.
    - Setting `prefetch_blocks` to `1` may speed up some environments; it slowed things down in mine.
- `base_precision`
  - Changing from `fp16_fast` to `fp16` is slower but increases precision.
- `quantization`
  - When using fp8 models instead of GGUF, match this to the model's quantization.
- `load_device`
  - If you have sufficient VRAM, loading to `main_device` may speed things up.
- `attention_mode`
  - Change from `sageattn` to `sdpa` when older GPUs such as the GeForce RTX 20x0 series and earlier cannot use SageAttention.
- `FastMix-(High|Low)`
  - You can download and use FastMix models of different precision via `Download/diffusion_models/FastMix/Wan22-I2V-FastMix_*.bat`.
- `Base-(High|Low)`
  - You can download and use Base models of different precision via `Download/diffusion_models/Base/Wan2.2-I2V-A14B-*.bat`.
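As a rough intuition for the `BlocksToSwap` trade-off described above, a toy calculation (the block count and per-block size here are made-up numbers, not Wan 2.2's real layout): swapped blocks live in system RAM and are copied to the GPU on demand, so fewer swapped blocks means less copying (faster steps) but more resident VRAM.

```python
# Toy sketch of the BlocksToSwap trade-off; block count and per-block size
# are made-up numbers, not the real Wan 2.2 model layout.

def resident_vram_gb(total_blocks: int, blocks_to_swap: int, gb_per_block: float) -> float:
    """VRAM used by the transformer blocks kept on the GPU (hypothetical sizing)."""
    resident = total_blocks - blocks_to_swap
    return resident * gb_per_block

# With 40 hypothetical blocks of 0.3 GB each:
print(resident_vram_gb(40, 40, 0.3))  # swap all blocks: 0.0 GB resident, slowest
print(resident_vram_gb(40, 20, 0.3))  # swap half: 6.0 GB resident, faster steps
```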
## FAQ

### I want to use the base model instead of FastMix

You can use it by selecting `Base` in `SelectModel` under `Model`.
More detailed settings are also available in the `Base-Sampler` subgraph at the bottom left.
In my environment, the parameter adjustments are not fully refined, and compatibility with `EndImage` has not been improved.
If you have recommended settings that produce good results with any combination of features, regardless of whether LoRA or `EndImage` is used, please let me know.