Paper Models Inference

First, make sure you have the latest versions of all the models in the fsgan/weights directory. To obtain them, fill out this form, place the downloaded download_fsgan_models.py script in the root directory of the fsgan repository, and run:

python download_fsgan_models.py

Check out the v1 branch:

git checkout v1
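
A minimal end-to-end sketch of the setup, assuming the repository is cloned fresh and that download_fsgan_models.py has been obtained through the form above:

# Clone the repository (sketch; adjust paths to your environment)
git clone https://github.com/YuvalNirkin/fsgan.git
cd fsgan

# Copy the download_fsgan_models.py script obtained via the form into this
# directory, then fetch the pretrained models into fsgan/weights:
python download_fsgan_models.py

# Switch to the v1 branch used by the inference scripts below:
git checkout v1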

Video to Video Face Swapping

Single source and target:

python face_swap_video2video.py ../docs/examples/shinzo_abe.mp4 -t ../docs/examples/conan_obrien.mp4 -o .

Batch version
All possible pairs in a directory:

python face_swap_video2video_batch.py <input directory> -o <output directory>

Source and target directories:

python face_swap_video2video_batch.py <source directory> <target directory> -o <output directory>

Root directory and a pairs list file containing two relative paths to videos in each row (an example pairs file is shown after the flag list below):

python face_swap_video2video_batch.py <root directory> <pairs list file> -o <output directory>

  • Set --verbose to 1 to generate the ablation figure, or to 2 to output complete debug information.
  • Set --output_crop to output frames cropped around the head.
  • Set --reverse_output to reverse the output file name to <target>_<source>.
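
The pairs list file referenced above contains one source/target pair per row, with both paths given relative to the root directory. An illustrative file (hypothetical video names; the two columns are assumed to be whitespace-separated, so verify the delimiter against the batch script if in doubt):

videos/source_a.mp4 videos/target_b.mp4
videos/source_a.mp4 videos/target_c.mp4
videos/source_b.mp4 videos/target_c.mp4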

Image to Video Face Swapping

python face_swap_image2video.py <source image> -t <target video> -o <output directory>
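
For example, assuming a source portrait named source_face.jpg (a hypothetical file) and the example target clip bundled with the repository:

python face_swap_image2video.py source_face.jpg -t ../docs/examples/conan_obrien.mp4 -o .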

Video to Images Face Swapping

python face_swap_video2images.py <source video> -t <target images directory> -o <output directory>

Images to Images Face Swapping

python face_swap_images2images.py <source images directory> -t <target images directory> -o <output directory>

Image to Video Reenactment (Pose and Expression)

Simple version (used to generate the qualitative face reenactment figures):

python reenactment.py <source image> -t <target video> -o <output directory>
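
For example, reenacting a hypothetical source portrait source_face.jpg with the example target clip used above:

python reenactment.py source_face.jpg -t ../docs/examples/conan_obrien.mp4 -o .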

Recurrent version (used to generate the reenactment limitations figure):

python reenactment_stepwise.py <source image> -t <target video> -o <output directory>

Video to Video Reenactment (Expression Only)

Single source and target:

python expression_reenactment_video2video.py <source video> -t <target video> -o <output directory>
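
An illustrative invocation, assuming the same example clips used in the face swapping section:

python expression_reenactment_video2video.py ../docs/examples/shinzo_abe.mp4 -t ../docs/examples/conan_obrien.mp4 -o .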

Batch version
All possible pairs in a directory:

python expression_reenactment_video2video_batch.py <input directory> -o <output directory>

Source and target directories:

python expression_reenactment_video2video_batch.py <source directory> <target directory> -o <output directory>

Root directory and a pairs list file containing two relative paths to videos in each row:

python expression_reenactment_video2video_batch.py <root directory> <pairs list file> -o <output directory>
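
The pairs list file here uses the same two-paths-per-row format described for the face swapping batch script. A rough shell sketch for generating one by pairing every source video with every target video (src/ and tgt/ are illustrative subdirectory names under the root directory, and whitespace-separated columns are assumed):

cd <root directory>
for s in src/*.mp4; do
  for t in tgt/*.mp4; do
    echo "$s $t" >> pairs_list.txt
  done
done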