What is Wav2Lip?

Wav2Lip is an AI model that lip-syncs a video of a face to a given audio track. It was introduced in the paper “A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild” (Prajwal et al., ACM Multimedia 2020).

How do you animate lip sync using the Wav2Lip GitHub code?

1. First, download the code from the Wav2Lip GitHub repository. Open a command prompt in the directory where you want to install it and type:
>git clone https://github.com/Rudrabha/Wav2Lip.git
2. Download the pretrained model wav2lip_gan.pth and put it in the “Wav2Lip\checkpoints” directory.
3. You need a virtual environment to run the code. If you haven’t installed Anaconda3 yet, install it first.
4. Set up a conda environment as described in the repository’s instructions (see the sketch below). If you run into problems during setup, you can download a ready-to-use wav2lip_env from Gumroad (compatible with CUDA 11.8). Unzip it and put it in the “envs” directory of your Anaconda3 installation.
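If you prefer to build the environment yourself, here is a minimal sketch. The environment name matches the one used below; the Python version and “requirements.txt” come from the repository’s README, so check there in case they have changed:
>conda create -n wav2lip_env python=3.6
>conda activate wav2lip_env
>pip install -r requirements.txt
Run the last command inside the “Wav2Lip” directory. You also need ffmpeg available on your PATH.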
5. Now prepare your input files. The first is an audio file of what you want the character to say or sing. Name it “input_audio.wav” and put it in an “assets” directory under “Wav2Lip” (create the directory if it doesn’t exist).
6. The second file is a video in which the character’s lip movements are clearly visible. Name it “input_vid.mp4” and put it in the same “assets” directory. The two files should have the same duration.
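If your audio starts out in another format or is too long, ffmpeg (which Wav2Lip needs anyway) can convert and trim it. A sketch, where “song.mp3” and the 12-second cut are stand-ins for your own source file and duration:
>ffmpeg -i song.mp3 -t 12 assets/input_audio.wav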
7. If you run the project on Windows, open “inference.py” in a text editor and change line 277 so that the ffmpeg call reads:
subprocess.call(command, shell=True)
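The reason for this change: “inference.py” builds the final ffmpeg command as a single string, and executing it through the shell lets Windows split the arguments and find ffmpeg on the PATH. Below is a runnable sketch of what that last step does, paraphrased from the script and using the file names from this guide; the exact command in your version of “inference.py” may differ.

import subprocess

# Mux the generated frames (temp/result.avi) with the input audio.
# shell=True hands the single command string to the shell to parse.
command = 'ffmpeg -y -i assets/input_audio.wav -i temp/result.avi -strict -2 -q:v 1 results/result_voice.mp4'
subprocess.call(command, shell=True)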
8. Open an Anaconda Prompt and activate the environment:
>conda activate wav2lip_env
9. Still in the Anaconda Prompt, change to the “Wav2Lip” directory and run:
>python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face assets/input_vid.mp4 --audio assets/input_audio.wav --pads 0 10 0 0 --resize_factor 1
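Two flags are worth knowing. “--pads” adds padding (top, bottom, left, right, in pixels) around the detected face box; the repository’s README suggests increasing the bottom pad, for example “--pads 0 20 0 0”, if the chin is cut off. “--resize_factor” downsamples the video before processing; per the README, a value of 2 can speed things up and sometimes even improves quality, since the model was trained on low-resolution faces.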
10. When it finishes, the new video is saved as “results\result_voice.mp4”.
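If you plan to run this often, you can wrap steps 9 and 10 in a small Python script. A minimal sketch using the file layout from this guide (the script itself is not part of the repository; run it from the “Wav2Lip” directory with wav2lip_env active):

import subprocess

# File layout used in this guide; adjust to your own paths.
CHECKPOINT = "checkpoints/wav2lip_gan.pth"
FACE = "assets/input_vid.mp4"
AUDIO = "assets/input_audio.wav"

command = [
    "python", "inference.py",
    "--checkpoint_path", CHECKPOINT,
    "--face", FACE,
    "--audio", AUDIO,
    "--pads", "0", "10", "0", "0",
    "--resize_factor", "1",
]

# Run inference; raises CalledProcessError if Wav2Lip fails.
subprocess.run(command, check=True)
print("Done: results/result_voice.mp4")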


Face animation using an AI model