Input data used: audio: audio, video: video.
The model was taken from this repository: repository.
As mentioned in the linked repository, we used both the Wav2Lip.pth and Wav2Lip_gan.pth checkpoints, which produced output_video.mp4 and output_video1.mp4 respectively. The inputs are the audio and video linked above; they are preprocessed and run through the Python scripts audio_video.py and audio_video1.py (a rough sketch of such an invocation follows the list below).
-> audio_video.py uses Wav2Lip_gan.pth and produces output_video1.mp4.
-> audio_video1.py uses Wav2Lip.pth and produces output_video.mp4.
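
As an illustration only, the sketch below shows how a script like audio_video.py / audio_video1.py could wrap the Wav2Lip repository's inference.py with the two checkpoints. This is an assumption about how the scripts work, not their actual contents; the checkpoint directory and the input file names (input_video.mp4, input_audio.wav) are placeholders.

```python
import subprocess

def run_wav2lip(checkpoint, face_video, speech_audio, outfile):
    """Run Wav2Lip inference with a given checkpoint and input files.

    Assumes the Wav2Lip repository's inference.py is in the current
    directory and accepts its usual --checkpoint_path/--face/--audio/--outfile
    arguments.
    """
    subprocess.run(
        [
            "python", "inference.py",
            "--checkpoint_path", checkpoint,   # Wav2Lip.pth or Wav2Lip_gan.pth
            "--face", face_video,              # input video containing the face
            "--audio", speech_audio,           # driving speech audio
            "--outfile", outfile,              # lip-synced output video
        ],
        check=True,
    )

if __name__ == "__main__":
    # GAN checkpoint -> output_video1.mp4 (as described for audio_video.py)
    run_wav2lip("checkpoints/Wav2Lip_gan.pth", "input_video.mp4",
                "input_audio.wav", "output_video1.mp4")
    # Plain checkpoint -> output_video.mp4 (as described for audio_video1.py)
    run_wav2lip("checkpoints/Wav2Lip.pth", "input_video.mp4",
                "input_audio.wav", "output_video.mp4")
```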