Super Slo Mo TF2


Tensorflow 2 implementation of "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation" by Jiang H., Sun D., Jampani V., Yang M., Learned-Miller E. and Kautz J.

Setup

The code is based on Tensorflow 2.1. To install all the needed dependencies, run one of the following:

Conda
conda env create -f environment.yml
source activate super-slomo
Pip
python3 -m venv super-slomo
source super-slomo/bin/activate
pip install -r requirements.txt

Inference

You can download the pre-trained model here. This model is trained for 259 epochs on the adobe240fps dataset. It uses the single frame prediction mode.

To generate a slomo video, run:

python super-slomo/inference.py path/to/source/video path/to/slomo/video --model path/to/checkpoint --n_frames 20 --fps 480
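As a rough illustration of how the two flags interact (this reading is an assumption, not documented by the repo): if --n_frames is the number of output frames generated for each pair of input frames and --fps is the playback rate of the output video, the apparent slow-motion factor can be sketched as:

```python
def slowdown_factor(fps_in: float, fps_out: float, n_frames: int) -> float:
    """Apparent slow-motion factor, assuming `n_frames` output frames are
    generated per input frame pair and played back at `fps_out`."""
    return n_frames * fps_in / fps_out

# A 240 fps source with 20 frames per pair, played back at 480 fps,
# appears roughly 10x slower than real time.
print(slowdown_factor(240, 480, 20))  # 10.0
```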

Train

Data Extraction

Before the training phase, the frames must be extracted from the original video sources. This code uses the adobe240fps dataset to train the model. To extract frames, run the following command:

python super-slomo/frame_extraction.py path/to/dataset path/to/destination 

It will use ffmpeg to extract the frames and put them in the destination folder, grouped in folders of 12 consecutive frames. If ffmpeg is not available, it falls back to the slower OpenCV.
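The folders-of-12 layout can be pictured with a minimal sketch (the filenames below are hypothetical; the actual naming is whatever frame_extraction.py produces):

```python
def group_frames(frames, group_size=12):
    """Split a flat list of frame filenames into consecutive groups,
    mirroring the folders of 12 consecutive frames described above."""
    return [frames[i:i + group_size] for i in range(0, len(frames), group_size)]

frames = [f"frame_{i:05d}.png" for i in range(30)]
groups = group_frames(frames)
print(len(groups))      # 3 folders
print(len(groups[0]))   # first folder holds 12 frames
print(len(groups[-1]))  # the last folder holds the 6 leftover frames
```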

For info run:

python super-slomo/frame_extraction.py -h

Train the model

You can start training the model by running:

python super-slomo/train.py path/to/frames --model path/to/checkpoints --epochs 100 --batch-size 32

If the model directory contains a checkpoint, training resumes from that epoch and continues until the total number of epochs provided is reached.
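The resume logic can be sketched as follows. This is an illustration only, assuming TF-style checkpoint names such as `ckpt-42` (as produced by `tf.train.CheckpointManager`); the actual naming and restore code live in train.py:

```python
import re
from typing import Optional

def resume_epoch(latest_checkpoint: Optional[str]) -> int:
    """Epoch to resume from, assuming checkpoint paths ending in '-<epoch>'.
    Returns 0 when no checkpoint exists (fresh training run)."""
    if latest_checkpoint is None:
        return 0
    match = re.search(r"-(\d+)$", latest_checkpoint)
    return int(match.group(1)) if match else 0

print(resume_epoch("path/to/checkpoints/ckpt-42"))  # resumes at epoch 42
print(resume_epoch(None))                           # fresh run: epoch 0
```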

You can also visualize the training with TensorBoard, using the following command:

tensorboard --logdir log --port 6006

and go to http://localhost:6006.

For info run:

python super-slomo/train.py -h
Multi-frame model

The model above predicts only one frame at a time, due to hardware limitations. If you have access to powerful GPUs, you can predict more frames with a single sample (as in the original paper). To start, clone the multi-frame branch:

git clone --branch multi-frame https://github.com/Riccorl/Super-SloMo-tf2.git 

then follow the instructions above to set up the environment and extract the frames. The training command has one additional parameter, --frames, to control the number of frames to predict:

python super-slomo/train.py path/to/frames --model path/to/checkpoints --epochs 100 --batch-size 32 --frames 9
