Upgrades to PyTorch 2.0 and adds native DistributedDataParallel GPU pretraining (1+ GPUs) for all models. The preprocessing flow (raw Something-Something-v2 data --> pretrained models) is now fully documented and cleaned up.
Key Examples:
- General overview of preprocessing/pretraining for Sth-Sth-v2: `examples/pretrain/README.md`
- PyTorch DDP pretraining script (invoke via `torchrun`): `examples/pretrain/pretrain.py`
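The DDP pattern used by the pretraining script can be sketched as follows. This is a minimal, self-contained illustration, not the actual contents of `examples/pretrain/pretrain.py`: the model, optimizer, and hyperparameters are placeholders, and it defaults to a single CPU process with the `gloo` backend so it runs without torchrun (the real GPU pretraining would use `nccl`).

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Under torchrun these variables are set automatically; the defaults
    # below let the sketch run standalone as a single process.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    os.environ.setdefault("RANK", "0")
    os.environ.setdefault("WORLD_SIZE", "1")

    # "gloo" works on CPU; multi-GPU pretraining would use backend="nccl"
    # and move the model to the device given by LOCAL_RANK.
    dist.init_process_group(backend="gloo")

    model = torch.nn.Linear(16, 4)   # stand-in for the real pretraining model
    ddp_model = DDP(model)           # gradients are all-reduced across ranks

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
    x = torch.randn(8, 16)
    loss = ddp_model(x).pow(2).mean()
    loss.backward()                  # DDP syncs gradients here
    opt.step()

    dist.destroy_process_group()
    return loss.item()

if __name__ == "__main__":
    main()
```

Launched via torchrun (e.g. `torchrun --nproc_per_node=2 pretrain.py`), each process is bound to one GPU and DDP keeps the replicas in sync during the backward pass.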
What's Changed
Full Changelog: v1.0.0...v1.1.0