Releases: siddk/voltron-robotics
Official v1.1.0 Release - PyTorch GPU DDP Pretraining & Cleanup
Upgrades the codebase to PyTorch 2.0 and adds native DistributedDataParallel (DDP) GPU pretraining (1+ GPUs) for all models. The preprocessing flow (raw Something-Something-v2 data --> pretrained models) is now fully documented and cleaned up.
Key Examples:
- General Overview of Preprocessing/Pretraining for Sth-Sth-v2: examples/pretrain/README.md
- PyTorch DDP Pretraining Script (invoke via `torchrun`): examples/pretrain/pretrain.py
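For orientation, here is a minimal sketch of the launch pattern a `torchrun`-invoked DDP script follows. This is not the actual `examples/pretrain/pretrain.py`; the model, dataset, and loss below are hypothetical placeholders, and only the standard `torch.distributed` setup is shown:

```python
# Minimal DDP training skeleton (sketch only; not the actual pretrain.py).
# Launch with: torchrun --standalone --nproc-per-node <NUM_GPUS> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for each spawned process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Hypothetical stand-ins for the real model & dataset
    model = torch.nn.Linear(512, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 512))
    sampler = DistributedSampler(dataset)  # shards data across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for (batch,) in loader:
            batch = batch.cuda(local_rank, non_blocking=True)
            loss = model(batch).pow(2).mean()  # placeholder loss
            optimizer.zero_grad()
            loss.backward()  # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```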
What's Changed
Full Changelog: v1.0.0...v1.1.0
Official v1.0.0 Release - Preprocessing & XLA Pretraining Pipeline
Completes the preprocessing -> pretraining -> model usage pipeline by adding the full reference scripts for preprocessing the entire Sth-Sth-v2 dataset (for pretraining MVP, R3M, and Voltron models), as well as the PyTorch XLA pretraining script (again, for all models).
A "standard" PyTorch GPU/DDP pretraining implementation is in the works, but hopefully the logic transfers!
What's Changed
- Add Preprocessing Pipeline for Sth-Sth-v2 by @siddk in #8
- Add XLA Pretraining Script by @siddk in #10
Full Changelog: v0.0.1...v1.0.0
Initial Release - v0.0.1
Initial Voltron model release (marked v0.0.1 until the full pretraining reference exists). Contains pretrained checkpoints and automatic load functionality (a loading sketch follows the lists below) for:
- V-Cond (ViT-Small) on Sth-Sth-v2
- V-Dual (ViT-Small) on Sth-Sth-v2
- V-Gen (ViT-Small) on Sth-Sth-v2
- V-Cond (ViT-Base) on Sth-Sth-v2
As well as for our "data-locked" reproductions:
- R3M (ViT-Small) on Sth-Sth-v2
- R3M (ResNet-50) on Sth-Sth-v2
- MVP (ViT-Small) on Sth-Sth-v2
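A minimal loading sketch, assuming the `load` helper exposed by the `voltron` package as described in the repository README. The model identifier string and the exact preprocessing call below are illustrative; check the repo's model registry for canonical names:

```python
# Sketch of checkpoint loading via the voltron package (API per the README).
# "v-cond" is illustrative; the registry may use a different identifier.
import torch
from voltron import load

device = "cuda" if torch.cuda.is_available() else "cpu"
vcond, preprocessor = load("v-cond", device=device, freeze=True)

# Dummy uint8 image in place of a real frame; shapes/dtypes are assumptions
img = preprocessor(torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8))
with torch.no_grad():
    visual_embeddings = vcond(img[None, ...].to(device), mode="visual")
```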
This accompanies the initial release of the Voltron Evaluation Suite, complete with usage examples!