@edward-io released this on 28 Oct 22:59

TorchSnapshot 0.1.0 (Beta Release)

A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind.

Performance

  • TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, and parallelized storage I/O.
  • TorchSnapshot greatly speeds up checkpointing for DistributedDataParallel workloads by distributing the write load across all ranks (benchmark).
  • When host memory is abundant, TorchSnapshot allows training to resume before all storage I/O completes, reducing the time training is blocked on checkpoint saving (see the sketch after this list).
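
For example, training can continue while storage I/O drains in the background. The following is a minimal sketch assuming the Snapshot.async_take API from this release, with a hypothetical model and checkpoint path:

```python
import torch
import torchsnapshot

# Stateful objects to checkpoint; keys are user-chosen names.
model = torch.nn.Linear(128, 10)
app_state = {"model": model}

# Kick off the snapshot. Device-to-host copies complete up front;
# storage I/O continues in the background.
pending = torchsnapshot.Snapshot.async_take(
    path="/tmp/my_snapshot",  # hypothetical path
    app_state=app_state,
)

# Training steps can resume here while the write finishes.
# ...

# Block only at the point where the snapshot must be complete.
pending.wait()
```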

Memory Usage

  • TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints.
  • TorchSnapshot supports efficient random access to individual objects within a snapshot, even when the snapshot is stored in cloud object storage (see the sketch after this list).
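
A minimal sketch of random access, assuming the Snapshot.read_object method; the "rank/key/entry" manifest path shown here is an assumption for illustration:

```python
import torchsnapshot

# Reference an existing snapshot without loading it wholesale; works
# for local paths as well as object-store URLs.
snapshot = torchsnapshot.Snapshot(path="/tmp/my_snapshot")  # hypothetical path

# Fetch a single object from the snapshot. Only the bytes for this
# entry are read, even when the snapshot lives in cloud storage.
weight = snapshot.read_object(path="0/model/weight")  # hypothetical entry path
```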

Usability

  • Simple APIs that are consistent between distributed and non-distributed workloads (illustrated after this list).
  • Out-of-the-box integration with commonly used cloud object storage systems.
  • Automatic resharding (elasticity) on world size change for supported workloads (more details).
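
A minimal sketch of the core save/restore flow, using a hypothetical S3 bucket; the same calls apply to local paths and to the supported object stores mentioned above:

```python
import torch
import torchsnapshot

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# app_state maps user-chosen keys to stateful objects.
app_state = {"model": model, "optimizer": optimizer}

# Save. In a distributed run, every rank calls this with the same
# path, and the write load is spread across ranks.
snapshot = torchsnapshot.Snapshot.take(
    path="s3://my-bucket/checkpoints/epoch-0",  # hypothetical bucket
    app_state=app_state,
)

# Restore in place; for supported workloads this also works after a
# world size change via automatic resharding.
snapshot.restore(app_state=app_state)
```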

Full Changelog: https://github.com/pytorch/torchsnapshot/commits/0.1.0