This library is currently in Alpha and does not yet have a stable release. The API may change and may not be backward compatible. If you have suggestions for improvements, please open a GitHub issue. We'd love to hear your feedback.
A lightweight library for adding fault tolerance to large-scale PyTorch distributed training workloads.
Requires Python >= 3.7 and PyTorch >= 1.11
From pip:
pip install --pre torchsnapshot-nightly
From source:
git clone https://github.com/facebookresearch/torchsnapshot
cd torchsnapshot
pip install -r requirements.txt
python setup.py install
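To check that the installation succeeded, the import below should complete without error:

```python
import torchsnapshot  # raises ImportError if the install failed
```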
- Stateful object - an object whose state can be obtained via `.state_dict()` and restored via `.load_state_dict()`. Most PyTorch components (e.g. `Module`, `Optimizer`, `LRScheduler`) already implement this protocol; a sketch of a custom stateful object follows this list.
- App state - the application state described using multiple stateful objects.
- Snapshot - the persisted app state.
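Any user-defined object can participate in the app state by implementing the same two methods. A minimal sketch (the `RunningAverage` class and its fields are hypothetical, not part of torchsnapshot's API):

```python
class RunningAverage:
    """A hypothetical stateful object that tracks a running average."""

    def __init__(self) -> None:
        self.total = 0.0
        self.count = 0

    def update(self, value: float) -> None:
        self.total += value
        self.count += 1

    def state_dict(self) -> dict:
        # Return everything needed to reconstruct this object's state.
        return {"total": self.total, "count": self.count}

    def load_state_dict(self, state_dict: dict) -> None:
        # Restore state previously produced by state_dict().
        self.total = state_dict["total"]
        self.count = state_dict["count"]
```

An object like this can be placed in the app state alongside PyTorch components.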
Describing the application state with multiple stateful objects:
app_state = {"model": model, "optimizer": optimizer}
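For concreteness, here is one way `model` and `optimizer` might be constructed; the network architecture and hyperparameters below are arbitrary:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Keys are user-chosen labels; values can be any stateful objects.
app_state = {"model": model, "optimizer": optimizer}
```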
Taking a snapshot of the application state:
from torchsnapshot import Snapshot
# File System
snapshot = Snapshot.take(path="/foo/bar/baz", app_state=app_state)
# S3
snapshot = Snapshot.take(path="s3://foo/bar", app_state=app_state)
# Google Cloud Storage
snapshot = Snapshot.take(path="gcs://foo/bar", app_state=app_state)
Referencing an existing snapshot:
snapshot = Snapshot(path="/foo/bar/baz")
Restoring the application state from a snapshot:
snapshot.restore(app_state=app_state)
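Putting it together, a minimal round trip using only the calls shown above (the path is illustrative):

```python
import torch
from torchsnapshot import Snapshot

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters())
app_state = {"model": model, "optimizer": optimizer}

# Persist the app state.
Snapshot.take(path="/tmp/my_snapshot", app_state=app_state)

# Later, e.g. in a new process: rebuild the objects, then restore in place.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters())
app_state = {"model": model, "optimizer": optimizer}

snapshot = Snapshot(path="/tmp/my_snapshot")
snapshot.restore(app_state=app_state)
```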
See the example directory for more examples.
torchsnapshot is BSD licensed, as found in the LICENSE file.