PyTorch FSDP integration in Trainer #17136

Merged
merged 4 commits into main from smangrul-fsdp-add on May 9, 2022

Conversation

pacman100
Contributor

What does this PR do?

PyTorch recently upstreamed FairScale's FSDP into PyTorch Distributed with additional optimizations. This PR integrates it into the Trainer API.

  • It enables distributed training at scale. It's a wrapper for sharding module parameters across data-parallel workers, inspired by Xu et al. as well as ZeRO Stage 3 from DeepSpeed (see the sketch after this list).
  • PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements in performance, usability, reliability, debuggability, and composability.
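
A minimal sketch of the underlying PyTorch FSDP API that this PR builds on (not the Trainer code path added here). It assumes PyTorch >= 1.11, a single node with one process per GPU launched via torchrun, and a toy torch.nn.Transformer model chosen purely for illustration:

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# One process per GPU; torchrun supplies the rendezvous environment variables.
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())  # single-node assumption

# Wrapping shards the module's parameters across all data-parallel workers,
# so each rank holds only a slice of the model outside forward/backward.
model = FSDP(torch.nn.Transformer().cuda())

# Build the optimizer AFTER wrapping so it sees the sharded parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

src = torch.rand(10, 32, 512, device="cuda")  # (seq, batch, d_model)
tgt = torch.rand(20, 32, 512, device="cuda")
loss = model(src, tgt).sum()
loss.backward()   # gradients are reduced and re-sharded automatically
optimizer.step()
```

In the Trainer itself the wrapping is handled internally and driven by new training arguments (an `--fsdp` flag in released versions of transformers); see the diff for the exact options introduced here.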

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented May 9, 2022

The documentation is not available anymore as the PR was closed or merged.

Collaborator

@sgugger sgugger left a comment

Thanks a lot for adding this!

@pacman100 pacman100 merged commit 05fc176 into main May 9, 2022
@pacman100 pacman100 deleted the smangrul-fsdp-add branch May 9, 2022 15:10
nandwalritik pushed a commit to nandwalritik/transformers that referenced this pull request May 10, 2022
* PyTorch FSDP integration in Trainer

* reformatting

make style and make quality are now compliant.

* Updating dependency check

* Trigger CI

Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
Narsil pushed a commit to Narsil/transformers that referenced this pull request May 12, 2022
elusenji pushed a commit to elusenji/transformers that referenced this pull request Jun 12, 2022