
DDP training #8

Open · lpc-eol opened this issue Aug 15, 2023 · 1 comment
lpc-eol commented Aug 15, 2023

Thank you for your excellent work. You used a single V100 GPU for training. Does the program support distributed training? We are trying to use multiple 4090 GPUs on the same machine to reproduce the experiments.

lpc-eol changed the title from "distributed training" to "DDP training" on Aug 15, 2023
jvbsoares (Collaborator) commented
Thank you for your interest in our work!
Indeed, we have always trained with a single V100 GPU. We have not experimented with distributed training for this model.
Our implementation is based on the Keras Model class, so it may support distributed training with some small changes, though we haven't tried this ourselves and can't say for sure. During training, the Keras Model is created by function calls within load_or_create_trainer(), at
https://github.com/yahoo/spivak/blob/master/spivak/application/model_creation.py#L200
The related model.compile() and model.fit() calls are inside the DefaultTrainer class:
https://github.com/yahoo/spivak/blob/master/spivak/models/trainer.py#L49
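For what it's worth, the standard way to run a Keras Model on multiple GPUs of one machine is tf.distribute.MirroredStrategy: the model construction and compile() calls move inside strategy.scope(), while fit() stays as-is. Below is an untested, minimal sketch of that pattern; the generic model and data are placeholders, not the actual objects built by load_or_create_trainer():

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU on this
# machine and averages gradients across replicas after each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables (layers, optimizer state) must be created inside the scope.
    # In this codebase, that would mean running the model-building and
    # compile() logic from load_or_create_trainer() / DefaultTrainer here.
    # The toy model below is just a stand-in.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# fit() can stay outside the scope; Keras shards each batch across the
# replicas, so batch_size here is the global batch size.
x = np.random.rand(1024, 128).astype("float32")
y = np.random.randint(0, 10, size=(1024,))
model.fit(x, y, batch_size=64, epochs=1)
```

One practical caveat: when the number of replicas changes, the global batch size and learning rate usually need to be rescaled to keep training behavior comparable.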
Please let me know if you end up trying the distributed training. Thank you!
