[trainer] sharded _load_best_model (huggingface#17150)
* [trainer] sharded _load_best_model

probably needs a test?

* undo delete
stas00 authored and ArthurZucker committed May 12, 2022
1 parent de1a94d commit 72018c2
Showing 1 changed file with 1 addition and 1 deletion: src/transformers/trainer.py
@@ -1705,7 +1705,7 @@ def _load_best_model(self):
             # If the model is on the GPU, it still works!
             load_result = self.model.load_state_dict(state_dict, strict=False)
             self._issue_warnings_after_load(load_result)
-        elif os.path.exists(best_model_path, os.path.join(self.state.best_model_checkpoint, WEIGHTS_INDEX_NAME)):
+        elif os.path.exists(os.path.join(self.state.best_model_checkpoint, WEIGHTS_INDEX_NAME)):
             # Best model is a sharded checkpoint
             load_result = load_sharded_checkpoint(self.model, self.state.best_model_checkpoint, strict=False)
             self._issue_warnings_after_load(load_result)
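The fix replaces an invalid two-argument call to `os.path.exists` (which accepts a single path) with a check for the sharded-checkpoint index file inside the best checkpoint directory. The detection logic can be sketched as below; `is_sharded_checkpoint` is a hypothetical helper written for illustration, assuming the index filename matches the `pytorch_model.bin.index.json` convention used for `WEIGHTS_INDEX_NAME` in transformers:

```python
import os
import tempfile

# Assumed value mirroring transformers' WEIGHTS_INDEX_NAME constant
WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json"


def is_sharded_checkpoint(checkpoint_dir: str) -> bool:
    # A sharded checkpoint is identified by its index file; a non-sharded
    # one stores all weights in a single file and has no index.
    return os.path.isfile(os.path.join(checkpoint_dir, WEIGHTS_INDEX_NAME))


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as ckpt_dir:
        print(is_sharded_checkpoint(ckpt_dir))  # empty dir: False
        # Create an (empty) index file to simulate a sharded checkpoint
        open(os.path.join(ckpt_dir, WEIGHTS_INDEX_NAME), "w").close()
        print(is_sharded_checkpoint(ckpt_dir))  # True
```

When the index file is present, the trainer hands loading off to `load_sharded_checkpoint` rather than calling `load_state_dict` on a single state dict.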
