It turns out that the `deploy` method uploads my model to an S3 bucket every time it is run, even though everything is running completely locally.
When testing completely locally, and when the correct Docker images have been pulled beforehand, this should not need an internet connection at all — and it certainly should not waste time uploading the model and filling my S3 space with all those model files.
Another reason this bug is really annoying: there seems to be a request timeout that kicks in when the `model.tar.gz` file is large and the upload speed is too slow to finish within it, which makes it impossible to use the SDK for local testing on such a machine.
I tried to test code for deploying a PyTorchModel on SageMaker fully locally, doing something like:
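Roughly the following (the original snippet was not captured here, so the file names, the dummy role ARN, and the framework versions below are illustrative placeholders, not the exact values from my code):

```python
# Sketch of local-mode deployment with the SageMaker Python SDK.
# Requires the sagemaker package and a running local Docker daemon;
# all concrete names/versions here are placeholder assumptions.

def deploy_locally():
    from sagemaker.local import LocalSession
    from sagemaker.pytorch import PyTorchModel

    session = LocalSession()
    session.config = {"local": {"local_code": True}}

    model = PyTorchModel(
        model_data="file://./model.tar.gz",  # model archive on local disk
        role="arn:aws:iam::111111111111:role/service-role/dummy-role",
        entry_point="inference.py",
        framework_version="1.8.0",
        py_version="py3",
        sagemaker_session=session,
    )
    # instance_type="local" is supposed to serve the model in a
    # local Docker container rather than on a real SageMaker endpoint.
    return model.deploy(initial_instance_count=1, instance_type="local")


if __name__ == "__main__":
    predictor = deploy_locally()
```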
I have also created the file ~/.sagemaker/config.yaml as described in https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode
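As far as I understand that documentation page, the relevant local-mode setting in that file looks like this:

```yaml
local:
  local_code: true  # use local code/data instead of uploading to S3
```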
See also #2451