Make read timeout configurable #763
Conversation
Force-pushed from 97e62d7 to 9f263ef
@@ -136,7 +136,8 @@ def connect(self):
     region_name=self.credentials.region,
     signature_version='v4',
     tcp_keepalive=True,
-    max_pool_connections=max_pool_size
+    max_pool_connections=max_pool_size,
+    read_timeout=int(self.config.read_timeout),
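For context, the hunk above threads the configured value into the boto3 client options. Here is a minimal sketch of how such a read timeout ends up in an S3 client via botocore's `Config`; `build_s3_client`, `region`, `read_timeout`, and `max_pool_size` are placeholder names, not the project's own code.

```python
# Minimal sketch (not the project's code): wiring a configurable read timeout
# into an S3 client through botocore's Config. All names are placeholders.
import boto3
from botocore.config import Config

def build_s3_client(region: str, read_timeout: int, max_pool_size: int = 10):
    cfg = Config(
        region_name=region,
        signature_version='v4',
        tcp_keepalive=True,                  # needs a reasonably recent botocore
        max_pool_connections=max_pool_size,
        read_timeout=read_timeout,           # seconds to wait for data on the socket
    )
    return boto3.client('s3', config=cfg)
```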
Question: is there a similar feature for GCP and Azure? It would be nice to apply the same settings for all 3.
Yes, they are available. I've added commits applying the read timeout config value to Azure and GCP read operations.
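As a hedged sketch of what the Azure side of such a change can look like, assuming the azure-storage-blob SDK is the client library in use (the function and variable names below are illustrative, not the actual commits):

```python
# Sketch under the assumption that azure-storage-blob is the client library.
# The SDK forwards `read_timeout` (in seconds) to its HTTP transport; the
# names here are placeholders only.
from azure.storage.blob import BlobServiceClient

def build_blob_service(account_url: str, credential, read_timeout=None):
    kwargs = {}
    if read_timeout is not None:
        kwargs['read_timeout'] = int(read_timeout)  # socket read timeout in seconds
    return BlobServiceClient(account_url=account_url, credential=credential, **kwargs)
```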
Force-pushed from 07680a5 to 998d9da
Quality Gate passed
@@ -151,7 +154,7 @@ async def _download_blob(self, src: str, dest: str):
     stream = await self.gcs_storage.download_stream(
         bucket=self.bucket_name,
         object_name=object_key,
-        timeout=-1,
+        timeout=self.read_timeout if self.read_timeout is not None else -1,
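To make the fallback explicit: when no read timeout is configured, -1 keeps the previous wait-forever behaviour; otherwise the configured number of seconds is passed to the download call. Below is a rough sketch of that flow, where `gcs_storage`, `bucket_name`, and the stream-reading loop are assumptions rather than the project's code.

```python
# Sketch (assumptions: a gcloud-aio-storage-style client whose download_stream
# accepts a `timeout` and returns an object with an async read(); all names
# are placeholders).
async def download_blob(gcs_storage, bucket_name, object_key, dest_path, read_timeout=None):
    # Fall back to -1 ("no timeout") when nothing is configured, as in the diff above.
    timeout = read_timeout if read_timeout is not None else -1
    stream = await gcs_storage.download_stream(
        bucket=bucket_name,
        object_name=object_key,
        timeout=timeout,
    )
    with open(dest_path, 'wb') as f:
        while True:
            chunk = await stream.read(1024 * 1024)  # read in 1 MiB chunks
            if not chunk:
                break
            f.write(chunk)
```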
Hello @rzvoncek,
I am a user of the k8ssandra operator on GCP, and this change could break the restore case when the files are big: if I am not mistaken, we are switching from an infinite timeout to a 60-second one.
Do you think we should roll back the change for the GCP case, or should I post a comment on k8ssandra/k8ssandra-operator#1353 to find out when that will be worked on?
Fixes #762