Make read timeout configurable #763

Merged

merged 4 commits into master from radovan/s3-read-timeout on Jun 14, 2024
Conversation

rzvoncek
Contributor

Fixes #762

@rzvoncek force-pushed the radovan/s3-read-timeout branch from 97e62d7 to 9f263ef on May 16, 2024 09:59
@@ -136,7 +136,8 @@ def connect(self):
             region_name=self.credentials.region,
             signature_version='v4',
             tcp_keepalive=True,
-            max_pool_connections=max_pool_size
+            max_pool_connections=max_pool_size,
+            read_timeout=int(self.config.read_timeout),
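
For readers skimming the hunk, here is a rough, self-contained sketch of what the change amounts to: the configured read timeout is passed into botocore's Config when the S3 client is built. The make_s3_client helper and the shape of the config object are illustrative assumptions, not Medusa's actual code.

```python
# Illustrative sketch only: mirrors how read_timeout flows into the boto3/botocore
# client configuration, following the diff above.
import boto3
from botocore.config import Config

def make_s3_client(config, max_pool_size=10):
    boto_config = Config(
        region_name=config.region,
        signature_version='v4',
        tcp_keepalive=True,
        max_pool_connections=max_pool_size,
        # Seconds botocore waits for data on an open connection before failing the read.
        read_timeout=int(config.read_timeout),
    )
    return boto3.client('s3', config=boto_config)
```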
Contributor
Question: is there a similar feature for GCP and Azure? It would be nice to apply the same settings for all 3.

Contributor Author

Yes, they are available. I've added commits applying the read timeout config value to Azure and GCP read operations.
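
For the Azure side (the GCP hunk is shown further down), here is a hedged sketch of where such a timeout typically plugs in with the azure-storage-blob SDK; the helper name and the read_timeout keyword reflect the common client API and are assumptions, not lines from this PR's commits.

```python
# Illustrative only; the actual commits wire self.config.read_timeout into
# Medusa's existing Azure storage code path rather than creating a new client.
from azure.storage.blob.aio import BlobServiceClient

def make_azure_client(account_url: str, credential, read_timeout: int) -> BlobServiceClient:
    # azure-core based clients accept a read_timeout (in seconds) that is
    # applied to the underlying HTTP transport used for blob reads.
    return BlobServiceClient(account_url, credential=credential, read_timeout=read_timeout)
```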

@rzvoncek force-pushed the radovan/s3-read-timeout branch from 07680a5 to 998d9da on June 13, 2024 14:42

Quality Gate passed

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
0.0% Duplication on New Code

See analysis details on SonarCloud

@adejanovski merged commit 65d1323 into master on Jun 14, 2024
24 of 29 checks passed
@@ -151,7 +154,7 @@ async def _download_blob(self, src: str, dest: str):
         stream = await self.gcs_storage.download_stream(
             bucket=self.bucket_name,
             object_name=object_key,
-            timeout=-1,
+            timeout=self.read_timeout if self.read_timeout is not None else -1,
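
As a rough sketch of the call this hunk touches, assuming gcloud-aio-storage's Storage client (the helper below is illustrative, not Medusa's actual _download_blob): the configured read timeout is passed per call, and -1, the value the old code hard-coded, is used as the fallback when nothing is configured.

```python
# Hypothetical helper showing where the per-call timeout applies during a
# streaming GCS download; names other than download_stream/timeout are assumed.
import aiofiles
from gcloud.aio.storage import Storage

async def download_blob(bucket_name: str, object_key: str, dest: str, read_timeout=None):
    storage = Storage()
    stream = await storage.download_stream(
        bucket=bucket_name,
        object_name=object_key,
        # Fall back to -1 (the previously hard-coded value) when no timeout is set.
        timeout=read_timeout if read_timeout is not None else -1,
    )
    async with aiofiles.open(dest, 'wb') as f:
        while True:
            chunk = await stream.read(8192)
            if not chunk:
                break
            await f.write(chunk)
```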


Hello @rzvoncek,

I am a user of the k8ssandra operator on GCP, and this change can be a breaking change in the restore case if the files are big. We are switching from an infinite timeout to a 60-second timeout, if I am not mistaken.

Do you think we should roll back the change for the GCP case, or should I post a comment on k8ssandra/k8ssandra-operator#1353 to find out when this will be worked on?

Development

Successfully merging this pull request may close these issues:

Make S3 read timeout configurable