perf(ext/fetch): improve decompression throughput by not using tower_http::decompression
#25800
This commit improves throughput when a Deno process runs as a proxy server that handles compressed data from an upstream server.
We have seen a performance degradation since v1.45.2 when running an HTTP server with Deno in a particular setup where it fetches compressed data from an upstream server and forwards it to the end client. After some investigation, it turned out that `tower_http::decompression` causes this issue, and that it is resolved if we implement the decompression logic manually using `async-compression` directly.

The figure below shows how the performance changes across versions (lower is better) and verifies that this patch fixes the issue:
(See also https://github.com/magurotuna/deno_fetch_decompression_throughput for how this result was obtained)
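To illustrate the general shape of the manual approach, here is a minimal sketch (not the actual Deno code) of picking an `async-compression` tokio `bufread` decoder based on the response's `Content-Encoding` header; the `decompress_body` function name and the assumed crate feature flags (`tokio`, `gzip`, `brotli`, `zstd`) are hypothetical choices for this example:

```rust
use std::pin::Pin;

use async_compression::tokio::bufread::{BrotliDecoder, GzipDecoder, ZlibDecoder, ZstdDecoder};
use tokio::io::{AsyncBufRead, AsyncRead};

/// Wrap the raw (possibly compressed) body reader in the decoder that matches
/// the `Content-Encoding` value. Unknown or missing encodings are passed
/// through untouched. A real implementation would also handle case-insensitive
/// header values and comma-separated encoding lists.
fn decompress_body<R>(
  content_encoding: Option<&str>,
  raw: R,
) -> Pin<Box<dyn AsyncRead + Send>>
where
  R: AsyncBufRead + Send + 'static,
{
  match content_encoding {
    Some("gzip") | Some("x-gzip") => Box::pin(GzipDecoder::new(raw)),
    Some("br") => Box::pin(BrotliDecoder::new(raw)),
    Some("zstd") => Box::pin(ZstdDecoder::new(raw)),
    // Per the HTTP spec, "deflate" means zlib-wrapped deflate.
    Some("deflate") => Box::pin(ZlibDecoder::new(raw)),
    _ => Box::pin(raw),
  }
}
```

The returned `AsyncRead` can then be streamed to the consumer without going through `tower_http`'s decompression layer.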
We could probably track down the bottleneck in `tower_http` itself, but given that the manual implementation takes less than 100 lines of code, in my opinion it makes more sense to maintain our own code rather than depend on a third-party crate.

Fixes #25798