Jaeger/Zipkin Exporter Performance #1274
Conversation
/// </summary>
public TimeSpan TimeoutSeconds { get; set; } = TimeSpan.FromSeconds(10);
TimeoutSeconds wasn't actually being used. The ActivityProcessor now owns the timeout, so I just removed it.
Codecov Report
@@ Coverage Diff @@
## master #1274 +/- ##
==========================================
- Coverage 79.12% 79.12% -0.01%
==========================================
Files 215 215
Lines 6176 6175 -1
==========================================
- Hits 4887 4886 -1
Misses 1289 1289
It makes total sense to leverage the HTTP/TCP layer to do buffering. For smaller payloads (e.g. metrics) we might consider turning off Nagle's algorithm.
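To make the suggestion above concrete, here is a minimal sketch (in Python, purely illustrative; the actual exporter is C#) of disabling Nagle's algorithm on a TCP socket via `TCP_NODELAY`, so small writes go out immediately instead of being coalesced by the kernel:

```python
import socket

def make_low_latency_socket() -> socket.socket:
    """Create a TCP socket with Nagle's algorithm disabled.

    Hypothetical helper name; the point is the TCP_NODELAY option,
    which tells the kernel not to delay small segments waiting for
    more data to coalesce.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

sock = make_low_latency_socket()
# A non-zero value confirms Nagle's algorithm is off for this socket.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
sock.close()
```

The trade-off: `TCP_NODELAY` lowers latency for small, frequent payloads like metric points, at the cost of more (smaller) packets on the wire.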
LGTM.
LGTM!
nit: I would change the title to Jaeger/Zipkin Exporter Performance
Changes
A round of performance tweaks for Zipkin. It now uses the Batch&lt;Activity&gt; throughout (instead of making a copy) and flushes small (4k) chunks to the Http stream as they are ready.

The new version is slightly slower. If I make the chunk size bigger, it can be faster, but I think that is cheating for the sake of benchmarking. In a real-world scenario, we want to start sending bytes as soon as we can. Today we buffer everything into memory and then send it all at the end; that's why the memory usage is so high. I think smaller chunks will actually make the export finish faster for real-world usage.

I did the same thing (lower packet size) for JaegerExporter. Size reduced from 65000 to 4096. The socket's default send buffer is 8192. For Jaeger we want to get the packets to the socket and let the networking stack handle the transmission; it has enough buffering that we don't need our own 😄
/cc @reyang Curious what your thoughts are on this 4096 size change?
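The streaming approach described above can be sketched as follows (Python for illustration; `export_spans`, `serialize`, and `CHUNK_SIZE` are hypothetical names, not the exporter's actual API). Instead of serializing the whole batch into memory and sending it at the end, the serializer flushes to the output stream every time a small chunk is full:

```python
import io

CHUNK_SIZE = 4096  # matches the 4k chunk size discussed in the PR

def serialize(span) -> bytes:
    # Stand-in serializer; the real exporters emit Zipkin JSON / Jaeger Thrift.
    return repr(span).encode("utf-8")

def export_spans(spans, stream) -> None:
    """Serialize spans into a small buffer and flush it to the stream
    whenever it reaches CHUNK_SIZE, so bytes start going out early
    instead of accumulating the entire payload in memory first."""
    buffer = bytearray()
    for span in spans:
        buffer += serialize(span)
        while len(buffer) >= CHUNK_SIZE:
            stream.write(buffer[:CHUNK_SIZE])
            del buffer[:CHUNK_SIZE]
    if buffer:  # flush the final partial chunk
        stream.write(buffer)

out = io.BytesIO()
export_spans([{"name": f"span-{i}"} for i in range(500)], out)
```

Peak buffering is bounded by roughly one chunk plus one serialized span, rather than the full batch, which is the memory win the description refers to.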
Benchmarks
Before:
After:
TODOs
CHANGELOG.md updated for non-trivial changes