Expose a single timeout setting in CRDs #1045
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##              main    #1045      +/-   ##
==========================================
+ Coverage    69.07%   69.20%   +0.12%
==========================================
  Files          110      110
  Lines         7017     7049      +32
==========================================
+ Hits          4847     4878      +31
- Misses        1880     1881       +1
  Partials       290      290

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
@pavolloffay The oauth proxy image needs to be updated.
Install Minio:
Create a TempoStack with the timeout set.
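For reference, a minimal sketch of such a TempoStack. The `spec.timeout` field name follows this PR's `Spec.Timeout` field, but treat the exact shape (and the Minio secret name) as assumptions:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  # Hypothetical illustration of the single unified timeout added by this PR
  timeout: 30s
  storage:
    secret:
      name: minio   # assumes a Minio credentials secret from the install step
      type: s3
```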
Check the status of pods:
The oauth proxy image used is quay.io/openshift/origin-oauth-proxy:4.12, which doesn't have the --upstream-timeout flag.
The timeout is also not set on the route:
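Once fixed, the route would be expected to carry the HAProxy timeout annotation, roughly like this (a sketch; the route name and annotation value mirroring `spec.timeout` are assumptions):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: tempo-simplest-gateway   # hypothetical route name
  annotations:
    # expected to be set by the operator from the CR timeout (value assumed)
    haproxy.router.openshift.io/timeout: 30s
```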
updated
@pavolloffay With the latest change the haproxy annotation is added to the route, but the custom annotations are no longer added.
component: tempostack, tempomonolithic
# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Add unified timeout configuration. It changes the default to 30s.
Thinking out loud, should we have a single timeout or different timeouts for the read and write paths?
please read the PR description. It has some explanation on this
Ah, I jumped directly to the code diff 😬
Ok, makes sense.
@@ -248,6 +248,7 @@ func deployment(params manifestutils.Params, rbacCfgHash string, tenantsCfgHash
  fmt.Sprintf("--web.internal.listen=0.0.0.0:%d", manifestutils.GatewayPortInternalHTTPServer), // serves health checks
  fmt.Sprintf("--traces.write.otlpgrpc.endpoint=%s:%d", naming.ServiceFqdn(tempo.Namespace, tempo.Name, manifestutils.DistributorComponentName), manifestutils.PortOtlpGrpcServer), // Tempo Distributor gRPC upstream
  fmt.Sprintf("--traces.write.otlphttp.endpoint=%s://%s:%d", httpScheme(params.CtrlConfig.Gates.HTTPEncryption), naming.ServiceFqdn(tempo.Namespace, tempo.Name, manifestutils.DistributorComponentName), manifestutils.PortOtlpHttp), // Tempo Distributor HTTP upstream
+ fmt.Sprintf("--traces.write-timeout=%s", params.Tempo.Spec.Timeout.Duration.String()),
Is there a setting for the read path? The gateway also proxies the Tempo API (i.e. TraceQL API)
same as above
@pavolloffay For monolithic mode, the http_server_* config is not set.
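Tempo's server block exposes these as read/write timeouts; a sketch of what the generated monolithic config would need to contain (how the operator renders it is an assumption):

```yaml
server:
  http_server_read_timeout: 30s
  http_server_write_timeout: 30s
```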
thx @IshwarKanse, fixed in the last commit
@pavolloffay I verified the PR manually using TempoStack and TempoMonolithic instances with/without multitenancy enabled. We are good to merge it. We are currently blocked on the e2e tests due to https://issues.redhat.com/browse/TRACING-4703; after it's fixed, I'll update the tests to add more assertions to check the timeout.
Signed-off-by: Pavol Loffay <p.loffay@gmail.com>
This PR adds a single timeout setting defaulting to 30s (aligning with upstream Grafana). The timeout applies to both the read and write path because the gateway, route, and oauth-proxy unify it as well.

Gateway
--traces.write-timeout - applies to both read and write (uses an interceptor with context cancellation)

oauth-proxy
--upstream-timeout - read only, because the oauth-proxy only sits in front of the read APIs (it sets transport.ResponseHeaderTimeout on the HTTP server, https://github.com/openshift/oauth-proxy/pull/258/files#diff-7e1e132991c10dc0110f813e8e1057f24288bfb9d7d5bdd54d468c496e01110bR123)

Route
haproxy.router.openshift.io/timeout - https://docs.openshift.com/container-platform/4.17/networking/routes/route-configuration.html#nw-configuring-route-timeouts_route-configuration