badly formatted logs on stderr #896
Comments
@tmszdmsk is that the output from kubectl logs?
By the way: this would probably belong to https://github.com/jaegertracing/jaeger-kubernetes instead.
Yep, it is raw output from kubectl logs.
It looks like this is a problem with logging problems on
I just noticed that your logs are showing the contents of a span. Are you able to get more context for that log entry?
Given that the contents of a span are being logged, I would guess, without looking at the code, that the Elasticsearch storage plugin is logging out its buffer in case of failures. Are you able to dig up the first such message? Bonus points for the uber pun :-)
I've been able to find something like this:
which suggests #779, but in the context of this issue we should maybe consider limiting the size of the output (or close this one and fix the original issue).
I don't see what the issue is. The logs are in JSON format by default, containing fields like timestamp, level, and caller. When an ES error happens, it is logged with:
logger.Error("Elasticsearch could not process bulk request", zap.Error(err), zap.Any("response", response), zap.String("requests", buffer.String()))
Since the request and response are themselves JSON strings, they have to be escaped in order to fit into the overall log format.
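For illustration, a minimal, self-contained sketch of that logging pattern follows. The response and requests values are made-up stand-ins, and zap.String is used in place of zap.Any just to keep the example runnable; the point is that zap's production config writes JSON to stderr, so any embedded JSON string is escaped, which is exactly the escaping visible in the collector's output.

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	// zap.NewProduction uses a JSON encoder writing to stderr,
	// similar to the collector's default logger.
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	// Hypothetical stand-ins: both values are themselves JSON documents.
	response := `{"errors":true,"items":[{"index":{"status":429}}]}`
	requests := `{"index":{"_index":"jaeger-span-2018-07-10"}}`

	// The embedded JSON ends up as an escaped string inside the outer
	// JSON log line, e.g. "response":"{\"errors\":true,...}".
	logger.Error("Elasticsearch could not process bulk request",
		zap.Error(errors.New("bulk request failed")),
		zap.String("response", response),
		zap.String("requests", requests))
}
```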
@yurishkuro yep, the issue is different than I originally described. It turned out that the Elasticsearch backend logs the whole buffer of pending messages. The log is properly formatted, but one of the fields is so enormous that commonly used logging infrastructure (ELK) splits the message into many smaller, unparsable logs. The question in the context of this bug is: should we change how this particular error is logged so that the whole buffer isn't pushed to stderr?
I think it would be sufficient to only log the ES response, assuming it won't include the input data. Would you like to create a pull request?
Sure, I was just setting up a dev env ;)
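For reference, a hypothetical sketch of what that suggestion could look like (an assumption, not the change that was actually merged): keep the error and the ES response in the log entry and leave the pending-request buffer out, so a single log line stays bounded and downstream collectors such as Logstash can still parse it.

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

// logBulkFailure is a hypothetical helper showing the suggested change:
// log the error and the ES response, but leave the (potentially huge)
// pending-request buffer out of the log entry. Truncating the buffer to a
// bounded size would be the alternative floated earlier in the thread.
func logBulkFailure(logger *zap.Logger, err error, response interface{}) {
	logger.Error("Elasticsearch could not process bulk request",
		zap.Error(err),
		zap.Any("response", response))
}

func main() {
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	response := map[string]interface{}{"errors": true}
	logBulkFailure(logger, errors.New("bulk request failed"), response)
}
```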
Requirement - what kind of business use case are you trying to solve?
Logs printed on stderr are not properly formatted. This slows down debugging problems with jaeger-collector.
Problem - what in Jaeger blocks you from solving the requirement?
Our jaeger-collector is deployed on Kubernetes. Logs from stdout/stderr are automatically collected and pushed into the ELK stack. Because of the badly formatted JSON, Logstash is unable to parse the logs.
Example (a little bit anonymised): kubectl logs output from the jaeger-collector pod. As you can see, the JSON is escaped so that it is not interpreted as JSON.
jaeger-collector v1.5.0
Proposal - what do you suggest to solve the problem or improve the existing situation?
Properly formatted JSON on stderr.
Any open questions to address
N/A