[exporter/kafkaexporter] Messages Above Producer.MaxMessageBytes Will Be Retried Instead Of Dropped #30275
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Hello @rjduffner, thanks for filing this issue! The problem you're facing results from a lack of granularity in the errors returned by Sarama, combined with the exporter's retry functionality. The kafka exporter uses Sarama as its kafka client. Sarama fails with the error message you've shown and returns it, as expected. On the collector side, however, as long as an error isn't marked permanent the collector will keep retrying the export, and this error is not considered permanent by OTel or Sarama. I think the best option may be to add logic to the exporter that detects this kind of error and upgrades it to a permanent error, so the data is dropped instead of retried indefinitely. There's an open issue against Sarama that discusses detecting this kind of error; we could add similar logic to the exporter.
Thanks for the explanation @crobert-1.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners, add a component label; see Adding Labels via Comments. If you are unsure which component this issue relates to, please ping the code owners.
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
exporter/kafka
What happened?
Description
We are noticing that when a log record is created that is larger than Producer.MaxMessageBytes, the kafka exporter fails to send it and then retries.
We are not sure why it retries, since this message can never be sent (it is above Producer.MaxMessageBytes).
Is there any clarity as to this choice? I understand we could use the on_error: drop feature (and we probably will), but I am curious why the exporter doesn't already log and drop.
Steps to Reproduce
Create a message larger than Producer.MaxMessageBytes and attempt to export it via the kafka exporter.
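For context, a minimal collector config sketch along these lines could reproduce the setup; the `producer.max_message_bytes` field name is taken from the kafka exporter's documented options, and the broker address and pipeline names are placeholder assumptions:

```yaml
exporters:
  kafka:
    brokers: ["localhost:9092"]   # placeholder broker address
    producer:
      max_message_bytes: 1000000  # messages above this fail to send

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [kafka]
```

Any log record whose serialized size exceeds `max_message_bytes` triggers the send failure and, as described above, the retry loop.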
Expected Result
Error is logged and then message is dropped
Actual Result
Error is logged and then message is retried
Collector version
0.87.0
Environment information
Environment
EKS
Amazon Standard AMI
Splunk OTEL Collector Helm Chart
OpenTelemetry Collector configuration
No response
Log output
Additional context
No response