regression: unable to handle detect errors related to messages being too large #2655
We're open to changing our code, but as a result of this change we've had to rewrite it as if err == sarama.ErrMessageSizeTooLarge || strings.Contains(err.Error(), "MaxMessageBytes") { and I feel like there should be a better option?
@ae-govau would this be sufficient?

if errors.Is(err, sarama.ConfigurationError) ||
    errors.Is(err, sarama.ErrMessageSizeTooLarge) {
    // log non-retriable error and don't retry send
}
Thanks for the responses. We aren't that keen on checking for a generic configuration error, as we would like to be alerted to any other config errors; we just want to know when the problem is that the message is too big, since we deal with that failure mode quite differently from others. Noting that the calculated size of the message (as I understand it) is bigger than the payload itself, it's not clear how we can calculate that before trying to send, and thus we need to be able to figure it out from the error itself. An API for "is this a message too big error" would also be fine.
Ran into the exact problem, and I am really reluctant to put in a string-based error check like that; I am afraid someone will make an "error message readability improvement" or a seemingly harmless casing change and cause a problem for us. I appreciate the need to distinguish a client-side error from a broker error, but reusing ConfigurationError here is problematic. What makes it worse is that this behavior is not reproduced in the mock producer, so I can't just add a unit test to detect a change in behavior here; I have to use a real Kafka client setup to detect it. In any case, it would be nice to have some error value that we can reference in the code, or, similar to what @ae-govau suggested, an API for "is this a message too big error".
Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. |
This is still an issue for us. I might make a PR to try to address. |
PR #2848 is opened so that clients can call:

// IsMessageSizeTooLarge returns true if the error relates to the message size
// being either too large as reported by the broker, or too large because it
// exceeds the configured maximum size.
func IsMessageSizeTooLarge(err error) bool {
    ...
For most of this library's existence it has returned ErrMessageSizeTooLarge when the message exceeded the configured size. For a short period in 2019 this error was renamed (IBM#1218) but shortly afterwards reverted (IBM#1262). Later, in 2023, this error was changed to a ConfigurationError (IBM#2628) to fix IBM#2137; however, this has caused issues for clients who rely on the previous error code being distinct from other ConfigurationError conditions (IBM#2655). This commit reverts to the previous behaviour, and adds a test to pick up if this changes again in the future. Signed-off-by: Adam Eijdenberg <adam.eijdenberg@defence.gov.au>
Just adding how this has impacted us. In our applications, checking for any ConfigurationError is too broad for this case. For now we will have to settle for checking the error string, but we would also be keen to see either a dedicated error type or some API like @ae-govau has suggested.
(creating issue from comments I made on #2628 a number of weeks ago)
The recent change to error handling in #2628 broke some of our running code. We had code that would:
i.e. for our use-case, we want to ignore messages related to a message being too large; it's OK for us to throw these out. We used to check the size before sending into Sarama, but then we had some cases where our payload was under the limit, yet once headers etc. were added it was over, so we switched to checking this error code.
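The "payload fits, full record doesn't" situation can be illustrated with a rough size estimate. This is a sketch only: the real encoded size is computed inside the library and includes protocol overhead not modeled here, and maxMessageBytes and the header sizes below are made-up values for illustration.

```go
package main

import "fmt"

// estimatedRecordSize is an illustrative approximation: key + value + header
// bytes. The real wire size computed by the client also adds per-record
// protocol overhead, so the true size is strictly larger than this.
func estimatedRecordSize(key, value []byte, headers map[string][]byte) int {
	n := len(key) + len(value)
	for k, v := range headers {
		n += len(k) + len(v)
	}
	return n
}

func main() {
	maxMessageBytes := 1024 // stand-in for the configured Producer.MaxMessageBytes
	value := make([]byte, 1000)
	headers := map[string][]byte{"trace-id": make([]byte, 64)}

	// The payload alone fits under the limit...
	fmt.Println(len(value) <= maxMessageBytes) // true

	// ...but payload + headers does not, which is why a pre-send check on the
	// payload alone was insufficient and the error had to be detected on send.
	fmt.Println(estimatedRecordSize(nil, value, headers) <= maxMessageBytes) // false
}
```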
Noting that #543 reports that the previous error code has been returned for at least 8 years, this seems a reasonably significant change... (and a good example of Hyrum's Law)
It's now not clear to me how I should check for size related issues.
Interestingly, the last PR which touched that line of code was four years ago in 2019, when #1262 reverted a change from a few days earlier (#1218) that had renamed a bunch of error variables because:
(that change is arguably less impactful than this one, because it could be detected and corrected at compile time, whereas we only caught this at runtime)