kafka-users mailing list archives

From Paul Mackles <pmack...@adobe.com>
Subject 0.8 high-level consumer error handling
Date Tue, 07 Jan 2014 18:00:29 GMT
Hi - I noticed that if a Kafka cluster goes away entirely, the high-level consumer will retry fetching metadata indefinitely until the cluster comes back up, never bubbling the error condition up to the application. While I see a setting to control the interval at which it retries, I don't see anything to tell it when to give up. I think it would be useful if the application had a way to detect this condition and take some sort of action - either a max-retries setting and/or some sort of flag that can be tested after a timeout.
Does that capability already exist? Is there a known workaround?
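[Not part of the original message - one partial workaround worth noting, assuming the 0.8 high-level consumer API: the `consumer.timeout.ms` property (default -1, i.e. block forever) makes the consumer's `ConsumerIterator` throw a `ConsumerTimeoutException` when no message arrives within the given interval, which the application can catch to regain control:]

```properties
# Hypothetical consumer.properties sketch for the 0.8 high-level consumer.
# If no message is received within 30s, ConsumerIterator.next()/hasNext()
# throws kafka.consumer.ConsumerTimeoutException instead of blocking forever.
group.id=my-group
zookeeper.connect=zk1:2181
consumer.timeout.ms=30000
```

Note this cannot distinguish a dead cluster from a merely idle topic - the timeout fires in both cases - so the application still has to decide for itself (e.g. via a health check) whether to shut down or keep consuming.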

Thanks,
Paul
