kafka-jira mailing list archives

From "Ted Yu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-6325) Producer.flush() doesn't throw exception on timeout
Date Thu, 07 Dec 2017 15:48:00 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282030#comment-16282030 ]

Ted Yu commented on KAFKA-6325:

I assume you have modified your producer code to accommodate this behavior.

Looks like option #2 can be adopted.

> Producer.flush() doesn't throw exception on timeout
> ---------------------------------------------------
>                 Key: KAFKA-6325
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6325
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>            Reporter: Erik Scheuter
>         Attachments: FlushTest.java
> Reading the javadoc of the flush() method, we assumed an exception would be thrown
when an error occurs. That would make calling code simpler, since we would not have to keep
a list of futures when sending multiple records to Kafka and eventually call future.get() on each.
> When send() is called, the metadata is retrieved first and send() blocks on this step.
When this step fails (no brokers available), a FutureFailure is returned.
> When you just call flush(), no exceptions are thrown (in contrast to future.get()). Of course,
you can implement callbacks in the send() method.
> I think there are two solutions:
> * Change flush() (and doSend()) to throw exceptions
> * Change the javadoc and describe the scenario in which you can lose events because no exceptions
are thrown and the events are not sent.
> I added a unit test to show the behaviour; Kafka does not have to be available to run it.
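The failure mode described above can be sketched without a running broker. The class below is a minimal, self-contained analogy using plain java.util.concurrent rather than the real kafka-clients API: `send()`, `flush()`, and the `FlushSketch` class are stand-ins invented for illustration, not the producer's actual implementation. It shows why callers who only call flush() never see the error, while callers who keep the futures and call get() do.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Sketch of the reported behaviour: an asynchronous send that fails,
// a flush() that waits for completion but swallows the error, and a
// per-record future whose get() rethrows it.
public class FlushSketch {
    static final List<CompletableFuture<Long>> pending = new ArrayList<>();

    // Stand-in for Producer.send(): returns a future that has already
    // failed, as when no broker metadata can be fetched.
    static CompletableFuture<Long> send(String record) {
        CompletableFuture<Long> f = new CompletableFuture<>();
        f.completeExceptionally(new RuntimeException("no brokers available"));
        pending.add(f);
        return f;
    }

    // Stand-in for Producer.flush(): blocks until every pending future
    // completes, but drops failures instead of rethrowing them.
    static void flush() {
        for (CompletableFuture<Long> f : pending) {
            f.handle((value, error) -> null).join(); // waits; error discarded
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CompletableFuture<Long> f = send("event-1");

        flush(); // returns normally; the caller never learns the send failed
        System.out.println("flush() returned without throwing");

        try {
            f.get(); // the per-record future does surface the error
        } catch (ExecutionException e) {
            System.out.println("future.get() threw: " + e.getCause().getMessage());
        }
    }
}
```

This is why the ticket's workaround of keeping a list of futures works: flush() only guarantees completion, so error detection has to go through future.get() or a callback passed to send().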

This message was sent by Atlassian JIRA
