kafka-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-3703) PlaintextTransportLayer.close() doesn't complete outgoing writes
Date Fri, 02 Sep 2016 08:15:20 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457896#comment-15457896 ]

ASF GitHub Bot commented on KAFKA-3703:

GitHub user rajinisivaram opened a pull request:


    KAFKA-3703: Flush outgoing writes before closing client selector

    Close client connections only after outgoing writes complete or timeout.
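The approach the PR describes — complete outgoing writes or give up after a timeout, then close — can be sketched as follows. This is an illustrative stand-alone example, not Kafka's actual implementation; the class and method names are hypothetical.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public final class FlushingClose {
    // Keep writing any application-buffered outgoing bytes until they are
    // fully flushed or the timeout elapses, then close the channel. Without
    // this loop, a plain close() silently drops whatever is still pending.
    public static void closeAfterFlush(WritableByteChannel channel,
                                       ByteBuffer pending,
                                       long timeoutMs) throws IOException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (pending.hasRemaining() && System.currentTimeMillis() < deadline) {
            channel.write(pending); // may write fewer bytes than requested
        }
        // Anything still pending after the timeout is dropped, as before.
        channel.close();
    }
}
```

The timeout bounds how long close() can block, so a stalled peer cannot hang the closing thread indefinitely.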

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/rajinisivaram/kafka KAFKA-3703

Alternatively you can review and apply these changes as the patch at:


To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1817
commit 50f009bebe0beaf55cb5e00f9db8fcb626f1399a
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Date:   2016-09-02T07:55:49Z

    KAFKA-3703: Flush outgoing writes before closing client selector


> PlaintextTransportLayer.close() doesn't complete outgoing writes
> ----------------------------------------------------------------
>                 Key: KAFKA-3703
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3703
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Rajini Sivaram
>            Assignee: Rajini Sivaram
> Outgoing writes may be discarded when a connection is closed. For instance, an
> application running a producer with acks=0 that writes data and then closes the
> producer would expect all writes to complete if there are no errors. But close()
> simply closes the channel and socket, which can result in outgoing data being
> discarded.
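The acks=0 mode referred to above is the producer's fire-and-forget setting: the client does not wait for any broker acknowledgment, so successfully flushing the socket on close is the only delivery signal the application gets. A minimal illustrative producer configuration (the broker address is a placeholder):

```properties
# Fire-and-forget: no broker acknowledgment is awaited
acks=0
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
```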

This message was sent by Atlassian JIRA
