kafka-dev mailing list archives

From "Maxime Brugidou (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-691) Fault tolerance broken with replication factor 1
Date Sat, 12 Jan 2013 18:02:12 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551978#comment-13551978 ]

Maxime Brugidou commented on KAFKA-691:
---------------------------------------

Should I make another patch? I'll try on Monday.

1. It would probably require yet another config variable like "producer.metadata.request.batch.size"
or something like that.
2. Should it be batched for every updateInfo() or just during the metadata refresh? Doing it
for every updateInfo() could help, because failing messages from many different topics could
otherwise never go through if the metadata request times out (see the rough sketch after this list).
3. Isn't it getting a little convoluted? Maybe I am missing something, but the producer side
is getting trickier.
4. Please note that I also opened KAFKA-693 about the consumer side. And I'd love to submit
a patch but the rebalance logic seems complex so I'd prefer to have some insights first before
going in the wrong direction.
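
To make points 1 and 2 a bit more concrete, here is a rough, self-contained Scala sketch of what batching the metadata refresh could look like. The config name "producer.metadata.request.batch.size" is the one floated in point 1, and the fetchMetadata helper is an assumption for illustration, not existing 0.8 producer code.

    // Hypothetical sketch: split a metadata refresh over many topics into bounded
    // requests, sized by an assumed "producer.metadata.request.batch.size" setting.
    object MetadataBatchingSketch {
      // Stand-in for whatever actually sends a TopicMetadataRequest for these topics.
      def fetchMetadata(topics: Seq[String]): Unit =
        println("metadata request for " + topics.size + " topics: " + topics.mkString(", "))

      // Issue one request per batch instead of a single request covering every known topic.
      def refreshInBatches(allTopics: Seq[String], batchSize: Int): Unit =
        allTopics.grouped(batchSize).foreach(fetchMetadata)

      def main(args: Array[String]): Unit = {
        val topics = (1 to 7).map(i => "topic-" + i)
        // With producer.metadata.request.batch.size = 3, this yields three smaller requests,
        // so one slow or timed-out request does not block the refresh for every topic.
        refreshInBatches(topics, batchSize = 3)
      }
    }

Whether this batching should apply on every updateInfo() call or only on the periodic refresh is exactly the open question in point 2.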
                
> Fault tolerance broken with replication factor 1
> ------------------------------------------------
>
>                 Key: KAFKA-691
>                 URL: https://issues.apache.org/jira/browse/KAFKA-691
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8
>            Reporter: Jay Kreps
>            Assignee: Maxime Brugidou
>             Fix For: 0.8
>
>         Attachments: KAFKA-691-v1.patch, KAFKA-691-v2.patch
>
>
> In 0.7 if a partition was down we would just send the message elsewhere. This meant that
the partitioning was really more of a "stickiness" than a hard guarantee. This made it impossible
to depend on it for partitioned, stateful processing.
> In 0.8, when running with replication, this should generally not be a problem, as the partitions
are now highly available and fail over to other replicas. However, the case of replication
factor = 1 no longer really works, since a dead broker will now give errors for any partition
hosted on that broker.
> I am not sure of the best fix. Intuitively I think this is something that should be handled
by the Partitioner interface. However, currently the partitioner has no knowledge of which
nodes are available. So you could use a random partitioner, but that would keep going back
to the down node.
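
As an illustration of the Partitioner-based idea above, here is a rough, hypothetical Scala sketch (not Kafka's actual Partitioner interface) of a partitioner that is told which partitions currently have a live leader and only reroutes when the preferred partition is down. The availablePartitions argument is an assumption; today's partitioner has no such information.

    import scala.util.Random

    // Hypothetical availability-aware partitioner: keep the hashed partition when its
    // broker is up, otherwise fall back to a random live partition (the 0.7-style
    // "stickiness rather than guarantee" behaviour). Not the real 0.8 Partitioner API.
    class AvailabilityAwarePartitioner(random: Random = new Random) {
      def partition(key: Any, numPartitions: Int, availablePartitions: Seq[Int]): Int = {
        val preferred =
          if (key == null) random.nextInt(numPartitions)           // unkeyed: any partition will do
          else (key.hashCode & 0x7fffffff) % numPartitions         // keyed: hash, sign bit masked off
        if (availablePartitions.isEmpty || availablePartitions.contains(preferred))
          preferred  // either the preferred partition is live, or nothing is (let the send fail loudly)
        else
          // preferred partition's broker is down: trade the hard placement guarantee
          // for availability and pick a random partition that is still up
          availablePartitions(random.nextInt(availablePartitions.size))
      }
    }

The trade-off is the one the description calls out: keyed, stateful processing can no longer rely on hard placement once messages are rerouted, which is exactly what happens with replication factor = 1 and a dead broker.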

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
