kafka-dev mailing list archives

From "Jun Rao (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-50) kafka intra-cluster replication support
Date Thu, 05 Jan 2012 19:06:39 GMT

https://issues.apache.org/jira/browse/KAFKA-50?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13180712#comment-13180712

Jun Rao commented on KAFKA-50:

Here is a breakdown of all the JIRAs and their dependencies:

1. kafka-47: create/delete data structures in ZK, automatically create topics, and use them
(making partitions logical, supporting only 1 replica, no failure support). (L1)
  1.1 kafka-237: create/delete ZK path for a topic in an admin tool (L0)
  1.2 kafka-238: add a getTopicMetaData method in broker and expose it to producer
  1.3 kafka-239: Wire existing producer and consumer to use the new ZK data structure
2. kafka-202: decouple request handler and socket server; enable long poll in the consumer
3. kafka-240: implement new producer and consumer request format (L1) 
4. kafka-49: add ack to ProduceRequest (L2). depends on #3
5. kafka-48: long poll in consumer (L2). depends on #2
6. kafka-44: commit thread, replica fetcher thread (L3). depends on #1, #4, #5
7. kafka-45: broker starting up, leader election (L3). depends on #1
8. kafka-46: various ZK listeners (L3). depends on #1
9. kafka-43: move master to preferred replica when possible (L4). optimization
10. kafka-42: rebalance partition with new brokers (L4). extra feature

> kafka intra-cluster replication support
> ---------------------------------------
>                 Key: KAFKA-50
>                 URL: https://issues.apache.org/jira/browse/KAFKA-50
>             Project: Kafka
>          Issue Type: New Feature
>            Reporter: Jun Rao
>            Assignee: Jun Rao
>             Fix For: 0.8
>         Attachments: kafka_replication_detailed_design_v2.pdf, kafka_replication_highlevel_design.pdf,
> Currently, Kafka doesn't have replication. Each log segment is stored in a single broker.
> This limits both the availability and the durability of Kafka. If a broker goes down, all
> log segments stored on that broker become unavailable to consumers. If a broker dies permanently
> (e.g., disk failure), all unconsumed data on that node is lost forever. Our goal is to replicate
> every log segment to multiple broker nodes to improve both the availability and the durability.

> We'd like to support the following in Kafka replication: 
> 1. Configurable synchronous and asynchronous replication 
> 2. Small unavailable window (e.g., less than 5 seconds) during broker failures 
> 3. Auto recovery when a failed broker rejoins 
> 4. Balanced load when a broker fails (i.e., the load on the failed broker is evenly spread
> among multiple surviving brokers)
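To make requirement 1 (and the ack change in kafka-49) concrete, here is a toy sketch of the intended semantics: the producer names how many replica acks it wants before the broker responds. All class and method names below are illustrative, not actual Kafka code; the convention assumed is that 0 means fully asynchronous (respond immediately), a positive number means wait for that many acks, and -1 means wait for every replica (fully synchronous).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of configurable sync/async replication (not Kafka code).
public class AckSemantics {

    static class Replica {
        final List<String> log = new ArrayList<>();
    }

    /**
     * Appends msg to every replica and returns how many acks the broker
     * waits for before answering the producer:
     *   requiredAcks =  0 -> reply immediately (async replication)
     *   requiredAcks =  1 -> wait for the leader's append only
     *   requiredAcks = -1 -> wait for all replicas (sync replication)
     */
    static int produce(List<Replica> replicas, String msg, int requiredAcks) {
        int needed = (requiredAcks == -1) ? replicas.size() : requiredAcks;
        int acked = 0;
        for (Replica r : replicas) {
            r.log.add(msg);  // in the real design, followers fetch from the leader
            acked++;
        }
        // Response to the producer is held until `needed` acks have arrived.
        return Math.min(needed, acked);
    }

    public static void main(String[] args) {
        List<Replica> replicas = new ArrayList<>();
        for (int i = 0; i < 3; i++) replicas.add(new Replica());

        System.out.println(produce(replicas, "m1", 0));   // async: 0 acks awaited
        System.out.println(produce(replicas, "m2", 1));   // leader-only ack: 1
        System.out.println(produce(replicas, "m3", -1));  // fully synchronous: 3
    }
}
```

The point of the knob is the availability/durability trade-off in the requirements above: lower ack counts shrink the producer's latency and the unavailability window during failures, while -1 guarantees no acknowledged message is lost as long as one replica survives.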

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

