kafka-commits mailing list archives

From jun...@apache.org
Subject kafka git commit: trivial change to 0.9.0 docs to fix outdated ConsumerMetadataRequest
Date Fri, 20 Nov 2015 21:27:51 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.9.0 81c89e91f -> cd7455c64


trivial change to 0.9.0 docs to fix outdated ConsumerMetadataRequest


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/cd7455c6
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/cd7455c6
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/cd7455c6

Branch: refs/heads/0.9.0
Commit: cd7455c64ba4199d76c35019d3be78bb6df0b25f
Parents: 81c89e9
Author: Jun Rao <junrao@gmail.com>
Authored: Fri Nov 20 13:26:40 2015 -0800
Committer: Jun Rao <junrao@gmail.com>
Committed: Fri Nov 20 13:27:43 2015 -0800

----------------------------------------------------------------------
 docs/implementation.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/cd7455c6/docs/implementation.html
----------------------------------------------------------------------
diff --git a/docs/implementation.html b/docs/implementation.html
index 0b603d4..9ae7d4e 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -243,7 +243,7 @@ Note that two kinds of corruption must be handled: truncation in which an unwrit
 <h3><a id="distributionimpl" href="#distributionimpl">5.6 Distribution</a></h3>
 <h4><a id="impl_offsettracking" href="#impl_offsettracking">Consumer Offset Tracking</a></h4>
 <p>
-The high-level consumer tracks the maximum offset it has consumed in each partition and periodically commits its offset vector so that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for a given consumer group in a designated broker (for that group) called the <i>offset manager</i>. i.e., any consumer instance in that consumer group should send its offset commits and fetches to that offset manager (broker). The high-level consumer handles this automatically. If you use the simple consumer you will need to manage offsets manually. This is currently unsupported in the Java simple consumer which can only commit or fetch offsets in ZooKeeper. If you use the Scala simple consumer you can discover the offset manager and explicitly commit or fetch offsets to the offset manager. A consumer can look up its offset manager by issuing a ConsumerMetadataRequest to any Kafka broker and reading the ConsumerMetadataResponse which will contain the offset manager. The consumer can then proceed to commit or fetch offsets from the offsets manager broker. In case the offset manager moves, the consumer will need to rediscover the offset manager. If you wish to manage your offsets manually, you can take a look at these <a href="https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka">code samples that explain how to issue OffsetCommitRequest and OffsetFetchRequest</a>.
+The high-level consumer tracks the maximum offset it has consumed in each partition and periodically commits its offset vector so that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for a given consumer group in a designated broker (for that group) called the <i>offset manager</i>. i.e., any consumer instance in that consumer group should send its offset commits and fetches to that offset manager (broker). The high-level consumer handles this automatically. If you use the simple consumer you will need to manage offsets manually. This is currently unsupported in the Java simple consumer which can only commit or fetch offsets in ZooKeeper. If you use the Scala simple consumer you can discover the offset manager and explicitly commit or fetch offsets to the offset manager. A consumer can look up its offset manager by issuing a GroupCoordinatorRequest to any Kafka broker and reading the GroupCoordinatorResponse which will contain the offset manager. The consumer can then proceed to commit or fetch offsets from the offsets manager broker. In case the offset manager moves, the consumer will need to rediscover the offset manager. If you wish to manage your offsets manually, you can take a look at these <a href="https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka">code samples that explain how to issue OffsetCommitRequest and OffsetFetchRequest</a>.
 </p>
 
 <p>


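The paragraph updated by this commit describes a three-step protocol: a consumer discovers its offset manager by sending a GroupCoordinatorRequest to any broker, commits and fetches offsets against that broker, and rediscovers the coordinator if it moves. The flow can be sketched as a toy simulation; every class and method name below is purely illustrative, not the real Kafka 0.9 API (for that, see the wiki page linked in the diff).

```python
# Hypothetical sketch of the offset-manager protocol described in the doc.
# None of these names exist in Kafka; they only model the request flow.

class NotCoordinatorError(Exception):
    """A commit/fetch reached a broker that is no longer the offset manager."""


class Broker:
    def __init__(self, broker_id):
        self.broker_id = broker_id
        self.offsets = {}            # (group, topic, partition) -> offset
        self.is_manager_for = set()  # groups this broker currently manages

    def commit_offset(self, group, topic, partition, offset):
        if group not in self.is_manager_for:
            raise NotCoordinatorError(self.broker_id)
        self.offsets[(group, topic, partition)] = offset

    def fetch_offset(self, group, topic, partition):
        if group not in self.is_manager_for:
            raise NotCoordinatorError(self.broker_id)
        return self.offsets.get((group, topic, partition), -1)


class Cluster:
    """Any broker can answer the coordinator lookup (the GroupCoordinatorRequest role)."""

    def __init__(self, brokers):
        self.brokers = {b.broker_id: b for b in brokers}
        self.manager_of = {}  # group -> broker_id

    def assign_manager(self, group, broker_id):
        # Simulates the offset manager for a group moving to another broker.
        for b in self.brokers.values():
            b.is_manager_for.discard(group)
        self.brokers[broker_id].is_manager_for.add(group)
        self.manager_of[group] = broker_id

    def lookup_coordinator(self, group):
        # Stands in for GroupCoordinatorRequest/GroupCoordinatorResponse.
        return self.brokers[self.manager_of[group]]


class SimpleConsumer:
    """Caches its discovered coordinator and rediscovers it on failure."""

    def __init__(self, cluster, group):
        self.cluster = cluster
        self.group = group
        self.coordinator = cluster.lookup_coordinator(group)  # initial discovery

    def commit(self, topic, partition, offset):
        try:
            self.coordinator.commit_offset(self.group, topic, partition, offset)
        except NotCoordinatorError:
            # The offset manager moved: rediscover it and retry the commit.
            self.coordinator = self.cluster.lookup_coordinator(self.group)
            self.coordinator.commit_offset(self.group, topic, partition, offset)
```

The cached-then-rediscover pattern is the point of the doc's "in case the offset manager moves" sentence: the consumer does not look up the coordinator on every request, only when a commit or fetch is rejected.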