kafka-commits mailing list archives

From j...@apache.org
Subject kafka git commit: MINOR: Remove no longer required --new-consumer switch in docs
Date Tue, 27 Sep 2016 23:50:05 GMT
Repository: kafka
Updated Branches:
  refs/heads/0.10.1 aadda5aac -> dfdf2e6cc


MINOR: Remove no longer required --new-consumer switch in docs

Author: Ismael Juma <ismael@juma.me.uk>

Reviewers: Jason Gustafson <jason@confluent.io>

Closes #1905 from ijuma/no-new-consumer-switch-in-examples

(cherry picked from commit 61d3378bc84914a521a65cdfffb7299928fa8671)
Signed-off-by: Jason Gustafson <jason@confluent.io>


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/dfdf2e6c
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/dfdf2e6c
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/dfdf2e6c

Branch: refs/heads/0.10.1
Commit: dfdf2e6cc026caef0dd02b2d870ef7bf86508b22
Parents: aadda5a
Author: Ismael Juma <ismael@juma.me.uk>
Authored: Tue Sep 27 16:49:46 2016 -0700
Committer: Jason Gustafson <jason@confluent.io>
Committed: Tue Sep 27 16:50:01 2016 -0700

----------------------------------------------------------------------
 .../scala/kafka/admin/ConsumerGroupCommand.scala  |  4 ++--
 docs/ops.html                                     | 18 +++++++++++-------
 docs/security.html                                |  2 +-
 docs/upgrade.html                                 |  3 +++
 4 files changed, 17 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala b/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
index 1cc63b1..5de2d26 100755
--- a/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
+++ b/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
@@ -439,8 +439,8 @@ object ConsumerGroupCommand {
 
         if (options.has(deleteOpt))
          CommandLineUtils.printUsageAndDie(parser, s"Option $deleteOpt is only valid with $zkConnectOpt. Note that " +
-            "there's no need to delete group metadata for the new consumer as it is automatically deleted when the last " +
-            "member leaves")
+            "there's no need to delete group metadata for the new consumer as the group is deleted when the last " +
+            "committed offset for that group expires.")
       }
 
       if (options.has(describeOpt))

http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index 0b3f6e3..a59e134 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -147,10 +147,14 @@ Note, however, after 0.9.0, the kafka.tools.ConsumerOffsetChecker tool is deprec
 
 <h4><a id="basic_ops_consumer_group" href="#basic_ops_consumer_group">Managing Consumer Groups</a></h4>
 
-With the ConsumerGroupCommand tool, we can list, delete, or describe consumer groups. For example, to list all consumer groups across all topics:
+With the ConsumerGroupCommand tool, we can list, describe, or delete consumer groups. Note that deletion is only available when the group metadata is stored in
+ZooKeeper. When using the <a href="http://kafka.apache.org/documentation.html#newconsumerapi">new consumer API</a> (where
+the broker handles coordination of partition handling and rebalance), the group is deleted when the last committed offset for that group expires.
+
+For example, to list all consumer groups across all topics:
 
 <pre>
- &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
+ &gt; bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list
 
 test-consumer-group
 </pre>
@@ -158,17 +162,17 @@ test-consumer-group
 To view offsets as in the previous example with the ConsumerOffsetChecker, we "describe" the consumer group like this:
 
 <pre>
- &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group test-consumer-group
+ &gt; bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group test-consumer-group
 
 GROUP                          TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             OWNER
-test-consumer-group            test-foo                       0          1               3               2               test-consumer-group_postamac.local-1456198719410-29ccd54f-0
+test-consumer-group            test-foo                       0          1               3               2               consumer-1_/127.0.0.1
 </pre>
 
-
-When you're using the <a href="http://kafka.apache.org/documentation.html#newconsumerapi">new consumer API</a> where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
+If you are using the old high-level consumer and storing the group metadata in ZooKeeper (i.e. <code>offsets.storage=zookeeper</code>), pass
+<code>--zookeeper</code> instead of <code>--bootstrap-server</code>:
 
 <pre>
- &gt; bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9092 --list
+ &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
 </pre>
 
 <h4><a id="basic_ops_cluster_expansion" href="#basic_ops_cluster_expansion">Expanding your cluster</a></h4>
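The ops.html hunk above swaps ZooKeeper-based invocations for broker-based ones. As a sketch of the resulting command lines (no running Kafka cluster is assumed here, so the commands are held in shell variables rather than executed; `broker1:9092`, `localhost:2181`, and `test-consumer-group` are the documentation's own example values):

```shell
# New consumer: point kafka-consumer-groups.sh at a broker instead of ZooKeeper.
list_cmd="bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list"
describe_cmd="bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group test-consumer-group"

# Old high-level consumer (offsets.storage=zookeeper): keep passing --zookeeper.
old_list_cmd="bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list"

echo "$list_cmd"
echo "$describe_cmd"
echo "$old_list_cmd"
```

Note that with the broker-based form, group deletion is not available; per the change above, a new-consumer group disappears on its own once its last committed offset expires.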

http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/docs/security.html
----------------------------------------------------------------------
diff --git a/docs/security.html b/docs/security.html
index d51c340..a00bbf6 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -204,7 +204,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
         Examples using console-producer and console-consumer:
         <pre>
         kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties
-        kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --new-consumer --consumer.config client-ssl.properties</pre>
+        kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</pre>
     </li>
 </ol>
 <h3><a id="security_sasl" href="#security_sasl">7.3 Authentication using SASL</a></h3>

http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/docs/upgrade.html
----------------------------------------------------------------------
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 2174018..6bf7c66 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -59,6 +59,9 @@ Note: Because new protocols are introduced, it is important to upgrade your Kafk
 <ul>
     <li> The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release
          and will be removed in a future major release. </li>
+    <li> The <code>--new-consumer</code>/<code>--new.consumer</code> switch is no longer required to use tools like MirrorMaker and the Console Consumer with the new consumer; one simply
+         needs to pass a Kafka broker to connect to instead of the ZooKeeper ensemble. In addition, usage of the Console Consumer with the old consumer has been deprecated and it will be
+         removed in a future major release. </li>
     <li> Kafka clusters can now be uniquely identified by a cluster id. It will be automatically generated when a broker is upgraded to 0.10.1.0. The cluster id is available via the kafka.server:type=KafkaServer,name=ClusterId metric and it is part of the Metadata response. Serializers, client interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface. </li>
     <li> The BrokerState "RunningAsController" (value 4) has been removed. Due to a bug, a broker would only be in this state briefly before transitioning out of it and hence the impact of the removal should be minimal. The recommended way to detect if a given broker is the controller is via the kafka.controller:type=KafkaController,name=ActiveControllerCount metric. </li>
     <li> The new Java Consumer now allows users to search offsets by timestamp on partitions.
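The upgrade note above can be illustrated with a before/after sketch of a console-consumer invocation (illustrative only; `localhost:9092` and the topic name `test` are assumed example values, and the command is held in a variable since no broker is running here):

```shell
# Before 0.10.1 the new consumer had to be selected explicitly:
#   bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic test
# From 0.10.1 on, passing --bootstrap-server alone selects the new consumer:
cmd="bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test"
echo "$cmd"
```

The same applies to MirrorMaker: supplying a broker address rather than a ZooKeeper ensemble is what selects the new consumer.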

