From: jgus@apache.org
To: commits@kafka.apache.org
Reply-To: dev@kafka.apache.org
Subject: kafka git commit: MINOR: Remove no longer required --new-consumer switch in docs
Date: Tue, 27 Sep 2016 23:50:05 +0000 (UTC)

Repository: kafka
Updated Branches:
  refs/heads/0.10.1 aadda5aac -> dfdf2e6cc

MINOR: Remove no longer required --new-consumer switch in docs

Author: Ismael Juma
Reviewers: Jason Gustafson

Closes #1905 from ijuma/no-new-consumer-switch-in-examples

(cherry picked from commit 61d3378bc84914a521a65cdfffb7299928fa8671)
Signed-off-by: Jason Gustafson
Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/dfdf2e6c
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/dfdf2e6c
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/dfdf2e6c

Branch: refs/heads/0.10.1
Commit: dfdf2e6cc026caef0dd02b2d870ef7bf86508b22
Parents: aadda5a
Author: Ismael Juma
Authored: Tue Sep 27 16:49:46 2016 -0700
Committer: Jason Gustafson
Committed: Tue Sep 27 16:50:01 2016 -0700

----------------------------------------------------------------------
 .../scala/kafka/admin/ConsumerGroupCommand.scala |  4 ++--
 docs/ops.html                                    | 18 +++++++++++-------
 docs/security.html                               |  2 +-
 docs/upgrade.html                                |  3 +++
 4 files changed, 17 insertions(+), 10 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala b/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
index 1cc63b1..5de2d26 100755
--- a/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
+++ b/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
@@ -439,8 +439,8 @@ object ConsumerGroupCommand {
     if (options.has(deleteOpt))
       CommandLineUtils.printUsageAndDie(parser, s"Option $deleteOpt is only valid with $zkConnectOpt. Note that " +
-        "there's no need to delete group metadata for the new consumer as it is automatically deleted when the last " +
-        "member leaves")
+        "there's no need to delete group metadata for the new consumer as the group is deleted when the last " +
+        "committed offset for that group expires.")
     if (options.has(describeOpt))

http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index 0b3f6e3..a59e134 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -147,10 +147,14 @@ Note, however, after 0.9.0, the kafka.tools.ConsumerOffsetChecker tool is deprec

Managing Consumer Groups

-With the ConsumerGroupCommand tool, we can list, delete, or describe consumer groups. For example, to list all consumer groups across all topics:
+With the ConsumerGroupCommand tool, we can list, describe, or delete consumer groups. Note that deletion is only available when the group metadata is stored in
+ZooKeeper. When using the new consumer API (where
+the broker handles coordination of partition handling and rebalance), the group is deleted when the last committed offset for that group expires.
+
+For example, to list all consumer groups across all topics:
- > bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
+ > bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list
 
 test-consumer-group
 
@@ -158,17 +162,17 @@ test-consumer-group
 
 To view offsets as in the previous example with the ConsumerOffsetChecker, we "describe" the consumer group like this:
- > bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group test-consumer-group
+ > bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 --describe --group test-consumer-group
 
 GROUP                          TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             OWNER
-test-consumer-group            test-foo                       0          1               3               2               test-consumer-group_postamac.local-1456198719410-29ccd54f-0
+test-consumer-group            test-foo                       0          1               3               2               consumer-1_/127.0.0.1
 
-
-When you're using the new consumer API where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
+If you are using the old high-level consumer and storing the group metadata in ZooKeeper (i.e. offsets.storage=zookeeper), pass
+--zookeeper instead of bootstrap-server:
- > bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9092 --list
+ > bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
 

Expanding your cluster

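The "describe" output quoted in the hunk above shows CURRENT-OFFSET, LOG-END-OFFSET and LAG columns; the LAG column is simply the log-end offset minus the group's committed offset for each partition. A minimal, self-contained sketch of that arithmetic (the class and method names here are illustrative, not part of the Kafka API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrates the lag computation behind the kafka-consumer-groups.sh
// "describe" output: LAG = LOG-END-OFFSET - CURRENT-OFFSET per partition.
public class ConsumerLag {

    // Lag for a single partition.
    public static long lag(long currentOffset, long logEndOffset) {
        return logEndOffset - currentOffset;
    }

    // Lag per partition; each value array holds {currentOffset, logEndOffset}.
    public static Map<Integer, Long> lagByPartition(Map<Integer, long[]> offsets) {
        Map<Integer, Long> result = new LinkedHashMap<>();
        for (Map.Entry<Integer, long[]> e : offsets.entrySet()) {
            result.put(e.getKey(), lag(e.getValue()[0], e.getValue()[1]));
        }
        return result;
    }

    public static void main(String[] args) {
        // Matches the sample row above: partition 0, current offset 1,
        // log-end offset 3, so the reported lag is 2.
        System.out.println(lag(1, 3));
    }
}
```
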
http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/docs/security.html
----------------------------------------------------------------------
diff --git a/docs/security.html b/docs/security.html
index d51c340..a00bbf6 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -204,7 +204,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
 Examples using console-producer and console-consumer:
         kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties
-        kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --new-consumer --consumer.config client-ssl.properties
+        kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties

7.3 Authentication using SASL

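The client-ssl.properties file referenced by the console commands above is not part of this diff. A minimal sketch of what such a client config might contain (the paths and password below are placeholders, not values from the commit):

```properties
# Hypothetical client-ssl.properties for the console producer/consumer.
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
```

The same file is passed via --producer.config to the producer and --consumer.config to the consumer.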
http://git-wip-us.apache.org/repos/asf/kafka/blob/dfdf2e6c/docs/upgrade.html
----------------------------------------------------------------------
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 2174018..6bf7c66 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -59,6 +59,9 @@ Note: Because new protocols are introduced, it is important to upgrade your Kafk
  • The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release and will be removed in a future major release.
+ • The --new-consumer/--new.consumer switch is no longer required to use tools like MirrorMaker and the Console Consumer with the new consumer; one simply
+   needs to pass a Kafka broker to connect to instead of the ZooKeeper ensemble. In addition, usage of the Console Consumer with the old consumer has been deprecated and it will be
+   removed in a future major release.
  • Kafka clusters can now be uniquely identified by a cluster id. It will be automatically generated when a broker is upgraded to 0.10.1.0. The cluster id is available via the kafka.server:type=KafkaServer,name=ClusterId metric and it is part of the Metadata response. Serializers, client interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface.
  • The BrokerState "RunningAsController" (value 4) has been removed. Due to a bug, a broker would only be in this state briefly before transitioning out of it and hence the impact of the removal should be minimal. The recommended way to detect if a given broker is the controller is via the kafka.controller:type=KafkaController,name=ActiveControllerCount metric.
  • The new Java Consumer now allows users to search offsets by timestamp on partitions.
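The last note above refers to searching offsets by timestamp. The semantics can be sketched as follows: given a partition whose record timestamps are non-decreasing, the lookup returns the earliest offset whose timestamp is at or after the target, or a sentinel when no such record exists. This is only an illustration of that behaviour with a local binary search, not the Kafka consumer API itself:

```java
// Sketch of offsets-by-timestamp lookup semantics: find the first offset
// whose timestamp is >= target, or -1 if every record is older than target.
// Assumes timestamps[] is sorted in non-decreasing order (index == offset).
public class OffsetForTimestamp {

    public static int offsetForTimestamp(long[] timestamps, long target) {
        int lo = 0, hi = timestamps.length;
        // Binary search for the first index with timestamps[i] >= target.
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (timestamps[mid] >= target) hi = mid;
            else lo = mid + 1;
        }
        return lo == timestamps.length ? -1 : lo;
    }

    public static void main(String[] args) {
        long[] ts = {100, 200, 200, 350, 500};
        // First offset with timestamp >= 200 is offset 1.
        System.out.println(offsetForTimestamp(ts, 200));
        // No record at or after t=600, so -1.
        System.out.println(offsetForTimestamp(ts, 600));
    }
}
```
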