Return-Path:
X-Original-To: apmail-kafka-commits-archive@www.apache.org
Delivered-To: apmail-kafka-commits-archive@www.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id 19F2218F4C for ; Tue, 7 Jul 2015 20:37:11 +0000 (UTC)
Received: (qmail 86791 invoked by uid 500); 7 Jul 2015 20:37:11 -0000
Delivered-To: apmail-kafka-commits-archive@kafka.apache.org
Received: (qmail 86764 invoked by uid 500); 7 Jul 2015 20:37:11 -0000
Mailing-List: contact commits-help@kafka.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: dev@kafka.apache.org
Delivered-To: mailing list commits@kafka.apache.org
Received: (qmail 86755 invoked by uid 99); 7 Jul 2015 20:37:10 -0000
Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 07 Jul 2015 20:37:10 +0000
Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id BE32BE04B0; Tue, 7 Jul 2015 20:37:10 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: guozhang@apache.org
To: commits@kafka.apache.org
Message-Id: <7c0db7cd83154d89bf96de6e00f1e3bc@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: kafka git commit: KAFKA-2313: javadoc fix for KafkaConsumer deserialization; reviewed by Guozhang Wang
Date: Tue, 7 Jul 2015 20:37:10 +0000 (UTC)

Repository: kafka
Updated Branches:
  refs/heads/trunk 826276de1 -> f13dd8024


KAFKA-2313: javadoc fix for KafkaConsumer deserialization; reviewed by Guozhang Wang


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/f13dd802
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/f13dd802
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/f13dd802

Branch: refs/heads/trunk
Commit: f13dd8024d5bc1c11587a3b539556ea01e2c84ca
Parents: 826276d
Author: Onur Karaman
Authored: Tue Jul 7 13:36:55 2015 -0700
Committer: Guozhang Wang
Committed: Tue Jul 7 13:36:55 2015 -0700

----------------------------------------------------------------------
 .../apache/kafka/clients/consumer/KafkaConsumer.java | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/f13dd802/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
----------------------------------------------------------------------
diff --git a/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java b/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
index 1f0e515..7aa0760 100644
--- a/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
+++ b/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
@@ -131,8 +131,8 @@ import static org.apache.kafka.common.utils.Utils.min;
  * props.put("enable.auto.commit", "true");
  * props.put("auto.commit.interval.ms", "1000");
  * props.put("session.timeout.ms", "30000");
- * props.put("key.serializer", "org.apache.kafka.common.serializers.StringSerializer");
- * props.put("value.serializer", "org.apache.kafka.common.serializers.StringSerializer");
+ * props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+ * props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  * KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;String, String&gt;(props);
  * consumer.subscribe("foo", "bar");
  * while (true) {
@@ -159,8 +159,8 @@ import static org.apache.kafka.common.utils.Utils.min;
  * to it. If it stops heartbeating for a period of time longer than session.timeout.ms then it will be
  * considered dead and it's partitions will be assigned to another process.
  * <p>
- * The serializers settings specify how to turn the objects the user provides into bytes. By specifying the string
- * serializers we are saying that our record's key and value will just be simple strings.
+ * The deserializer settings specify how to turn bytes into objects. For example, by specifying string deserializers, we
+ * are saying that our record's key and value will just be simple strings.
  * <p>
  * <h3>Controlling When Messages Are Considered Consumed</h3>
  * <p>
@@ -183,8 +183,8 @@ import static org.apache.kafka.common.utils.Utils.min;
  * props.put("enable.auto.commit", "false");
  * props.put("auto.commit.interval.ms", "1000");
  * props.put("session.timeout.ms", "30000");
- * props.put("key.serializer", "org.apache.kafka.common.serializers.StringSerializer");
- * props.put("value.serializer", "org.apache.kafka.common.serializers.StringSerializer");
+ * props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+ * props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  * KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;String, String&gt;(props);
  * consumer.subscribe("foo", "bar");
  * int commitInterval = 200;
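
----------------------------------------------------------------------

For reference, below is a minimal, self-contained sketch of the corrected consumer configuration that this javadoc patch describes. It uses the key.deserializer/value.deserializer properties and the org.apache.kafka.common.serialization.StringDeserializer class exactly as named in the diff; the bootstrap.servers address, group.id, topic names, and the poll-and-print loop are illustrative assumptions rather than part of the patch. It is written against the released consumer API of this era (note that the javadoc at this revision shows a varargs subscribe("foo", "bar"), while released clients take a list of topics), so exact signatures may differ from the trunk code being patched.

    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker, not from the patch
            props.put("group.id", "test");                    // assumed consumer group, not from the patch
            props.put("enable.auto.commit", "true");
            props.put("auto.commit.interval.ms", "1000");
            props.put("session.timeout.ms", "30000");
            // The KAFKA-2313 fix: a consumer is configured with deserializers, not serializers.
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
            consumer.subscribe(Arrays.asList("foo", "bar"));
            while (true) {
                // Fetch whatever is available, waiting up to 100 ms (arbitrary timeout).
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }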