From: zsxwing@apache.org
To: commits@spark.apache.org
Message-Id: <14256bdb00474d39820c91e071840f66@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: spark git commit: [STREAMING][MINOR] Typo fixes
Date: Mon, 11 Jan 2016 19:29:26 +0000 (UTC)

Repository: spark
Updated Branches:
  refs/heads/branch-1.6 d4cfd2acd -> ce906b33d

[STREAMING][MINOR] Typo fixes

Author: Jacek Laskowski

Closes #10698 from jaceklaskowski/streaming-kafka-typo-fixes.
(cherry picked from commit b313badaa049f847f33663c61cd70ee2f2cbebac)
Signed-off-by: Shixiong Zhu

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ce906b33
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ce906b33
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ce906b33

Branch: refs/heads/branch-1.6
Commit: ce906b33de64f55653b52376316aa2625fd86b47
Parents: d4cfd2a
Author: Jacek Laskowski
Authored: Mon Jan 11 11:29:15 2016 -0800
Committer: Shixiong Zhu
Committed: Mon Jan 11 11:29:23 2016 -0800

----------------------------------------------------------------------
 .../main/scala/org/apache/spark/streaming/kafka/KafkaCluster.scala | 2 +-
 .../src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/ce906b33/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaCluster.scala
----------------------------------------------------------------------
diff --git a/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaCluster.scala b/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaCluster.scala
index 8465432..e3a2e57 100644
--- a/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaCluster.scala
+++ b/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaCluster.scala
@@ -382,7 +382,7 @@ object KafkaCluster {
     val seedBrokers: Array[(String, Int)] = brokers.split(",").map { hp =>
       val hpa = hp.split(":")
       if (hpa.size == 1) {
-        throw new SparkException(s"Broker not the in correct format of : [$brokers]")
+        throw new SparkException(s"Broker not in the correct format of : [$brokers]")
       }
       (hpa(0), hpa(1).toInt)
     }

http://git-wip-us.apache.org/repos/asf/spark/blob/ce906b33/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala
----------------------------------------------------------------------
diff --git a/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala b/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala
index ea5f842..4dbaf4f 100644
--- a/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala
+++ b/external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala
@@ -156,7 +156,7 @@ class KafkaRDD[
     var requestOffset = part.fromOffset
     var iter: Iterator[MessageAndOffset] = null

-    // The idea is to use the provided preferred host, except on task retry atttempts,
+    // The idea is to use the provided preferred host, except on task retry attempts,
     // to minimize number of kafka metadata requests
     private def connectLeader: SimpleConsumer = {
       if (context.attemptNumber > 0) {

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org
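For context, the KafkaCluster.scala hunk above touches the parser that turns a comma-separated broker list into (host, port) pairs. The following is a minimal, self-contained sketch of that logic, not the Spark code itself: `IllegalArgumentException` stands in for `org.apache.spark.SparkException` so the snippet runs without Spark on the classpath, and the error text uses the corrected wording from the commit.

```scala
// Hypothetical standalone sketch of the broker-list parsing shown in
// the KafkaCluster.scala hunk. Each entry must be host:port; an entry
// with no ":" separator is rejected, mirroring the hpa.size == 1 check.
object BrokerParse {
  def seedBrokers(brokers: String): Array[(String, Int)] =
    brokers.split(",").map { hp =>
      val hpa = hp.split(":")
      if (hpa.size == 1) {
        // IllegalArgumentException substitutes for SparkException here;
        // the message uses the commit's corrected word order.
        throw new IllegalArgumentException(
          s"Broker not in the correct format of : [$brokers]")
      }
      (hpa(0), hpa(1).toInt)
    }
}
```

For example, `BrokerParse.seedBrokers("host1:9092,host2:9093")` yields `Array(("host1", 9092), ("host2", 9093))`, while a bare `"host1"` entry (no port) throws with the corrected message.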