spark-commits mailing list archives

Subject spark git commit: Little typo
Date Fri, 03 Aug 2018 22:39:43 GMT
Repository: spark
Updated Branches:
  refs/heads/master 92b48842b -> 8c14276c3

Little typo

## What changes were proposed in this pull request?
Fixed little typo for a comment

## How was this patch tested?
Manual test

Please review before opening a pull request.

Author: Onwuka Gideon <>

Closes #21992 from dongido001/patch-1.


Branch: refs/heads/master
Commit: 8c14276c3362798b030db7a9fcdc31a10d04b643
Parents: 92b4884
Author: Onwuka Gideon <>
Authored: Fri Aug 3 17:39:40 2018 -0500
Committer: Sean Owen <>
Committed: Fri Aug 3 17:39:40 2018 -0500

 python/pyspark/streaming/ | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/python/pyspark/streaming/ b/python/pyspark/streaming/
index a451582..3fa57ca 100644
--- a/python/pyspark/streaming/
+++ b/python/pyspark/streaming/
@@ -222,7 +222,7 @@ class StreamingContext(object):
         Set each DStreams in this context to remember RDDs it generated
         in the last given duration. DStreams remember RDDs only for a
         limited duration of time and releases them for garbage collection.
-        This method allows the developer to specify how to long to remember
+        This method allows the developer to specify how long to remember
         the RDDs (if the developer wishes to query old data outside the
         DStream computation).
@@ -287,7 +287,7 @@ class StreamingContext(object):
     def queueStream(self, rdds, oneAtATime=True, default=None):
-        Create an input stream from an queue of RDDs or list. In each batch,
+        Create an input stream from a queue of RDDs or list. In each batch,
         it will process either one or all of the RDDs returned by the queue.
         .. note:: Changes to the queue after the stream is created will not be recognized.

