spark-commits mailing list archives

From joshro...@apache.org
Subject git commit: [SPARK-3035] Wrong example with SparkContext.addFile
Date Sat, 16 Aug 2014 23:48:46 GMT
Repository: spark
Updated Branches:
  refs/heads/master ac6411c6e -> 379e7585c


[SPARK-3035] Wrong example with SparkContext.addFile

https://issues.apache.org/jira/browse/SPARK-3035

Fixes an incorrect example in the documentation.

Author: iAmGhost <kdh7807@gmail.com>

Closes #1942 from iAmGhost/master and squashes the following commits:

487528a [iAmGhost] [SPARK-3035] Wrong example with SparkContext.addFile fix for wrong document.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/379e7585
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/379e7585
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/379e7585

Branch: refs/heads/master
Commit: 379e7585c356f20bf8b4878ecba9401e2195da12
Parents: ac6411c
Author: iAmGhost <kdh7807@gmail.com>
Authored: Sat Aug 16 16:48:38 2014 -0700
Committer: Josh Rosen <joshrosen@apache.org>
Committed: Sat Aug 16 16:48:38 2014 -0700

----------------------------------------------------------------------
 python/pyspark/context.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/379e7585/python/pyspark/context.py
----------------------------------------------------------------------
diff --git a/python/pyspark/context.py b/python/pyspark/context.py
index 4001eca..6c04923 100644
--- a/python/pyspark/context.py
+++ b/python/pyspark/context.py
@@ -613,7 +613,7 @@ class SparkContext(object):
         >>> def func(iterator):
         ...    with open(SparkFiles.get("test.txt")) as testFile:
         ...        fileVal = int(testFile.readline())
-        ...        return [x * 100 for x in iterator]
+        ...        return [x * fileVal for x in iterator]
         >>> sc.parallelize([1, 2, 3, 4]).mapPartitions(func).collect()
         [100, 200, 300, 400]
         """
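The one-line fix makes the doctest multiply each partition element by the value actually read from the distributed file, rather than a hard-coded 100. A minimal pure-Python sketch of the corrected logic (no SparkContext; a local temp file stands in for the file shipped by `SparkContext.addFile`, and a single iterator simulates one partition passed to `mapPartitions`):

```python
import os
import tempfile

# Write a stand-in for the file that SparkContext.addFile would distribute
# to each worker; it contains only the multiplier.
tempdir = tempfile.mkdtemp()
path = os.path.join(tempdir, "test.txt")
with open(path, "w") as f:
    f.write("100")

def func(iterator):
    # Mirror of the corrected doctest: multiply by fileVal read from the
    # file, not by a literal 100.
    with open(path) as testFile:
        fileVal = int(testFile.readline())
        return [x * fileVal for x in iterator]

# Simulate sc.parallelize([1, 2, 3, 4]).mapPartitions(func).collect()
# with a single partition.
result = func(iter([1, 2, 3, 4]))
print(result)  # [100, 200, 300, 400]
```

With the old `x * 100`, the doctest still printed `[100, 200, 300, 400]`, but only by coincidence: `fileVal` was read and never used, so the example failed to demonstrate that `addFile` makes the file's contents available to workers.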


