spark-issues mailing list archives

From "Matthew Farrellee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-1284) pyspark hangs after IOError on Executor
Date Wed, 02 Jul 2014 14:12:24 GMT

    [ https://issues.apache.org/jira/browse/SPARK-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049937#comment-14049937 ]

Matthew Farrellee commented on SPARK-1284:
------------------------------------------

[~jblomo] -

Will you add a reproducer script to this issue?

I did a simple test based on what you suggested, with the tip of master, and could not reproduce it:

{code}
$ ./dist/bin/pyspark
Python 2.7.5 (default, Feb 19 2014, 13:47:28) 
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
...
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.0.0-SNAPSHOT
      /_/

Using Python version 2.7.5 (default, Feb 19 2014 13:47:28)
SparkContext available as sc.
>>> data = sc.textFile('/etc/passwd')
14/07/02 07:03:59 INFO MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=308910489
14/07/02 07:03:59 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 294.6 MB)
>>> data.cache()
/etc/passwd MappedRDD[1] at textFile at NativeMethodAccessorImpl.java:-2
>>> data.take(10)
...[expected output]...
>>> data.flatMap(lambda line: line.split(':')).map(lambda word: (word, 1)).reduceByKey(lambda x, y: x + y).collect()
...[expected output, no hang]...
{code}
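
For reference, a standalone reproducer along the lines of the report below would cache the RDD, call take(10), and then run the full reduceByKey. A minimal sketch follows; the input path is an assumption (it should be a file large enough to span several partitions so take(10) only reads part of the data), and this is not confirmed to trigger the hang:

{code}
# Sketch of a reproducer per the issue description: cache the RDD,
# materialize a small prefix with take(10), then run a full reduceByKey.
# The input path below is an assumption, not taken from the report.
from pyspark import SparkContext

sc = SparkContext(appName="SPARK-1284-repro")

data = sc.textFile("/path/to/large/input.txt")  # assumed path
data.cache()

print(data.take(10))  # partial evaluation of the cached RDD

counts = (data.flatMap(lambda line: line.split(':'))
              .map(lambda word: (word, 1))
              .reduceByKey(lambda x, y: x + y)
              .collect())  # full scan; the hang is reported at this step

print(len(counts))
sc.stop()
{code}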

> pyspark hangs after IOError on Executor
> ---------------------------------------
>
>                 Key: SPARK-1284
>                 URL: https://issues.apache.org/jira/browse/SPARK-1284
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>            Reporter: Jim Blomo
>
> When running a reduceByKey over a cached RDD, Python fails with an exception, but the failure is not detected by the task runner. Spark and the pyspark shell hang waiting for the task to finish.
> The error is:
> {code}
> PySpark worker failed with exception:
> Traceback (most recent call last):
>   File "/home/hadoop/spark/python/pyspark/worker.py", line 77, in main
>     serializer.dump_stream(func(split_index, iterator), outfile)
>   File "/home/hadoop/spark/python/pyspark/serializers.py", line 182, in dump_stream
>     self.serializer.dump_stream(self._batched(iterator), stream)
>   File "/home/hadoop/spark/python/pyspark/serializers.py", line 118, in dump_stream
>     self._write_with_length(obj, stream)
>   File "/home/hadoop/spark/python/pyspark/serializers.py", line 130, in _write_with_length
>     stream.write(serialized)
> IOError: [Errno 104] Connection reset by peer
> 14/03/19 22:48:15 INFO scheduler.TaskSetManager: Serialized task 4.0:0 as 4257 bytes in 47 ms
> Traceback (most recent call last):
>   File "/home/hadoop/spark/python/pyspark/daemon.py", line 117, in launch_worker
>     worker(listen_sock)
>   File "/home/hadoop/spark/python/pyspark/daemon.py", line 107, in worker
>     outfile.flush()
> IOError: [Errno 32] Broken pipe
> {code}
> I can reproduce the error by running take(10) on the cached RDD before running reduceByKey (which looks at the whole input file).
> Affects Version 1.0.0-SNAPSHOT (4d88030486)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
