spark-user mailing list archives

From Jeffrey Jedele <jeffrey.jed...@gmail.com>
Subject Re: SparkStreaming failing with exception Could not compute split, block input
Date Fri, 27 Feb 2015 10:46:01 GMT
I don't have an answer offhand, but a little more context would be helpful.

What is the source of your streaming data? What storage level are you using?
What are you doing with the stream? Some kind of window operations?

Regards,
Jeff
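One thing worth checking while you gather those details: this error often
shows up when received blocks are stored with a memory-only level and get
evicted before the batch that needs them runs. Below is a minimal sketch of
pinning the storage level at stream creation time, assuming a Kafka receiver
(the ZooKeeper quorum, group id, and topic map are placeholders, not values
from your setup):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("streaming-sketch")
val ssc  = new StreamingContext(conf, Seconds(10))

// MEMORY_AND_DISK_SER_2 spills blocks to disk (and replicates them) instead
// of dropping them under memory pressure, so a task can still read a block
// that no longer fits in memory rather than failing with
// "Could not compute split, block input-... not found".
val stream = KafkaUtils.createStream(
  ssc,
  "zk-host:2181",      // placeholder ZooKeeper quorum
  "consumer-group",    // placeholder consumer group id
  Map("topic" -> 1),   // placeholder topic -> receiver thread count
  StorageLevel.MEMORY_AND_DISK_SER_2)
```

If you are on Spark 1.2 or later, the receiver write-ahead log
(spark.streaming.receiver.writeAheadLog.enable) is also worth a look, since
it keeps received data recoverable independently of block eviction.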

2015-02-26 18:59 GMT+01:00 Mukesh Jha <me.mukesh.jha@gmail.com>:

>
> On Wed, Feb 25, 2015 at 8:09 PM, Mukesh Jha <me.mukesh.jha@gmail.com>
> wrote:
>
> My application runs fine for ~3-4 hours and then hits this issue.
>>
>> On Wed, Feb 25, 2015 at 11:34 AM, Mukesh Jha <me.mukesh.jha@gmail.com>
>> wrote:
>>
>>> Hi Experts,
>>>
>>> My Spark Job is failing with below error.
>>>
>>> From the logs I can see that input-3-1424842351600 was added at 5:32:32,
>>> and I see no indication that it was ever purged from memory. Also, the
>>> executor reports *2.1G* of free memory.
>>>
>>> Please help me figure out why executors cannot fetch this input.
>>>
>>> Thanks for any help. Cheers.
>>>
>>>
>>> *Logs*
>>> 15/02/25 05:32:32 INFO storage.BlockManagerInfo: Added
>>> input-3-1424842351600 in memory on
>>> chsnmphbase31.usdc2.oraclecloud.com:50208 (size: 276.1 KB, free: 2.1 GB)
>>> .
>>> .
>>> 15/02/25 05:32:43 INFO storage.BlockManagerInfo: Added
>>> input-1-1424842362600 in memory on chsnmphbase30.usdc2.cloud.com:35919
>>> (size: 232.3 KB, free: 2.1 GB)
>>> 15/02/25 05:32:43 INFO storage.BlockManagerInfo: Added
>>> input-4-1424842363000 in memory on chsnmphbase23.usdc2.cloud.com:37751
>>> (size: 291.4 KB, free: 2.1 GB)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 32.1 in
>>> stage 451.0 (TID 22511, chsnmphbase19.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 37.1 in
>>> stage 451.0 (TID 22512, chsnmphbase23.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 31.1 in
>>> stage 451.0 (TID 22513, chsnmphbase30.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 34.1 in
>>> stage 451.0 (TID 22514, chsnmphbase26.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 36.1 in
>>> stage 451.0 (TID 22515, chsnmphbase19.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 39.1 in
>>> stage 451.0 (TID 22516, chsnmphbase23.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 30.1 in
>>> stage 451.0 (TID 22517, chsnmphbase30.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 33.1 in
>>> stage 451.0 (TID 22518, chsnmphbase26.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 35.1 in
>>> stage 451.0 (TID 22519, chsnmphbase19.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 INFO scheduler.TaskSetManager: Starting task 38.1 in
>>> stage 451.0 (TID 22520, chsnmphbase23.usdc2.cloud.com, RACK_LOCAL, 1288
>>> bytes)
>>> 15/02/25 05:32:43 WARN scheduler.TaskSetManager: Lost task 32.1 in stage
>>> 451.0 (TID 22511, chsnmphbase19.usdc2.cloud.com): java.lang.Exception:
>>> Could not compute split, block input-3-1424842351600 not found
>>>         at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>>>         at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>>>         at
>>> org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
>>>         at
>>> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>>>         at
>>> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>>>         at org.apache.spark.scheduler.Task.run(Task.scala:56)
>>>         at
>>> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>> 15/02/25 05:32:43 WARN scheduler.TaskSetManager: Lost task 36.1 in stage
>>> 451.0 (TID 22515, chsnmphbase19.usdc2.cloud.com): java.lang.Exception:
>>> Could not compute split, block input-3-1424842355600 not found
>>>         at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
>>>
>>> --
>>> Thanks & Regards,
>>>
>>> *Mukesh Jha <me.mukesh.jha@gmail.com>*
>>>
