cassandra-user mailing list archives

From Cassa L <lcas...@gmail.com>
Subject Re: Spark Memory Error - Not enough space to cache broadcast
Date Tue, 14 Jun 2016 22:48:48 GMT
Hi,
I would appreciate any clue on this; it has become a bottleneck for our
Spark job.
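For anyone landing here from the archives: in Spark 1.5 the "memoryFraction parameters" referred to in the quoted message are the legacy static-memory-model settings, passed via spark-defaults.conf or `--conf` on spark-submit. A minimal sketch with illustrative values only (this is not a confirmed fix for the job described below):

```
# spark-defaults.conf (Spark 1.5, legacy static memory model; values illustrative)
spark.storage.memoryFraction   0.5   # heap share for cached blocks and broadcasts (default 0.6)
spark.shuffle.memoryFraction   0.3   # heap share for shuffle aggregation (default 0.2)
spark.storage.safetyFraction   0.9   # safety margin applied to the storage region (default 0.9)
```

The two fractions trade storage space against shuffle space out of the same executor heap, so raising one generally means lowering the other.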

On Mon, Jun 13, 2016 at 2:56 PM, Cassa L <lcassa8@gmail.com> wrote:

> Hi,
>
> I'm using Spark 1.5.1. I am reading data from Kafka into Spark and writing it
into Cassandra after processing it. The Spark job starts fine and runs well for some time,
until I start getting the errors below. Once these errors appear, the job starts to lag
behind, and I see scheduling and processing delays in the streaming UI.
>
> Worker memory is 6 GB and executor memory is 5 GB; I also tried tweaking the memoryFraction
parameters. Nothing works.
>
>
> 16/06/13 21:26:02 INFO MemoryStore: ensureFreeSpace(4044) called with curMem=565394, maxMem=2778495713
> 16/06/13 21:26:02 INFO MemoryStore: Block broadcast_69652_piece0 stored as bytes in memory (estimated size 3.9 KB, free 2.6 GB)
> 16/06/13 21:26:02 INFO TorrentBroadcast: Reading broadcast variable 69652 took 2 ms
> 16/06/13 21:26:02 WARN MemoryStore: Failed to reserve initial memory threshold of 1024.0 KB for computing block broadcast_69652 in memory.
> 16/06/13 21:26:02 WARN MemoryStore: Not enough space to cache broadcast_69652 in memory! (computed 496.0 B so far)
> 16/06/13 21:26:02 INFO MemoryStore: Memory use = 556.1 KB (blocks) + 2.6 GB (scratch space shared across 0 tasks(s)) = 2.6 GB. Storage limit = 2.6 GB.
> 16/06/13 21:26:02 WARN MemoryStore: Persisting block broadcast_69652 to disk instead.
> 16/06/13 21:26:02 INFO BlockManager: Found block rdd_100761_1 locally
> 16/06/13 21:26:02 INFO Executor: Finished task 0.0 in stage 71577.0 (TID 452316). 2043 bytes result sent to driver
>
>
> Thanks,
>
> L
>
>
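As a side note on reading the log above: in Spark 1.5's static memory model, the storage limit that MemoryStore reports is derived from the JVM max heap and the two storage fractions. A small sketch of that arithmetic (the `maxMem` figure is taken from the log; the fraction defaults are Spark 1.5's documented values):

```python
# Spark 1.5 (legacy/static memory model): the storage region available for
# cached RDD blocks and broadcast variables is approximately
#   jvm_max_memory * spark.storage.memoryFraction * spark.storage.safetyFraction
# with defaults 0.6 and 0.9 respectively.

def storage_limit(jvm_max_memory_bytes: float,
                  memory_fraction: float = 0.6,
                  safety_fraction: float = 0.9) -> float:
    """Bytes usable for storing cached blocks and broadcasts."""
    return jvm_max_memory_bytes * memory_fraction * safety_fraction

# The log reports maxMem=2778495713 (~2.6 GB). Working backwards with the
# default fractions gives the JVM max memory the executor actually sees:
implied_heap = 2778495713 / (0.6 * 0.9)
print(f"implied JVM max memory: {implied_heap / 2**30:.2f} GiB")
# → implied JVM max memory: 4.79 GiB
```

That ~4.8 GiB figure is plausibly consistent with the 5 GB `executor-memory` setting, since `Runtime.maxMemory` typically reports somewhat less than the configured `-Xmx` value; the point is that with the default fractions, only about half of the executor heap is ever available for broadcasts and cached blocks.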
