spark-issues mailing list archives

From "Sergei Lebedev (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SPARK-22062) BlockManager does not account for memory consumed by remote fetches
Date Tue, 19 Sep 2017 15:22:00 GMT
Sergei Lebedev created SPARK-22062:
--------------------------------------

             Summary: BlockManager does not account for memory consumed by remote fetches
                 Key: SPARK-22062
                 URL: https://issues.apache.org/jira/browse/SPARK-22062
             Project: Spark
          Issue Type: Bug
          Components: Block Manager
    Affects Versions: 2.2.0
            Reporter: Sergei Lebedev
            Priority: Minor


We use Spark exclusively with {{StorageLevel.DISK_ONLY}}, as our workloads are very sensitive
to memory usage. Recently, we've noticed that jobs sometimes fail with OOM, leaving lots of
{{byte[]}} arrays on the heap. Upon further investigation, we found that these arrays come from
{{BlockManager.getRemoteBytes}}, which [calls|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/BlockManager.scala#L638]
{{BlockTransferService.fetchBlockSync}}, which in turn [allocates|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/network/BlockTransferService.scala#L99]
an on-heap {{ByteBuffer}} the size of the whole block (e.g. a full partition) whenever the
block is successfully retrieved over the network.
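
For reference, here is a condensed sketch of the fetch listener inside {{fetchBlockSync}},
paraphrased from the linked sources (details may vary between versions; this is not the
verbatim code):

{code:scala}
import java.nio.ByteBuffer
import scala.concurrent.Promise
import org.apache.spark.network.buffer.{ManagedBuffer, NioManagedBuffer}
import org.apache.spark.network.shuffle.BlockFetchingListener

// `result` is the Promise that fetchBlockSync blocks on until the fetch completes.
val result = Promise[ManagedBuffer]()
val listener = new BlockFetchingListener {
  override def onBlockFetchFailure(blockId: String, exception: Throwable): Unit =
    result.failure(exception)
  override def onBlockFetchSuccess(blockId: String, data: ManagedBuffer): Unit = {
    // The problematic allocation: the entire block is copied into a fresh
    // *on-heap* ByteBuffer, which is never registered with the memory manager.
    val ret = ByteBuffer.allocate(data.size.toInt)
    ret.put(data.nioByteBuffer())
    ret.flip()
    result.success(new NioManagedBuffer(ret))
  }
}
{code}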

This memory is not charged against Spark's storage/execution memory, so it can lead to OOM
when {{BlockManager}} fetches too many partitions in parallel. Is this intentional behaviour,
or in fact a bug?
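
For scale, a back-of-envelope estimate: with 8 concurrent tasks per executor, each fetching a
~1 GiB {{DISK_ONLY}} partition from a remote executor, up to ~8 GiB of unaccounted on-heap
buffers can be live at once. Below is a hypothetical repro sketch (sizes, app name, and job
shape are illustrative, not taken from our production workload):

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

val sc = new SparkContext(new SparkConf().setAppName("remote-fetch-oom"))

// Cache ~64 MiB per partition on local disk only.
val cached = sc.parallelize(0 until 512, numSlices = 512)
  .map(i => Array.fill(64 << 20)(i.toByte))
  .persist(StorageLevel.DISK_ONLY)
cached.count()  // materialize the disk cache

// A later stage whose tasks land on executors that do not hold a partition
// locally goes through BlockManager.getRemoteBytes; each remote read then
// copies the full partition into an unaccounted on-heap ByteBuffer.
println(cached.map(_.length.toLong).reduce(_ + _))
{code}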


