spark-issues mailing list archives

From "Kazuaki Ishizaki (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-21501) Spark shuffle index cache size should be memory based
Date Tue, 25 Jul 2017 01:09:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-21501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099351#comment-16099351 ]

Kazuaki Ishizaki commented on SPARK-21501:
------------------------------------------

I see. I misunderstood the description.
You expect that entries would stay cached in memory even when the number of entries is larger than {{spark.shuffle.service.index.cache.entries}}, as long as the total cache size is not large.

> Spark shuffle index cache size should be memory based
> -----------------------------------------------------
>
>                 Key: SPARK-21501
>                 URL: https://issues.apache.org/jira/browse/SPARK-21501
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle
>    Affects Versions: 2.1.0
>            Reporter: Thomas Graves
>
> Right now the spark shuffle service has a cache for index files. It is based on the number
> of files cached (spark.shuffle.service.index.cache.entries). This can cause issues if people
> have a lot of reducers, because the size of each entry can fluctuate based on the number of reducers.
> We saw an issue with a job that had 170000 reducers: it caused the NM running the spark shuffle
> service to use 700-800 MB of memory by itself.
> We should change this cache to be memory based and only allow a certain memory size to be used.
> When I say memory based, I mean the cache should have a limit of, say, 100 MB.
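
For reference, a minimal sketch of what a memory-based limit could look like, using Guava's weight-based eviction. The 100 MB cap, the class names, and the way the index data is represented are assumptions for illustration only, not Spark's actual implementation:

{code:java}
// Sketch only: a byte-weighted cache instead of an entry-count-limited one.
// IndexInfo, retainedSizeInBytes, and the 100 MB cap are illustrative.
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.Weigher;

import java.io.File;

public class IndexCacheSketch {
  // Stand-in for the parsed contents of one shuffle index file;
  // its size grows with the number of reducers.
  static class IndexInfo {
    final long[] offsets;
    IndexInfo(long[] offsets) { this.offsets = offsets; }
    int retainedSizeInBytes() { return 8 * offsets.length; }
  }

  public static void main(String[] args) throws Exception {
    long maxBytes = 100L * 1024 * 1024;  // cap by memory (e.g. 100 MB), not by entry count

    LoadingCache<File, IndexInfo> indexCache = CacheBuilder.newBuilder()
        .maximumWeight(maxBytes)
        // Evict based on the estimated bytes each entry retains.
        .weigher((Weigher<File, IndexInfo>) (file, info) -> info.retainedSizeInBytes())
        .build(new CacheLoader<File, IndexInfo>() {
          @Override
          public IndexInfo load(File indexFile) {
            // Real code would read the offsets from the index file on disk;
            // here we just fake an entry of a fixed size.
            return new IndexInfo(new long[1024]);
          }
        });

    IndexInfo info = indexCache.get(new File("shuffle_0_0_0.index"));
    System.out.println("cached entry weighs " + info.retainedSizeInBytes() + " bytes");
  }
}
{code}

With {{maximumWeight}} plus a {{Weigher}}, eviction is driven by the estimated total bytes retained, so many small index files or a few very large ones both stay under the same cap regardless of the entry count.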





