spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-6695) Add an external iterator: a hadoop-like output collector
Date Tue, 07 Apr 2015 10:22:12 GMT

     [ https://issues.apache.org/jira/browse/SPARK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-6695.
------------------------------
    Resolution: Won't Fix

I suppose my problem with that is that it would duplicate Spark's spill mechanism, and
it leaves open the questions I raised above about cleanup. Spark functions aren't "supposed"
to need a huge amount of memory all at once, so I imagine the solution in every case is
to redesign the method.
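
A minimal sketch of that kind of redesign, assuming the expensive step can be phrased as
a lazy per-record transformation; the `expand` helper and the local-mode setup below are
illustrative, not from this thread:

{code:borderStyle=solid}
import org.apache.spark.{SparkConf, SparkContext}

object LazyPartitions {
  // Hypothetical per-record work; stands in for whatever would otherwise
  // have been accumulated into a big in-memory collection.
  def expand(record: String): Iterator[String] = record.split(",").iterator

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("lazy").setMaster("local[*]"))
    val rdd = sc.parallelize(Seq("a,b,c", "d,e,f"))
    // No intermediate buffer: flatMap keeps the iterator lazy, so elements
    // are produced only as the downstream consumer pulls them, and memory
    // use stays bounded regardless of partition size.
    val out = rdd.mapPartitions(it => it.flatMap(expand))
    out.collect().foreach(println)
    sc.stop()
  }
}
{code}

The point is that Spark can stream through a lazy iterator without ever holding a whole
partition's output at once, which is why buffering it all in a collector is usually avoidable.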

> Add an external iterator: a hadoop-like output collector
> --------------------------------------------------------
>
>                 Key: SPARK-6695
>                 URL: https://issues.apache.org/jira/browse/SPARK-6695
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: uncleGen
>
> In practical use, we sometimes need to build a very big iterator: too big in memory
> usage, or too long in array size. On the one hand, it consumes too much memory. On the
> other hand, a single `Array` cannot hold all the elements, because Java array indices
> are of type `int` (32 bits), which caps an array at about 2^31 (~2.1 billion) elements.
> So, IMHO, we could provide a `collector` with a fixed-size buffer (say, 100MB) that
> spills data to disk when full. The use case might look like:
> {code:borderStyle=solid}
> rdd.mapPartitions { it =>
>   ...
>   val collector = new ExternalCollector()
>   // buffered in memory, spilled to disk once the buffer fills up
>   collector.collect(a)
>   ...
>   // hands back every collected element, reading spilled data from disk
>   collector.iterator
> }
> {code}
> I have done some related work, and I would like your opinions. Thanks!
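
For context on the proposal above, here is a rough, hypothetical sketch of what such an
`ExternalCollector` might look like. No such class exists in Spark (the issue was resolved
as Won't Fix), and for simplicity this version spills by element count rather than the
proposed 100MB byte budget:

{code:borderStyle=solid}
import java.io.{File, FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream}
import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch, not a real Spark class. Elements must be serializable.
class ExternalCollector[T](spillThreshold: Int = 100000) {
  private val buffer = new ArrayBuffer[T]
  private val spills = new ArrayBuffer[(File, Int)] // (spill file, element count)

  // Add one element; spill the in-memory buffer to disk once it grows too big.
  def collect(elem: T): Unit = {
    buffer += elem
    if (buffer.size >= spillThreshold) spill()
  }

  private def spill(): Unit = {
    val file = File.createTempFile("external-collector-", ".spill")
    file.deleteOnExit() // cleanup: exactly the open question in the discussion
    val out = new ObjectOutputStream(new FileOutputStream(file))
    try buffer.foreach(e => out.writeObject(e.asInstanceOf[AnyRef]))
    finally out.close()
    spills += ((file, buffer.size))
    buffer.clear()
  }

  // Stream everything back: spilled files first, then the in-memory tail.
  def iterator: Iterator[T] = {
    val spilled = spills.iterator.flatMap { case (file, count) =>
      val in = new ObjectInputStream(new FileInputStream(file))
      Iterator.fill(count)(in.readObject().asInstanceOf[T]) ++ {
        in.close(); Iterator.empty // close the stream once fully consumed
      }
    }
    spilled ++ buffer.iterator
  }
}
{code}

Even this toy version surfaces both objections from the resolution: it re-implements
spilling that Spark already performs internally, and the temp files have no owner
responsible for cleaning them up beyond `deleteOnExit()`.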




