spark-issues mailing list archives

From "Marcelo Vanzin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-1476) 2GB limit in spark for blocks
Date Thu, 19 Feb 2015 22:48:12 GMT

    [ https://issues.apache.org/jira/browse/SPARK-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14328248#comment-14328248 ]

Marcelo Vanzin commented on SPARK-1476:
---------------------------------------

Hi [~irashid],

Approach sounds good. It would be nice to measure whether the optimization for smaller blocks
actually makes a difference; from what I can tell, supporting multiple ByteBuffer instances
just means having an array and picking the right ByteBuffer based on an offset, both of which
should be pretty cheap.
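
For illustration, a minimal sketch of that chunked lookup (the class name and the fixed chunk size are assumptions for the example, not Spark's actual code):

{code}
import java.nio.ByteBuffer

// Sketch only: back one logical block with an array of ByteBuffers and
// resolve an absolute offset to the right chunk with simple arithmetic.
class ChunkedByteBuffer(chunks: Array[ByteBuffer], chunkSize: Int) {

  // Total logical size across all chunks; may exceed 2GB.
  def size: Long = chunks.map(_.limit().toLong).sum

  // Absolute read: pick the chunk by division, index within it by the
  // remainder -- an array lookup plus two integer operations.
  def get(offset: Long): Byte = {
    val chunkIndex = (offset / chunkSize).toInt
    val indexInChunk = (offset % chunkSize).toInt
    chunks(chunkIndex).get(indexInChunk)
  }
}

// e.g. in the Scala REPL: two 1KB chunks behave like one 2KB block.
val a = ByteBuffer.allocate(1024)
val b = ByteBuffer.allocate(1024)
b.put(0, 42.toByte)                       // absolute put into the second chunk
val big = new ChunkedByteBuffer(Array(a, b), 1024)
assert(big.get(1024L) == 42.toByte)       // offset 1024 lands in chunk 1, index 0
{code}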

> 2GB limit in spark for blocks
> -----------------------------
>
>                 Key: SPARK-1476
>                 URL: https://issues.apache.org/jira/browse/SPARK-1476
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>         Environment: all
>            Reporter: Mridul Muralidharan
>            Assignee: Mridul Muralidharan
>            Priority: Critical
>         Attachments: 2g_fix_proposal.pdf
>
>
> The underlying abstraction for blocks in Spark is a ByteBuffer, which limits the size of a block to 2GB.
> This has implications not just for managed blocks in use, but also for shuffle blocks (memory-mapped blocks are limited to 2GB, even though the API accepts a long), for ser/deser via byte-array-backed output streams (SPARK-1391), etc.
> This is a severe limitation when using Spark on non-trivial datasets.
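
For reference, the memory-mapping limit mentioned in the description can be reproduced directly with NIO: FileChannel.map takes a long size, but any size above Integer.MAX_VALUE is rejected, so a block larger than ~2GB cannot be mapped as a single MappedByteBuffer. A minimal sketch (the file path is a placeholder, not Spark code):

{code}
import java.io.RandomAccessFile
import java.nio.channels.FileChannel

// "rw" creates the placeholder file if it does not exist.
val channel = new RandomAccessFile("/tmp/big-block", "rw").getChannel
try {
  // Request ~3GB: throws IllegalArgumentException because the size
  // exceeds Integer.MAX_VALUE, regardless of the actual file size.
  channel.map(FileChannel.MapMode.READ_ONLY, 0, 3L * 1024 * 1024 * 1024)
} catch {
  case e: IllegalArgumentException =>
    println(s"cannot map >2GB in one MappedByteBuffer: ${e.getMessage}")
} finally {
  channel.close()
}
{code}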



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


