spark-issues mailing list archives

From "Reynold Xin (JIRA)" <>
Subject [jira] [Updated] (SPARK-6728) Improve performance of py4j for large bytearray
Date Wed, 26 Aug 2015 00:32:45 GMT


Reynold Xin updated SPARK-6728:
    Target Version/s: 1.6.0  (was: 1.5.0)

> Improve performance of py4j for large bytearray
> -----------------------------------------------
>                 Key: SPARK-6728
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 1.3.0
>            Reporter: Davies Liu
>            Priority: Critical
> PySpark relies on py4j to transfer function arguments and return values between Python and the JVM,
> but it is very slow to pass a large bytearray (larger than 10 MB).
> In MLlib, it is possible to have a Vector with more than 100 MB of data, which can require a few
> GB of memory and may crash.
> The reason is that py4j uses a text protocol: it encodes the bytearray as base64 and
> performs multiple string concatenations.
> A binary protocol would help a lot. Created an issue for py4j:
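A rough illustration of the overhead described above (a hypothetical sketch, not code from the issue): base64-encoding a bytearray, as a text protocol must, inflates the payload by about a third before any string concatenation even begins.

```python
import base64

# Hypothetical illustration of the text-protocol cost the issue describes:
# a 10 MB bytearray (the size called out as slow) encoded as base64.
payload = bytearray(10 * 1024 * 1024)

# Text protocol: raw bytes must become a base64 text string before transfer.
encoded = base64.b64encode(payload)

# base64 expands data by a factor of ~4/3; repeated string concatenation on
# top of this is why a 100 MB vector can balloon into GBs of memory.
print(len(payload), len(encoded), round(len(encoded) / len(payload), 2))
```

A binary channel would send the 10 MB as-is, avoiding both the encoding pass and the intermediate string copies.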

This message was sent by Atlassian JIRA

