spark-issues mailing list archives

From "Yin Huai (JIRA)" <>
Subject [jira] [Updated] (SPARK-6728) Improve performance of py4j for large bytearray
Date Mon, 06 Apr 2015 22:57:12 GMT


Yin Huai updated SPARK-6728:
    Affects Version/s: 1.3.0

> Improve performance of py4j for large bytearray
> -----------------------------------------------
>                 Key: SPARK-6728
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 1.3.0
>            Reporter: Davies Liu
> PySpark relies on py4j to transfer function arguments and return values between Python
and the JVM, but passing a large bytearray (larger than 10 MB) is very slow.
> In MLlib, a Vector can contain more than 100 MB of data; transferring it may require a
few GB of memory and can crash the process.
> The reason is that py4j uses a text protocol: it encodes the bytearray as base64 and
performs multiple string concatenations.
> A binary protocol would help a lot; created an issue for py4j:
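The base64 overhead described above can be sketched in plain Python (a minimal illustration of the encoding cost, not py4j's actual protocol code):

```python
import base64

# Simulate a large binary payload (10 MB), as in the issue description.
payload = bytearray(10 * 1024 * 1024)

# A text protocol like py4j's must base64-encode binary arguments,
# inflating them by roughly 4/3 before any string concatenation happens.
encoded = base64.b64encode(bytes(payload))

print(len(payload))   # 10485760 bytes
print(len(encoded))   # about 33% larger
```

Each transfer therefore allocates both the encoded string and the intermediate concatenation buffers, which is why a 100 MB Vector can consume a few GB during transfer; a binary protocol avoids the encoding step entirely.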

This message was sent by Atlassian JIRA

