spark-issues mailing list archives

From "Matei Zaharia (JIRA)" <>
Subject [jira] [Commented] (SPARK-2048) Optimizations to CPU usage of external spilling code
Date Thu, 17 Jul 2014 05:53:04 GMT


Matei Zaharia commented on SPARK-2048:

I added one more issue to this BTW, about ExternalAppendOnlyMap creating a new update closure each time a key-value
pair is added. Not that horrible, but it does allocate memory.
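To make the closure issue concrete, here is a minimal sketch (hypothetical, simplified code, not Spark's actual classes) contrasting the two styles: allocating a fresh update function for every key-value pair versus defining one closure that reads the current pair through a mutable variable, the way Aggregator passes a single update function to AppendOnlyMap.

```scala
import scala.collection.mutable

object UpdateClosureSketch {
  // Style 1: a new function object is allocated on every iteration,
  // which adds young-generation GC pressure under heavy insert load.
  def allocatePerPair(pairs: Iterator[(String, Int)]): mutable.Map[String, Int] = {
    val m = mutable.Map.empty[String, Int]
    while (pairs.hasNext) {
      val kv = pairs.next()
      val update: Option[Int] => Int = {   // allocated per pair
        case Some(old) => old + kv._2
        case None      => kv._2
      }
      m(kv._1) = update(m.get(kv._1))
    }
    m
  }

  // Style 2: one closure for the whole loop; it reads the current
  // pair through the mutable var kv, so nothing is allocated per record.
  def reuseClosure(pairs: Iterator[(String, Int)]): mutable.Map[String, Int] = {
    val m = mutable.Map.empty[String, Int]
    var kv: (String, Int) = null
    val update: Option[Int] => Int = {     // allocated once
      case Some(old) => old + kv._2
      case None      => kv._2
    }
    while (pairs.hasNext) {
      kv = pairs.next()
      m(kv._1) = update(m.get(kv._1))
    }
    m
  }
}
```

Both produce the same merged map; the second simply hoists the allocation out of the loop.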

> Optimizations to CPU usage of external spilling code
> ----------------------------------------------------
>                 Key: SPARK-2048
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Matei Zaharia
> In the external spilling code in ExternalAppendOnlyMap and CoGroupedRDD, there are a
few opportunities for optimization:
> - There are lots of uses of pattern-matching on Tuple2 (e.g. val (k, v) = pair), which
we found to be much slower than accessing fields directly
> - Hash codes for each element are computed many times in StreamBuffer.minKeyHash, which
will be expensive for some data types
> - Uses of buffer.remove() may be expensive if there are lots of hash collisions (better
to swap the last element into that position)
> - More objects are allocated than is probably necessary, e.g. ArrayBuffers and pairs
> - Because ExternalAppendOnlyMap is only given one key-value pair at a time, it allocates
a new update function on each one, unlike the way we pass a single update function to AppendOnlyMap
in Aggregator
> These should help because situations where we're spilling are also ones where there is
presumably a lot of GC pressure in the new generation.
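Two of the ideas above can be sketched in a few lines of Scala (a hypothetical illustration; `swapRemove` and `sumDirect` are made-up helper names, not Spark code). The first replaces buffer.remove(i), which shifts every later element, with an O(1) swap of the last element into slot i; the second reads tuple fields directly instead of pattern-matching with val (k, v) = pair.

```scala
import scala.collection.mutable.ArrayBuffer

object SpillSketches {
  // O(1) removal when element order doesn't matter: overwrite slot i
  // with the last element, then drop the last slot.
  def swapRemove[T](buf: ArrayBuffer[T], i: Int): T = {
    val removed = buf(i)
    buf(i) = buf(buf.length - 1)    // move last element into position i
    buf.remove(buf.length - 1)      // removing the tail is constant time
    removed
  }

  // Direct field access (pair._2) avoids the tuple pattern-match path
  // and any per-element extraction overhead in a hot loop.
  def sumDirect(pairs: Array[(String, Int)]): Int = {
    var total = 0
    var i = 0
    while (i < pairs.length) {
      total += pairs(i)._2          // instead of: val (k, v) = pairs(i)
      i += 1
    }
    total
  }
}
```

Note that swapRemove changes element order, which is fine for a bag of hash-collision candidates but not for an ordered buffer.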

This message was sent by Atlassian JIRA
