spark-issues mailing list archives

From "Andrew Ash (JIRA)" <>
Subject [jira] [Updated] (SPARK-2048) Optimizations to CPU usage of external spilling code
Date Sun, 07 Sep 2014 08:35:29 GMT


Andrew Ash updated SPARK-2048:
    Fix Version/s: 1.1.0

> Optimizations to CPU usage of external spilling code
> ----------------------------------------------------
>                 Key: SPARK-2048
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Matei Zaharia
>             Fix For: 1.1.0
> In the external spilling code in ExternalAppendOnlyMap and CoGroupedRDD, there are a few opportunities for optimization:
> - There are lots of uses of pattern-matching on Tuple2 (e.g. val (k, v) = pair), which we found to be much slower than accessing the fields directly
> - Hash codes for each element are computed many times in StreamBuffer.minKeyHash, which will be expensive for some data types
> - Uses of buffer.remove() may be expensive if there are lots of hash collisions (it is better to swap the last element into the removed position)
> - More objects are allocated than is probably necessary, e.g. ArrayBuffers and pairs
> - Because ExternalAppendOnlyMap is only given one key-value pair at a time, it allocates a new update function for each one, unlike the single update function we pass to AppendOnlyMap in Aggregator
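Two of the points above can be sketched in plain Scala (a hypothetical illustration with made-up names, not the actual Spark code): accessing Tuple2 fields directly instead of destructuring, and removing from an ArrayBuffer by swapping in the last element so no trailing elements need to be shifted.

```scala
import scala.collection.mutable.ArrayBuffer

object SpillSketch {
  // Direct field access: pair._1 avoids the pattern-matching path that
  // `val (k, v) = pair` compiles to, which the ticket reports as much slower.
  def keyOf(pair: (String, Int)): String = pair._1

  // O(1) removal: overwrite slot i with the last element, then drop the
  // final slot, instead of ArrayBuffer.remove(i) shifting every element
  // after position i.
  def swapRemove[T](buf: ArrayBuffer[T], i: Int): T = {
    val removed = buf(i)
    buf(i) = buf(buf.length - 1)
    buf.remove(buf.length - 1) // removing the last slot shifts nothing
    removed
  }
}
```

Note that swap-removal does not preserve element order, which is fine for a buffer of hash-collision candidates where order carries no meaning.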
> These should help because situations where we're spilling are also ones where there is presumably a lot of GC pressure in the new generation.
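The hash-caching and closure-reuse points can be sketched the same way (hypothetical names; the real StreamBuffer and AppendOnlyMap internals differ):

```scala
// Hash caching: compute the element's hash once when it enters the buffer
// rather than recomputing key.hashCode on every minKeyHash-style comparison,
// which is costly for keys with an expensive hashCode.
final class HashedKey[K](val key: K) {
  val cachedHash: Int = if (key == null) 0 else key.hashCode
}

// Closure reuse: allocate the update function once per aggregation instead
// of once per incoming key-value pair.
class Counter[K] {
  private val map = scala.collection.mutable.HashMap.empty[K, Long]
  // Single closure, created once and reused for every insert.
  private val update: (K, Long) => Unit =
    (k, delta) => map.put(k, map.getOrElse(k, 0L) + delta)
  def insert(k: K): Unit = update(k, 1L)
  def count(k: K): Long = map.getOrElse(k, 0L)
}
```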

This message was sent by Atlassian JIRA
