crunch-dev mailing list archives

From "Gabriel Reid (JIRA)" <>
Subject [jira] [Commented] (CRUNCH-351) Improve performance of Shard#shard on large records
Date Sun, 23 Feb 2014 11:41:19 GMT


Gabriel Reid commented on CRUNCH-351:

I think a constant random seed is effectively the same as using an increasing key and passing records
to reducers round-robin. The general drawback is that all mappers will produce the same
sequence. For this particular problem, I think using the round-robin approach is OK and simpler.

That makes a lot of sense. What I actually had in mind about switching to int was to use a
much smaller range of keys, doing something like
count = (count + 1) % (numPartitions * 3);

with the idea of having a really small number of different keys so that sorting the keys within
each partition would require almost no processing. On the other hand, that idea is likely
such a micro-optimization that it wouldn't make any noticeable difference, so what you've
got here looks good to me.
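For illustration, here is a minimal sketch of the round-robin key idea being discussed. This is not the actual CRUNCH-351 patch; the class name `RoundRobinKeyer` and the `* 3` wrap-around factor are assumptions taken from the snippet above. Each mapper assigns records an incrementing key modulo a small multiple of the partition count, so the shuffle only ever sees a handful of distinct small int keys and sorting within each partition is cheap:

```java
// Hypothetical sketch of round-robin shard-key assignment (not the actual
// Crunch patch). Keys cycle through numPartitions * 3 small ints, spreading
// records evenly across reducers while keeping the sort trivial.
public class RoundRobinKeyer {
    private final int range;
    private int count = -1;

    public RoundRobinKeyer(int numPartitions) {
        this.range = numPartitions * 3;
    }

    /** Returns the small int shard key for the next record. */
    public int nextKey() {
        count = (count + 1) % range;
        return count;
    }

    public static void main(String[] args) {
        // With 4 partitions, keys cycle through 0..11 and then wrap around.
        RoundRobinKeyer keyer = new RoundRobinKeyer(4);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 14; i++) {
            sb.append(keyer.nextKey()).append(' ');
        }
        System.out.println(sb.toString().trim());
        // prints "0 1 2 3 4 5 6 7 8 9 10 11 0 1"
    }
}
```

The contrast with the original Shard implementation is that the key sorted during the shuffle is a tiny int rather than (a function of) the full record, so large records no longer slow down the sort.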

> Improve performance of Shard#shard on large records
> ---------------------------------------------------
>                 Key: CRUNCH-351
>                 URL:
>             Project: Crunch
>          Issue Type: Improvement
>            Reporter: Chao Shi
>            Assignee: Chao Shi
>         Attachments: crunch-351-v2.patch, crunch-351.patch
>     This avoids sorting on the input data, which may be large and make the
>     shuffle phase slow. The improvement is to sort on pseudo-random numbers instead.

This message was sent by Atlassian JIRA
