hadoop-common-user mailing list archives

From Robert Evans <ev...@yahoo-inc.com>
Subject Re: reducers and data locality
Date Fri, 27 Apr 2012 15:22:36 GMT
Also, generating random keys/partitions can be problematic, although the problems are rare.
A mapper can be restarted after it finishes successfully if the machine it was on goes down
or has other problems, so that the reducers are not able to fetch that mapper's output data.
If this happens while some of the reducers have already fetched it, but not all of them,
and the rerun mapper partitions things differently, some records may show up twice in your
output and others not at all.

If you use something like random partitioning, make sure you use a constant seed so that
the assignment is deterministic.
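A minimal sketch of that idea (plain Java, no Hadoop dependency; shardFor is a hypothetical helper standing in for however the mapper picks its shard key): seed the RNG from the record's own content rather than the clock, so a rerun of the same mapper assigns every record to the same shard.

```java
import java.util.Random;

public class DeterministicShardKey {
    // Hypothetical helper: pick a shard for a record. Seeding the RNG
    // from the record's content (not the clock or a per-task counter)
    // makes the choice repeatable across mapper re-runs.
    static int shardFor(String record, int numShards) {
        Random r = new Random(record.hashCode());
        return r.nextInt(numShards);
    }

    public static void main(String[] args) {
        // The same record must land in the same shard on every run.
        int first = shardFor("2012-04-27 some log line", 16);
        int second = shardFor("2012-04-27 some log line", 16);
        if (first != second) throw new AssertionError("non-deterministic");
        System.out.println("shard=" + first);
    }
}
```

The distribution is still effectively uniform across shards, but a restarted mapper can no longer send an already-fetched record to a different reducer.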

--Bobby Evans

On 4/27/12 4:24 AM, "Bejoy KS" <bejoy.hadoop@gmail.com> wrote:

Hi Mete

A custom Partitioner class can control the flow of keys to the desired reducer. It gives you
more control over which keys go to which reducer.
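As a sketch of the routing logic such a class would hold (self-contained plain Java for illustration; in a real job this method would live in a subclass of org.apache.hadoop.mapreduce.Partitioner, registered with job.setPartitionerClass, and the "<shardNumber>:<payload>" key shape is a hypothetical choice, not something the API requires):

```java
public class PrefixRouting {
    // The decision a custom Partitioner's getPartition could make,
    // assuming the mapper emits keys of the (hypothetical) form
    // "<shardNumber>:<payload>". All keys sharing a prefix reach
    // the same reducer.
    static int getPartition(String key, int numReduceTasks) {
        int colon = key.indexOf(':');
        int shard = Integer.parseInt(key.substring(0, colon));
        // Clamp into the valid partition range [0, numReduceTasks).
        return Math.floorMod(shard, numReduceTasks);
    }

    public static void main(String[] args) {
        System.out.println(getPartition("3:some record", 8));     // prints 3
        System.out.println(getPartition("11:another record", 8)); // 11 mod 8 = 3
    }
}
```

A prefix computed from, say, a hostname would work the same way, as long as the prefix-to-partition mapping stays deterministic for the reasons Bobby describes.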

Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: mete <efkarr@gmail.com>
Date: Fri, 27 Apr 2012 09:19:21
To: <common-user@hadoop.apache.org>
Reply-To: common-user@hadoop.apache.org
Subject: reducers and data locality

Hello folks,

I have a lot of input splits (10k-50k, 128 MB blocks) which contain text
files. I need to process those line by line, then copy the results into
roughly equal-sized "shards".

So I generate a random key (from the range [0:numberOfShards]) which is
used to route the map output to different reducers, and the shard sizes come
out more or less equal.

I know that this is not really efficient, and I was wondering if I could
somehow control how keys are routed.
For example, could I generate the random keys with hostname prefixes and
control which keys are sent to each reducer? What do you think?

Kind regards
