hadoop-common-user mailing list archives

From "David J. Biesack" <David.Bies...@sas.com>
Subject Re: Specifying Input condition to split file or specifying map tasks to work on assigned files individually
Date Tue, 24 Jul 2007 12:24:44 GMT
> Date: Mon, 23 Jul 2007 21:42:34 -0700 (PDT)
> From: novice user <pallavip.05@gmail.com>
> Hi,
>  I am exploring hadoop and using it for one of my machine learning
> application.
>  I have a problem in which I need to route a particular input to each map
> task separately. For example, I have list of <key, value >pairs sorted on
> some condition in an input file. I want to split the input file on some
> condition (for example, all key,value pairs which have the same key should
> be given as input to a particular map task). I want to do this, so that all
> the necessary extra information related to that input can be loaded into
> memory once in that map task so that my map procedure will be faster.

This sounds like you can put your map processing into your Reduce operation,
since Hadoop already passes all values with the same key to your reducer.
Thus, an Identity map may suffice. (I've not tried running a job without
specifying a Map; maybe Hadoop works without one, in which case you do
not even need the Map.) In the case where you want to partition on some
other condition, can you not simply do that by mapping your keys onto
the different enumerated values (perhaps pushing the old key into the
output value)? If I'm off the mark, sorry for misinterpreting your problem statement.
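A rough way to picture this grouping (sketched in Python rather than Hadoop's Java API, with made-up records): the shuffle phase sorts map output by key, so the reducer receives every value for a given key in a single call and can load any key-specific side information once before processing them.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical sample records; in Hadoop these would be the job's <key, value> input pairs.
records = [("apple", 3), ("banana", 1), ("apple", 5), ("banana", 2), ("apple", 1)]

def identity_map(key, value):
    # Identity map: emit the input pair unchanged.
    yield key, value

def reduce_fn(key, values):
    # All values for one key arrive together, so per-key state
    # (extra lookup data, a model, etc.) could be loaded once here.
    yield key, sum(values)

# Simulate the shuffle: sort mapped output so equal keys are adjacent,
# then group and hand each (key, values) batch to the reducer.
mapped = [pair for k, v in records for pair in identity_map(k, v)]
mapped.sort(key=itemgetter(0))
results = dict(
    out
    for key, group in groupby(mapped, key=itemgetter(0))
    for out in reduce_fn(key, (v for _, v in group))
)
# results == {"apple": 9, "banana": 3}
```

In a real Hadoop job the same shape falls out of using an identity mapper and doing the per-key work in the reducer; the framework performs the sort-and-group step for you.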

David J. Biesack     SAS Institute Inc.
(919) 531-7771       SAS Campus Drive
http://www.sas.com   Cary, NC 27513
