hadoop-common-user mailing list archives

From Fernando Padilla <f...@alum.mit.edu>
Subject Re: key/value after reduce
Date Tue, 12 Feb 2008 21:33:56 GMT
Well.. I'm no Hadoop expert, but let me brainstorm for a bit:

Aren't there OutputFormat classes that take a key/value pair as input 
and then get to decide how/what to actually output?  That's how you can 
direct the output straight to HBase, etc.

You could create (and Hadoop should probably include by default) a 
ValueOutputEncoder whose only job is to output the values, ignoring the 
key part.  That way you get what you want: output that isn't forced 
into key/value pairs.
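
For concreteness, here's a minimal sketch of that idea against the 
org.apache.hadoop.mapred API (exact method names vary a bit between 
Hadoop releases).  The name ValueOutputEncoder and the choice to dump 
raw value bytes are my own invention, not something Hadoop ships with:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Progressable;

// An OutputFormat that writes only the values, ignoring the keys.
public class ValueOutputEncoder<K> extends FileOutputFormat<K, BytesWritable> {

  public RecordWriter<K, BytesWritable> getRecordWriter(
      FileSystem ignored, JobConf job, String name, Progressable progress)
      throws IOException {
    Path file = FileOutputFormat.getTaskOutputPath(job, name);
    FileSystem fs = file.getFileSystem(job);
    final FSDataOutputStream out = fs.create(file, progress);

    return new RecordWriter<K, BytesWritable>() {
      public void write(K key, BytesWritable value) throws IOException {
        // Drop the key on the floor; emit only the raw value bytes.
        out.write(value.getBytes(), 0, value.getLength());
      }
      public void close(Reporter reporter) throws IOException {
        out.close();
      }
    };
  }
}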

You could even have an outputter that takes an InputStream as the value 
part, so that it could stream the output.. possibly?

How far off is this idea?

There is also nothing holding you back from having your Reducer write 
directly to another data store.  Then the "output" of the reduce job 
would be empty, or, for debugging, maybe just the content-length of 
what it wrote elsewhere.. :)
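
Something along these lines, using the old mapred Reducer interface. 
The ExternalStore client here is completely made up; substitute an 
HBase table, a database connection, whatever:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class DirectStoreReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {

  // Hypothetical client for whatever external system you write to.
  interface ExternalStore {
    void put(String key, String value) throws IOException;
  }

  private final ExternalStore store = new ExternalStore() {
    public void put(String key, String value) {
      // Stand-in: a real implementation would talk to HBase, a DB, etc.
      System.out.println(key + " -> " + value);
    }
  };

  public void reduce(Text key, Iterator<Text> values,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    while (values.hasNext()) {
      store.put(key.toString(), values.next().toString());
    }
    // Never call output.collect(), so the job's HDFS output stays empty.
  }
}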

But keep in mind, I think the BIG idea behind Hadoop is divide and 
conquer: arbitrarily cut up the input, transform it once, sort, 
transform it once more, output.  The idea is that this should support N 
different output files.  I am guessing the key/value pair arrangement 
gives those output files context and meaning; without it you couldn't 
conceptually put them back together into a coherent collection of data.

I just remembered: you can force the job to use only one reduce task, 
and thus get only one output file, though that won't scale perfectly.. 
:)  For your purposes, you could have M map tasks, one reduce task, and 
a ValueOutputEncoder that ignores the key part and only spits out a 
binary file.. :)
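
Put together, the driver would look something like this 
(setNumReduceTasks() is a real JobConf method; ValueOutputEncoder is 
the made-up class from above):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SingleReducerDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SingleReducerDriver.class);
    conf.setJobName("value-only-output");

    conf.setNumReduceTasks(1);  // one reduce task => exactly one output file
    conf.setOutputFormat(ValueOutputEncoder.class);  // sketch from above
    // ... plus your usual mapper, reducer, and input/output type settings

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}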

Yuri Pradkin wrote:
> But OTOH, if I wanted my reducer to write binary output, I'd be 
> screwed, especially so in the streaming world (where I'd like to stay 
> for the moment).
> 
> Actually, I don't think I understand your point: if the reducer's 
> output is in a key/value format, you still can run another map over it 
> or another reduce, can't you?  If the output isn't, you can't; it's up 
> to the user who coded up the Reducer.  What am I missing?
> 
> Thanks,
> 
>   -Yuri
> 
> On Tue, 12 Feb 2008, Miles Osborne wrote:
>> You may well have another Map operation operate over the Reducer
>> output, in which case you'd want key-value pairs.
>>
>> Miles
>>
>> On 12/02/2008, Yuri Pradkin <yuri@isi.edu> wrote:
>>> Hi,
>>>
>>> I'm relatively new to Hadoop and I have what I hope is a simple
>>> question:
>>>
>>> I don't understand why the key/value assumption is preserved AFTER
>>> the reduce operation, in other words why the output of a reducer
>>> is expected as <key,value> instead of arbitrary, possibly binary
>>> bytes? Why can't OutputCollector just give those raw bytes to the
>>> RecordWriter and have it make sense of them as it pleases, or just
>>> dump them to a file?
>>>
>>> This seems like an unnecessary restriction to me, at least at the
>>> first glance.
>>>
>>> Thanks,
>>>
>>>   -Yuri
> 
> 
