hadoop-common-user mailing list archives

From: pols cut <pols_...@yahoo.co.in>
Subject: Re: writable class to be used to read floating point values from input?
Date: Mon, 27 Oct 2008 01:23:47 GMT
Thanks.

I converted the Text --> String --> Float.

I am trying to calculate the average of a very large set of numbers. You are right... I plan
to use a dummy key (it's not null, as I said before) as input to the reduce. Then, in the reduce,
where the values are grouped by key, I will have a single record <key, <n1, n2, n3, ...>> which
I will use to calculate the average.
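
A minimal sketch of that dummy-key layout, assuming the old org.apache.hadoop.mapred API (the class names and the literal key below are only illustrative, not from this thread):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Sketch only: every value is emitted under one dummy key, so a single
// reduce call sees all the numbers and can compute the average.
public class AverageWithDummyKey {

  public static class AvgMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, FloatWritable> {
    private static final Text DUMMY = new Text("all");   // illustrative key

    public void map(LongWritable offset, Text line,
                    OutputCollector<Text, FloatWritable> out,
                    Reporter reporter) throws IOException {
      // One float per input line; the byte-offset key is ignored.
      out.collect(DUMMY,
                  new FloatWritable(Float.parseFloat(line.toString().trim())));
    }
  }

  public static class AvgReducer extends MapReduceBase
      implements Reducer<Text, FloatWritable, Text, FloatWritable> {
    public void reduce(Text key, Iterator<FloatWritable> values,
                       OutputCollector<Text, FloatWritable> out,
                       Reporter reporter) throws IOException {
      double sum = 0;
      long count = 0;
      while (values.hasNext()) {
        sum += values.next().get();
        count++;
      }
      out.collect(key, new FloatWritable((float) (sum / count)));
    }
  }
}

Note that with a single key every value funnels through one reduce call, so for a very large input a combiner that emits partial <sum, count> pairs would probably be worth adding.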

Regards,

Pavan

________________________________
From: Owen O'Malley <omalley@apache.org>
To: core-user@hadoop.apache.org
Sent: Sunday, 26 October, 2008 1:24:43 AM
Subject: Re: writable class to be used to read floating point values from input?


On Oct 25, 2008, at 8:32 PM, pols cut wrote:

> I am trying to write a map-reduce function which takes the
> following types of <key, value> pairs:
>
> Map function -- should read floating point values (I don't really
> care about the key);
> it should output <null, FloatWritable>

If the input is stored in a text file, using TextInputFormat is right.  
Your map inputs will be:

LongWritable, Text

Just use the Text and convert it to a Double.
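
A minimal map sketch along those lines, assuming the old org.apache.hadoop.mapred API and emitting the parsed float as the output key, matching the layout suggested below (the class name is illustrative):

import java.io.IOException;

import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Sketch only: reads one float per line from TextInputFormat, ignores the
// byte-offset key, and emits the parsed value as the output key.
public class FloatMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, FloatWritable, NullWritable> {
  public void map(LongWritable offset, Text line,
                  OutputCollector<FloatWritable, NullWritable> out,
                  Reporter reporter) throws IOException {
    out.collect(new FloatWritable(Float.parseFloat(line.toString().trim())),
                NullWritable.get());
  }
}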

> reduce -- input:  <null, FloatWritable>
>           output: <null, FloatWritable>

This doesn't make any sense. How should the input to the reduce be  
sorted? By the float? In that case, it would be:

FloatWritable, NullWritable

You will get one call to the reduce for each distinct float value the  
maps generate. The reduce can iterate through the NullWritables to see  
how many times that key was generated.
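
And a matching reduce sketch (again only illustrative) that counts how many times each distinct float appeared:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Sketch only: called once per distinct float; iterating the NullWritables
// counts how many times the maps produced that value.
public class FloatCountReducer extends MapReduceBase
    implements Reducer<FloatWritable, NullWritable, FloatWritable, IntWritable> {
  public void reduce(FloatWritable value, Iterator<NullWritable> occurrences,
                     OutputCollector<FloatWritable, IntWritable> out,
                     Reporter reporter) throws IOException {
    int count = 0;
    while (occurrences.hasNext()) {
      occurrences.next();
      count++;
    }
    out.collect(value, new IntWritable(count));
  }
}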

-- Owen


