hadoop-general mailing list archives

From Abhishek Verma <vermaabhish...@gmail.com>
Subject Re: SCALING GENETIC ALGORITHMS USING MAPREDUCE
Date Tue, 19 Jan 2010 23:11:45 GMT
Hi Alex,

On Sun, Jan 17, 2010 at 2:57 AM, Alex Baranov <alex.baranov.v@gmail.com> wrote:

> Hello,
>
> I've read the paper and here is my question:
>
> Why not just produce pairs (random int, individual with fitness) from Map
> function? Thus individuals will be shuffled randomly after Map phase and
> there won't be the need to override the partitioner.
>
That is a neat trick and would work.
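
A minimal, self-contained sketch of the idea (plain Java, no Hadoop dependencies; the class and method names are illustrative, and the partition function mirrors what Hadoop's default hash partitioner computes):

```java
import java.util.*;

// Sketch of the random-shuffle trick: instead of overriding the partitioner,
// the map phase tags each individual with a random key, so the default hash
// partitioner spreads individuals across reducers uniformly at random.
public class RandomShuffleSketch {

    // Simulated map phase: emit (random int, individual) pairs.
    static List<Map.Entry<Integer, String>> mapPhase(List<String> individuals,
                                                     Random rng) {
        List<Map.Entry<Integer, String>> out = new ArrayList<>();
        for (String ind : individuals) {
            out.add(new AbstractMap.SimpleEntry<>(rng.nextInt(Integer.MAX_VALUE), ind));
        }
        return out;
    }

    // What Hadoop's default hash partitioner does with an int key:
    // (hashCode & Integer.MAX_VALUE) % numReduceTasks.
    static int partition(int key, int numReducers) {
        return (Integer.hashCode(key) & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        List<String> population = Arrays.asList("ind0", "ind1", "ind2",
                                                "ind3", "ind4", "ind5");
        int numReducers = 3;
        int[] counts = new int[numReducers];
        for (Map.Entry<Integer, String> kv : mapPhase(population, new Random(42))) {
            counts[partition(kv.getKey(), numReducers)]++;
        }
        int total = 0;
        for (int c : counts) total += c;
        // Every individual reaches exactly one reducer; which one is random.
        System.out.println(total == population.size());
    }
}
```

Because the keys are fresh random draws each generation, no reducer sees a biased sample of the population, which is exactly the property a custom random partitioner would provide.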

> P.S. Mentioning GFS in "An accompanying distributed file system like GFS [8]
> makes the data management scalable and fault tolerant." can confuse some
> readers, because the paper is based on the Hadoop family (and the HDFS name
> is used later).

I tend to use MapReduce and GFS throughout the paper and then mention
Hadoop and HDFS in the implementation section as a concrete example. I
apologize if the paper didn't make that clear.

-- 
-Abhishek.
http://verma7.com
