hadoop-general mailing list archives

From Alex Baranov <alex.barano...@gmail.com>
Date Sun, 17 Jan 2010 08:57:34 GMT

I've read the paper and here is my question:

Why not just emit pairs (random int, individual with fitness) from the Map
function? That way the individuals will be shuffled randomly after the Map
phase and there will be no need to override the partitioner.
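The effect of this random-key trick can be sketched outside Hadoop in plain Java (the class and method names below are hypothetical; in a real job the mapper would emit an IntWritable key via context.write, and the default HashPartitioner would route each key to a reducer by hashing it modulo the number of reducers):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomShuffleSketch {
    // Simulate emitting each individual under a random int key and letting
    // the default hash-style partitioning (key % numReducers) place it.
    static Map<Integer, List<String>> shuffle(List<String> individuals,
                                              int numReducers, long seed) {
        Random rng = new Random(seed);
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String individual : individuals) {
            // "individual" is assumed to carry the genome plus its fitness.
            int randomKey = rng.nextInt(Integer.MAX_VALUE); // non-negative
            int partition = randomKey % numReducers;
            partitions.computeIfAbsent(partition, p -> new ArrayList<>())
                      .add(individual);
        }
        return partitions;
    }

    public static void main(String[] args) {
        List<String> pop = Arrays.asList("ind1:0.8", "ind2:0.3",
                                         "ind3:0.9", "ind4:0.5");
        Map<Integer, List<String>> parts = shuffle(pop, 2, 42L);
        parts.forEach((p, inds) ->
            System.out.println("reducer " + p + " -> " + inds));
    }
}
```

Since the keys are random, each reducer receives an approximately uniform random sample of the population, which is exactly what a custom partitioner would otherwise have to arrange.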


P.S. Mentioning GFS in "An accompanying distributed file system like GFS [8]
makes the data management scalable and fault tolerant." can confuse some
readers, because the paper is based on the Hadoop family (and hence on HDFS).

On Thu, Jan 14, 2010 at 1:36 AM, Alberto Luengo Cabanillas <
cabiwan@gmail.com> wrote:

> Hi everyone! For the last six months my work with Hadoop has been focused
> on developing a stable MRPGA. The last paper I read ("Scaling Genetic
> Algorithms
> Using MapReduce") was a fantastic piece of work and gave me a bunch of
> ideas, but I have some questions about this paper that I think may be
> useful for the community:
> Nowhere does the paper talk about an elitism rate or a mutation rate; it
> only talks about selection and crossover. In fact, this part (page 3 and
> onwards) describes an INDIVIDUALREPRESENTATION(key) function, which I
> suppose is used to represent the key part of the pair (i.e., if it is
> Text, its representation is a String). There are also TOURN(tournArray)
> (?) and CROSSOVER(crossArray), which I think is related to mutation.
> How is the mutation part supposed to be implemented in the process?
> Have
> you considered some kind of elitism rate for choosing the population?
> Thanks a lot in advance.
> --
> Alberto
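Regarding the mutation question in the quoted message: the paper's pseudocode does not spell this step out, but a common approach is to apply per-gene mutation right after crossover, inside the reducer. A minimal sketch, assuming a bit-string genome (the class name and signature are hypothetical, not from the paper):

```java
import java.util.Random;

public class MutationSketch {
    // Bit-flip mutation applied to an offspring after crossover.
    // Each gene flips independently with probability mutationRate.
    // Elitism, if desired, would simply copy the best individuals
    // through to the next generation unchanged before this step.
    static boolean[] mutate(boolean[] genome, double mutationRate, Random rng) {
        boolean[] child = genome.clone();
        for (int i = 0; i < child.length; i++) {
            if (rng.nextDouble() < mutationRate) {
                child[i] = !child[i]; // flip the bit
            }
        }
        return child;
    }
}
```

With mutationRate = 0.0 the genome passes through untouched, and with mutationRate = 1.0 every bit flips, which makes the operator easy to unit-test.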
