hadoop-mapreduce-user mailing list archives

From sharat attupurath <shara...@hotmail.com>
Subject RE: Ant Colony Optimization for Travelling Salesman Problem in Hadoop
Date Fri, 04 May 2012 16:54:38 GMT


Thanks a lot for the quick reply, sir! We are new to Apache Hadoop and haven't fully understood
it yet. Could you please elaborate on how we can have multiple stages of MapReduce
jobs for combining the trails, as you mentioned?

We have been trying to find out how to write a custom splitter, and almost all online resources
say to subclass FileInputFormat and write only a custom RecordReader. Will it be possible
to generate our splits that way?
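To make the multi-stage idea concrete, here is a rough driver sketch, assuming the new (org.apache.hadoop.mapreduce) API: each pass is a full MapReduce job, and the reducer's output (the combined trails) is handed to the next pass through a configuration property. The AntMapper and TrailReducer classes and the "aco.trails.path" key are hypothetical names, not anything from this thread; this is a sketch under those assumptions, not a definitive implementation.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver: chains one MapReduce job per ACO pass.
// AntMapper and TrailReducer are assumed, user-written classes.
public class AcoDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path problem = new Path(args[0]);   // TSPLIB input, identical every pass
        int passes = Integer.parseInt(args[1]);
        String trailsDir = null;            // no trails exist before the first pass

        for (int i = 0; i < passes; i++) {
            if (trailsDir != null) {
                // Mappers read the previous pass's combined trails from here.
                conf.set("aco.trails.path", trailsDir);
            }
            Job job = Job.getInstance(conf, "aco-pass-" + i);
            job.setJarByClass(AcoDriver.class);
            job.setMapperClass(AntMapper.class);     // each mapper acts as one ant
            job.setReducerClass(TrailReducer.class); // combines trails, emits best tour
            FileInputFormat.addInputPath(job, problem);
            Path out = new Path(args[2] + "/pass-" + i);
            FileOutputFormat.setOutputPath(job, out);
            if (!job.waitForCompletion(true)) System.exit(1);
            trailsDir = out.toString();     // feed this pass's trails to the next
        }
    }
}
```

Each pass reads the same problem file, so convergence comes entirely from the trail data threaded between jobs.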



Date: Fri, 4 May 2012 09:22:51 -0700
Subject: Re: Ant Colony Optimization for Travelling Salesman Problem in Hadoop
From: lordjoe2000@gmail.com
To: mapreduce-user@hadoop.apache.org

On Fri, May 4, 2012 at 9:06 AM, sharat attupurath <sharat_a@hotmail.com> wrote:


We are trying to parallelize the ant colony optimization
algorithm for TSP over Hadoop and are facing some issues. We are using
TSPLIB as input files. The input is a text file containing Euclidean
coordinates of the cities: the first column is the city number and the next two
columns contain the x and y coordinates respectively.
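For reference, parsing one such line and computing the edge weight is straightforward. A minimal sketch in plain Java (no Hadoop dependencies; the class and method names are our own, not from any TSPLIB tooling):

```java
// Minimal sketch: parse a "cityNumber x y" line and compute the
// Euclidean distance between two cities. Names are illustrative.
public class TsplibDistance {
    // Parse "cityNumber x y" into {x, y}; the city number is ignored here.
    static double[] parseCity(String line) {
        String[] parts = line.trim().split("\\s+");
        return new double[] { Double.parseDouble(parts[1]),
                              Double.parseDouble(parts[2]) };
    }

    // Standard Euclidean distance between two coordinate pairs.
    static double distance(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        double[] c1 = parseCity("1 0.0 0.0");
        double[] c2 = parseCity("2 3.0 4.0");
        System.out.println(distance(c1, c2)); // 5.0
    }
}
```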

What we intend to do is take input from this single file, send copies of it
to multiple mappers (each mapper acts like an ant in the algorithm),
have each mapper work on its input to find its own TSP solution,
and finally have the reducer output the smallest tour found by the
mappers. We hope we are on the right track. Here are the issues:

1) Since the input file is small, we need to force Hadoop to fire up
multiple map tasks by replicating the input. How can we make an
InputSplit of the whole file and replicate it so that the input can be
sent to multiple mappers?
Write a custom splitter sending the same data to all mappers - the only critical criterion
is the number of "Ants"
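One way to realize that, assuming the new (org.apache.hadoop.mapreduce) API: subclass FileInputFormat, override getSplits to return N identical copies of the whole-file split, and mark the file non-splittable. The class name ReplicatingInputFormat and the "aco.num.ants" key are our own illustrative choices; a rough sketch, not a definitive implementation:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Hypothetical InputFormat: hands the whole (small) input file to each
// of N map tasks, where N is the number of ants.
public class ReplicatingInputFormat extends FileInputFormat<LongWritable, Text> {
    public static final String NUM_ANTS = "aco.num.ants"; // assumed config key

    @Override
    public List<InputSplit> getSplits(JobContext context) throws IOException {
        // The file is small, so the parent produces a single split; copy it
        // once per ant so the framework launches one mapper per copy.
        FileSplit whole = (FileSplit) super.getSplits(context).get(0);
        int numAnts = context.getConfiguration().getInt(NUM_ANTS, 10);
        List<InputSplit> splits = new ArrayList<>();
        for (int i = 0; i < numAnts; i++) {
            splits.add(new FileSplit(whole.getPath(), whole.getStart(),
                                     whole.getLength(), whole.getLocations()));
        }
        return splits;
    }

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext ctx) {
        return new LineRecordReader(); // default line-by-line reader suffices
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // keep the whole file in each split
    }
}
```

So the resources you found are right that you usually only write a RecordReader, but here the interesting override is getSplits itself.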

2) The algorithm uses a shared
pheromone array, and each mapper needs to read and write data from it.
How can we share the pheromone data across the mappers?
You cannot share data across mappers and should not attempt to do so. Better to use the reducer(s)
to combine the first-pass trails, then
pass the combined trails to another MapReduce job along with the original problem and
the current pheromone trails.
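The combine step the reducer would perform can be sketched in plain Java. This uses the classic ACO update, tau = (1 - rho) * tau + sum of deposits, where ant k deposits Q / tourLength_k on every edge of its tour; the class name and the rho/Q values are illustrative assumptions, not anything prescribed in this thread:

```java
import java.util.List;

// Sketch of combining per-ant trails into the next pass's pheromone
// matrix: evaporate the old trails, then add each ant's deposits.
public class TrailCombiner {
    static final double RHO = 0.5;  // evaporation rate (assumed value)
    static final double Q = 100.0;  // deposit constant (assumed value)

    // tau: pheromone matrix from the previous pass; tours: each ant's
    // city sequence; lengths: the matching tour lengths.
    static double[][] combine(double[][] tau, List<int[]> tours, List<Double> lengths) {
        int n = tau.length;
        double[][] next = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                next[i][j] = (1 - RHO) * tau[i][j]; // evaporation
        for (int k = 0; k < tours.size(); k++) {
            int[] tour = tours.get(k);
            double deposit = Q / lengths.get(k); // shorter tours deposit more
            for (int i = 0; i < tour.length; i++) {
                int a = tour[i], b = tour[(i + 1) % tour.length]; // wrap closes the tour
                next[a][b] += deposit;
                next[b][a] += deposit; // symmetric TSP
            }
        }
        return next;
    }
}
```

In the chained-job setup, the reducer would write this matrix out, and the next pass's mappers would load it before constructing their tours.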

Hope the questions are clear enough. Any help would be greatly appreciated. 

Thank you



Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com
