Hadoop distributes the data across HDFS, and MapReduce tasks then work on it in parallel. The framework keeps track of which split goes to which data node; each task runs in its own JVM on its data node. That way a very large data set can be processed across all the data nodes, with each task handling its own piece of the work.
Thanks & Regards,
Hi,

Assuming you have to compute these values for every RGB pixel, why couldn't you compute all of them at the same time on the same node? Hadoop lets you distribute your computation, but that doesn't mean each node has to compute only part of the equations. Each node can compute all the equations, but on a 'small' part of the data. That's Hadoop's strategy: sequential reads and data locality will improve your performance.

Regards

Bertrand

--
On Mon, Sep 3, 2012 at 6:35 PM, mallik arjun <firstname.lastname@example.org> wrote:
On Mon, Sep 3, 2012 at 10:01 PM, Bertrand Dechoux <email@example.com> wrote:
You can check the value of "map.input.file" in order to apply a different logic for each type of files (in the mapper).
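A minimal sketch of that per-file dispatch: inside the mapper you would read "map.input.file" from the job configuration and branch on it. The snippet below is plain Java (no cluster needed) and the filter names and path patterns are made up for illustration, not from the original thread:

```java
// Sketch of choosing a filter based on the input file path, the way a
// mapper would after reading the "map.input.file" property from its
// configuration. Filter names and patterns here are hypothetical.
public class FilterDispatch {

    // Decide which filter to run for a given input file path.
    public static String chooseFilter(String inputFile) {
        if (inputFile.contains("sobel")) {
            return "sobel";          // e.g. edge-detection inputs
        } else if (inputFile.contains("median")) {
            return "median";         // e.g. noise-reduction inputs
        }
        return "identity";           // default: pass records through
    }

    public static void main(String[] args) {
        System.out.println(chooseFilter("/data/sobel/part-00000"));
    }
}
```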
More information about your problem/context would help the readers to provide a more extensive reply.
Bertrand

Each data node has to process one of the equations above simultaneously.

--
On Mon, Sep 3, 2012 at 6:25 PM, Michael Segel <firstname.lastname@example.org> wrote:

Not sure what you are trying to do...
You want to pass through the entire data set on all nodes where each node runs a single filter?
Your thinking is orthogonal to how Hadoop works.
You would be better off letting each node work on the portion of the data that is local to it, running the entire filter set.
On Sep 3, 2012, at 11:19 AM, mallik arjun <email@example.com> wrote:
> Generally in Hadoop, the map function will be executed by all the data nodes on the input data set. Given that, how can I do the following?
> I have some filter programs, and what I want is for each data node (slave) to execute one filter algorithm simultaneously, different from the ones the other data nodes execute.
> thanks in advance.