Subject: Re: how to execute different tasks on data nodes (simultaneously in hadoop)
From: Bertrand Dechoux
To: user@hadoop.apache.org
Date: Mon, 3 Sep 2012 18:31:08 +0200

You can check the value of "map.input.file" in order to apply different logic for each type of file (in the mapper). More information about your problem/context would help the readers to provide a more extensive reply.

Regards

Bertrand

On Mon, Sep 3, 2012 at 6:25 PM, Michael Segel wrote:
> Not sure what you are trying to do...
>
> You want to pass through the entire data set on all nodes, where each node runs a single filter?
>
> Your thinking is orthogonal to how Hadoop works.
>
> You would be better off letting each node work on the portion of the data which is local to it, running the entire filter set.
>
> On Sep 3, 2012, at 11:19 AM, mallik arjun wrote:
>
> > Generally in Hadoop the map function will be executed by all the data nodes on the input data set. Given that, how can I do the following?
> > I have some filter programs, and what I want to do is have each data node (slave) execute one filter algorithm simultaneously, different from the other data nodes' executions.
> >
> > Thanks in advance.

--
Bertrand Dechoux
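A minimal sketch of the per-file dispatch Bertrand describes: inside the mapper, look at which file the current split comes from and pick a filter accordingly. The class and filter names below are illustrative, not from the thread; the Hadoop-specific calls are shown as comments so the routing logic itself stays self-contained.

```java
// Hypothetical sketch: route each input file to a different filter inside a
// single mapper, keyed on the file currently being read.
public class FilterRouter {

    // In the old API (org.apache.hadoop.mapred), the current file is exposed
    // as a job property, readable in configure():
    //   String file = job.get("map.input.file");
    // In the new API (org.apache.hadoop.mapreduce), the equivalent is:
    //   String file = ((FileSplit) context.getInputSplit()).getPath().toString();

    /** Pick a filter name from the input file's path (illustrative logic). */
    static String chooseFilter(String inputFile) {
        if (inputFile.endsWith(".log")) {
            return "log-filter";
        } else if (inputFile.endsWith(".csv")) {
            return "csv-filter";
        }
        return "default-filter";
    }

    public static void main(String[] args) {
        // Each mapper would call this once in setup()/configure(), then apply
        // the chosen filter to every record of its split.
        System.out.println(chooseFilter("/data/access-2012.log"));
        System.out.println(chooseFilter("/data/users.csv"));
    }
}
```

Note this gives per-file dispatch, not per-node dispatch: every node still runs whatever filter matches the splits it happens to hold, which is the data-local behavior Michael describes.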