From: Chris MacKenzie <studio@chrismackenziephotography.co.uk>
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Re: Hadoop YARN Cluster Setup Questions
Date: Sat, 23 Aug 2014 17:44:32 +0100

Hi,

The requirement is simply to have the slaves and masters files on the resource manager; they're used by the shell scripts that start the daemons :-)
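
A minimal sketch of what I mean, with made-up hostnames (in Hadoop 2.x the file lives at etc/hadoop/slaves):

    # etc/hadoop/slaves - one worker hostname per line (hypothetical hosts)
    worker01.example.com
    worker02.example.com
    worker03.example.com

    # the start scripts read this file and ssh into each listed host
    # to start the DataNode / NodeManager daemons
    $ sbin/start-dfs.sh
    $ sbin/start-yarn.sh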

Sent from my iPhone

On 23 Aug 2014, at 16:02, "S.L" <simpleliving016@gmail.com> wrote:

OK, I'll copy the slaves file to the other slave nodes as well.

What about the masters file though?

Sent from my HTC

----- Reply message -----
From: "rab ra" <rabmdu@gmail.com>
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Hadoop YARN Cluster Setup Questions
Date: Sat, Aug 23, 2014 5:03 AM

Hi,

1. Typically, we copy the slaves file to all the participating nodes, though
I don't have a concrete theory to back this up. At least, this is what I was
doing in Hadoop 1.2, and I am doing the same in Hadoop 2.x (a one-line way
to do the copy is sketched after this list).

2. I think you should investigate the YARN web UI and see how many maps it
has spawned. There is a high possibility that both maps are running on the
same node in parallel. Since there are two splits, there will be two map
processes, and one node is capable of handling more than one map (the
commands after this list show where they ran).

3. There may be no replica of the input file stored (replication factor 1),
and since it is small it is held in a single block on one node, so both maps
get scheduled there for locality (the fsck check below can confirm this).
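
For point 1, the copy itself can be a one-liner (a sketch, assuming
passwordless ssh and the same Hadoop install path on every node):

    # push the slaves file to every host listed in it
    $ for h in $(cat etc/hadoop/slaves); do scp etc/hadoop/slaves $h:$HADOOP_HOME/etc/hadoop/; done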
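
For point 2, you can also check from the command line (the web UI runs on
the resource manager at port 8088 by default):

    # list running YARN applications and their tracking URLs
    $ yarn application -list
    # list the nodes and how many containers each is currently running
    $ yarn node -list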
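
And for point 3, hdfs fsck will show how many blocks the input occupies and
which datanodes hold them (the path here is hypothetical):

    # show file size, block count, replication and block locations
    $ hdfs fsck /user/hadoop/input/myfile.txt -files -blocks -locations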

These are a few hints that might help you.

regards
rab



On Sat, Aug 23, 2014 at 12:26 PM, S.L <simpleliving016@gmail.com> wrote:

> Hi Folks,
>
> I was not able to find a clear answer to this. I know that on the master
> node we need to have a slaves file listing all the slaves, but do we need
> the slave nodes to have a masters file listing the single name node (I
> am not using a secondary name node)? I only have the slaves file on the
> master node.
>
> I was not able to find a clear answer to this. The reason I ask is that
> when I submit a Hadoop job, even though the input is being split into 2
> parts, only one data node is assigned applications; the other two (I
> have three) are not being assigned any applications.
>
> Thanks in advance!
>