Subject: Re: how to specify the root directory of hadoop on slave node?
From: Richard Tang <tristartom.tech@gmail.com>
To: user@hadoop.apache.org
Date: Sun, 16 Sep 2012 11:44:34 -0400

Hi Hemanth, thanks for your responses. I have now restructured my HDFS
cluster to follow that norm: both conditions are met, and there is no
longer any need to explicitly configure the Hadoop home directory for
HDFS.

For the record: previously in my cluster, different nodes had Hadoop
installed in different directories, and HADOOP_HOME could be used to
configure the home directory where Hadoop is installed (though its use
is now deprecated).
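In case it helps anyone searching the archives: the per-node override was
just an environment variable. A minimal sketch (the install path below is
illustrative, not a recommendation):

    # conf/hadoop-env.sh (or the login shell profile) on each node:
    # point Hadoop at that node's local install root. Note that recent
    # 1.x releases warn "$HADOOP_HOME is deprecated" when this is set.
    export HADOOP_HOME=/usr/local/hadoop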
Regards,
Richard

On Wed, Sep 12, 2012 at 12:06 AM, Hemanth Yamijala
<yhemanth@thoughtworks.com> wrote:

> Hi Richard,
>
> If you have installed the Hadoop software in the same location on all
> machines, and if you have a common user on all the machines, then there
> should be no explicit need to specify anything more on the slaves.
>
> Can you tell us whether the above two conditions are true? If yes, some
> more details on what is failing when you run start-dfs.sh will help.
>
> Thanks
> Hemanth
>
> On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang wrote:
>
>> Hi, All
>> I need to set up a Hadoop/HDFS cluster with one namenode on one
>> machine and two datanodes on two other machines. But after setting the
>> datanode machines in the conf/slaves file, running bin/start-dfs.sh
>> cannot start HDFS normally.
>> I am aware that I have not specified the root directory where Hadoop
>> is installed on the slave nodes, nor the OS user account that should
>> run Hadoop on the slave nodes.
>> How do I specify where Hadoop/HDFS is locally installed on a slave
>> node? Also, how do I specify the user account used to start HDFS
>> there?
>>
>> Regards,
>> Richard
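P.S. For completeness, the setup that works for me now boils down to the
standard layout Hemanth described; a minimal sketch, with illustrative
hostnames:

    # conf/slaves on the namenode machine, one worker hostname per line:
    datanode1
    datanode2

    # With Hadoop unpacked at the same path on every node and passwordless
    # ssh configured for the common user, start HDFS from the namenode:
    bin/start-dfs.sh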