Subject: Re: Re: [help]how to stop HDFS
From: Nitin Khandelwal
To: common-user@hadoop.apache.org
Date: Wed, 30 Nov 2011 09:22:00 +0530
Hi,

I am facing the same problem. There may be some issue with the script.
The doc says that to start the namenode, type:

bin/hdfs namenode start

But "start" is not recognized. There is a hack to start the namenode with
the command "bin/hdfs namenode &", but no idea how to stop it. If it had
been an issue with the config, the latter would not have worked either.

Thanks,
Nitin

2011/11/30 cat fa:
> In fact it's me who should say sorry. I used the word "install", which
> was misleading.
>
> In fact I downloaded a tar file and extracted it to /usr/bin/hadoop.
>
> Could you please tell me where to point those variables?
>
> 2011/11/30, Prashant Sharma:
> > I am sorry, I had no idea you had done an rpm install; my suggestion
> > was based on the assumption that you had done a tar-extract install,
> > where all three distributions have to be extracted and the variables
> > exported. Also, I have no experience with rpm-based installs, so no
> > comment on what went wrong in your case.
> >
> > Basically, from the error I can say that it is not able to find the
> > jars needed on the classpath, which the scripts refer to through
> > HADOOP_COMMON_HOME. I would say check the access permissions: which
> > user was it installed with, and which user is it running as?
> >
> > On Tue, Nov 29, 2011 at 10:48 PM, cat fa wrote:
> > > Thank you for your help, but I'm still a little confused.
> > > Suppose I installed hadoop in /usr/bin/hadoop/. Should I
> > > point HADOOP_COMMON_HOME to /usr/bin/hadoop? Where should I
> > > point HADOOP_HDFS_HOME? Also to /usr/bin/hadoop/?
> > >
> > > 2011/11/30 Prashant Sharma:
> > > > I mean, you have to export the variables:
> > > >
> > > > export HADOOP_CONF_DIR=/path/to/your/configdirectory
> > > >
> > > > Also export HADOOP_HDFS_HOME and HADOOP_COMMON_HOME before you
> > > > run your command.
> > > > I suppose this should fix the problem.
> > > > -P
> > > >
> > > > On Tue, Nov 29, 2011 at 6:23 PM, cat fa wrote:
> > > > > It didn't work. It gave me the Usage information.
> > > > >
> > > > > 2011/11/29 hailong.yang1115:
> > > > > > Try $HADOOP_PREFIX_HOME/bin/hdfs namenode stop --config $HADOOP_CONF_DIR
> > > > > > and $HADOOP_PREFIX_HOME/bin/hdfs datanode stop --config $HADOOP_CONF_DIR.
> > > > > > That would stop the namenode and datanode separately.
> > > > > > HADOOP_CONF_DIR is the directory where you store your
> > > > > > configuration files.
> > > > > >
> > > > > > Hailong
> > > > > >
> > > > > > ***********************************************
> > > > > > * Hailong Yang, PhD. Candidate
> > > > > > * Sino-German Joint Software Institute,
> > > > > > * School of Computer Science & Engineering, Beihang University
> > > > > > * Phone: (86-010)82315908
> > > > > > * Email: hailong.yang1115@gmail.com
> > > > > > * Address: G413, New Main Building in Beihang University,
> > > > > > * No.37 XueYuan Road, HaiDian District,
> > > > > > * Beijing, P.R.China, 100191
> > > > > > ***********************************************
> > > > > >
> > > > > > From: cat fa
> > > > > > Date: 2011-11-29 20:22
> > > > > > To: common-user
> > > > > > Subject: Re: [help]how to stop HDFS
> > > > > > Use $HADOOP_CONF or $HADOOP_CONF_DIR? I'm using hadoop 0.23.
> > > > > >
> > > > > > You mean which class? The class of hadoop or of java?
> > > > > >
> > > > > > 2011/11/29 Prashant Sharma:
> > > > > > > Try making $HADOOP_CONF point to the right classpath,
> > > > > > > including your configuration folder.
> > > > > > >
> > > > > > > On Tue, Nov 29, 2011 at 3:58 PM, cat fa wrote:
> > > > > > > > I used the command:
> > > > > > > >
> > > > > > > > $HADOOP_PREFIX_HOME/bin/hdfs start namenode --config $HADOOP_CONF_DIR
> > > > > > > >
> > > > > > > > to start HDFS.
> > > > > > > >
> > > > > > > > This command is in the Hadoop documentation (here:
> > > > > > > > http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/ClusterSetup.html)
> > > > > > > >
> > > > > > > > However, I got errors such as:
> > > > > > > >
> > > > > > > > Exception in thread "main" java.lang.NoClassDefFoundError: start
> > > > > > > >
> > > > > > > > Could anyone tell me how to start and stop HDFS?
> > > > > > > >
> > > > > > > > By the way, how do I set Gmail so that it doesn't top-post
> > > > > > > > my reply?

--
Nitin Khandelwal
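[For the archives: the commands discussed above can be put together as the
following sketch. It assumes a tar-extract install under /usr/bin/hadoop, as
in this thread (adjust the paths to your own layout), and that the 0.23
daemon wrapper script sbin/hadoop-daemon.sh is present; the exact
HADOOP_CONF_DIR location is an assumption. Note that in 0.23 "bin/hdfs
namenode" takes no start/stop verb at all, which is why the commands quoted
above print the Usage text; starting and stopping in the background is
done through the wrapper script, which records a pid file so that "stop"
can work later.]

```shell
# Environment the 0.23 scripts expect; /usr/bin/hadoop is the extract
# location used in this thread, not a required path.
export HADOOP_COMMON_HOME=/usr/bin/hadoop
export HADOOP_HDFS_HOME=/usr/bin/hadoop
export HADOOP_CONF_DIR=/usr/bin/hadoop/etc/hadoop   # assumed conf location

# Run in the foreground (stop with Ctrl-C) -- no "start" argument:
# $HADOOP_HDFS_HOME/bin/hdfs namenode

# Run as background daemons; the wrapper writes a pid file per daemon:
$HADOOP_COMMON_HOME/sbin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" \
    --script hdfs start namenode
$HADOOP_COMMON_HOME/sbin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" \
    --script hdfs start datanode

# Stop them again via the same wrapper:
$HADOOP_COMMON_HOME/sbin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" \
    --script hdfs stop datanode
$HADOOP_COMMON_HOME/sbin/hadoop-daemon.sh --config "$HADOOP_CONF_DIR" \
    --script hdfs stop namenode
```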