Date: Tue, 28 Aug 2012 17:05:44 +0800
Subject: Re: Why cannot I start namenode or localhost:50070 ?
From: Charles AI
To: user@hadoop.apache.org

hi Mohammad,
Thanks for the reminder. I have checked the two directories and set them to
/home/hadoopfs/data and /home/hadoopfs/name, not under /tmp.
My problem is now solved. Thank you.

On Mon, Aug 27, 2012 at 4:31 PM, Mohammad Tariq wrote:
> Hello Charles,
>
> Have you added the dfs.name.dir and dfs.data.dir properties to your
> hdfs-site.xml file? These properties default to directories under /tmp,
> so at each restart both the data and the metadata are lost.
>
> On Monday, August 27, 2012, Charles AI wrote:
> > thank you guys.
> > the logs say my dfs.name.dir is inconsistent:
> > Directory /home/hadoop/hadoopfs/name is in an inconsistent state:
> > storage directory does not exist or is not accessible.
> > And the namenode starts after "hadoop namenode -format".
> >
> > On Mon, Aug 27, 2012 at 3:16 PM, Harsh J wrote:
> >> Charles,
> >>
> >> Can you check your NN logs to see if it is properly up?
> >>
> >> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI wrote:
> >> > Hi All,
> >> > I was running a cluster of one master and 4 slaves. I copied the
> >> > hadoop_install folder from the master to all 4 slaves, and
> >> > configured them well.
> >> > However, when I ran start-all.sh from the master machine.
> >> > It showed the following:
> >> >
> >> > starting namenode, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> >> > slave2: ssh: connect to host slave2 port 22: Connection refused
> >> > master: starting datanode, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> >> > slave4: starting datanode, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> >> > slave3: starting datanode, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> >> > slave1: starting datanode, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> >> > master: starting secondarynamenode, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> >> > starting jobtracker, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> >> > slave2: ssh: connect to host slave2 port 22: Connection refused
> >> > slave4: starting tasktracker, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> >> > master: starting tasktracker, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> >> > slave3: starting tasktracker, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> >> > slave1: starting tasktracker, logging to
> >> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >> >
> >> > I know that slave2 is not on, but that should not be the problem.
> >> > After this, I typed 'jps' in the master's shell, and it showed:
> >> > 6907 Jps
> >> > 6306 DataNode
> >> > 6838 TaskTracker
> >> > 6612 JobTracker
> >> > 6533 SecondaryNameNode
> >> >
> >> > And when I opened the link "localhost:50030", the page said:
> >> > master Hadoop Map/Reduce Administration
> >> > Quick Links
> >> > State: INITIALIZING
> >> > Started: Mon Aug 27 14:54:46 CST 2012
> >> > Version: 0.20.2, r911707
> >> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> >> > Identifier: 201208271454
> >> >
> >> > I don't quite get what "State: INITIALIZING" means. Additionally, I
> >> > cannot open "localhost:50070".
> >> >
> >> > So, any suggestions?
> >> >
> >> > Thanks in advance.
> >> > CH
> >> > --
> >> > in a hadoop learning cycle
> >>
> >> --
> >> Harsh J
> >
> > --
> > in a hadoop learning cycle
>
> --
> Regards,
> Mohammad Tariq

--
in a hadoop learning cycle
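A minimal hdfs-site.xml along the lines Mohammad describes, using the /home/hadoopfs paths mentioned earlier in the thread as illustrative values (substitute your own durable local directories):

```xml
<!-- hdfs-site.xml: keep HDFS metadata and block storage out of /tmp.
     The paths below are examples; point them at directories that
     survive a reboot. Property names are those of Hadoop 0.20.x,
     the version shown in this thread. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoopfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoopfs/data</value>
  </property>
</configuration>
```

On a brand-new dfs.name.dir you would then run `hadoop namenode -format` once before start-all.sh. Reformatting an existing name directory destroys the HDFS metadata, so only do this on a fresh or empty directory.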
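The jps listing above is itself the symptom: DataNode, JobTracker, TaskTracker, and SecondaryNameNode are running, but there is no NameNode line. A small shell sketch of that check (`has_namenode` is a hypothetical helper shown only to illustrate reading jps output; it is not part of Hadoop):

```shell
# has_namenode reads jps output on stdin and succeeds only if a
# NameNode process is listed. The trailing anchor avoids matching
# "SecondaryNameNode".
has_namenode() {
  grep -q ' NameNode$'
}

# The jps output from the thread -- note there is no NameNode line.
jps_output='6907 Jps
6306 DataNode
6838 TaskTracker
6612 JobTracker
6533 SecondaryNameNode'

if printf '%s\n' "$jps_output" | has_namenode; then
  echo "NameNode is running"
else
  echo "NameNode is NOT running; check the namenode log under logs/"
fi
# -> NameNode is NOT running; check the namenode log under logs/
```

When the NameNode line is missing, the namenode log under the logs/ directory is the place to look; in this thread it reported the inconsistent dfs.name.dir.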