hadoop-common-user mailing list archives

From "Eason.Lee" <leongf...@gmail.com>
Subject Re: Namenode problem
Date Tue, 09 Mar 2010 03:59:28 GMT
It's usually in $HADOOP_HOME/logs
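For example, something like this (assuming the usual tarball layout; the
user and host parts of the log file name will differ on your machine):

  cd $HADOOP_HOME/logs
  ls
  # the namenode log is named hadoop-<user>-namenode-<hostname>.log
  tail -n 100 hadoop-*-namenode-*.log
  # look for the actual failure
  grep -i "error\|exception" hadoop-*-namenode-*.log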

2010/3/9 William Kang <weliam.cloud@gmail.com>

> Hi,
> If the namenode is not up, how can I get the logdir?
>
>
> William
>
> On Mon, Mar 8, 2010 at 10:39 PM, Eason.Lee <leongfans@gmail.com> wrote:
>
> > 2010/3/9 William Kang <weliam.cloud@gmail.com>
> >
> > > Hi Eason,
> > > Thanks a lot for your reply. But I do have another folder which is not
> > > inside /tmp. I did not use default settings.
> > >
> >
> > you'd better post your configuration in detail~~
> >
> >
> > > To make it clear, I will describe what happened:
> > > 1. hadoop namenode -format
> > > 2. start-all.sh
> > > 3. running fine, http://localhost:50070 is accessible
> > > 4. stop-all.sh
> > > 5. start-all.sh, http://localhost:50070 is NOT accessible
> > > Unless I format the namenode, the HDFS master
> > > http://localhost:50070/dfshealth.jsp is not accessible.
> > >
> >
> > Try "jps" to see if the namenode is up~~
> > If the namenode is not up, there may be an error in the log under logdir;
> > try to post the error~~
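> >
> > For example (the pids below are made up; the exact daemon list depends on
> > your setup):
> >
> >   $ jps
> >   4721 NameNode
> >   4892 DataNode
> >   5063 SecondaryNameNode
> >   5234 JobTracker
> >   5405 TaskTracker
> >   5576 Jps
> >
> > If NameNode is missing from that list, the reason should be in the
> > namenode log under logdir.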
> >
> >
> > > So, I have to redo step 1, 2 again to gain access to
> > > http://localhost:50070/dfshealth.jsp. But all data would be lost after
> > > format.
> > >
> >
> > format will delete the old namespace, so everything will be lost~~
> >
> >
> > >
> > >
> > > William
> > >
> > > On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <leongfans@gmail.com> wrote:
> > >
> > > > 2010/3/8 William Kang <weliam.cloud@gmail.com>
> > > >
> > > > > Hi guys,
> > > > > Thanks for your replies. I did not put anything in /tmp. It's just that
> > > > >
> > > >
> > > > The default setting of dfs.name.dir/dfs.data.dir is a subdirectory of /tmp
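> > > >
> > > > If I remember right, the defaults chain like this (core-default.xml and
> > > > hdfs-default.xml):
> > > >
> > > >   hadoop.tmp.dir = /tmp/hadoop-${user.name}
> > > >   dfs.name.dir   = ${hadoop.tmp.dir}/dfs/name
> > > >   dfs.data.dir   = ${hadoop.tmp.dir}/dfs/data
> > > >
> > > > so with no override, the namespace lives under /tmp and disappears
> > > > whenever /tmp is cleaned.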
> > > >
> > > > > every time I restart Hadoop, localhost:50070 does not show up.
> > > > > localhost:50030 is fine. Unless I reformat the namenode, I won't be able
> > > > > to see the HDFS web page at 50070. It did not clean /tmp automatically.
> > > > > But
> > > > >
> > > >
> > > > It's not that you cleaned the /tmp dir yourself. Something cleans it
> > > > automatically~~
> > > >
> > > >
> > > > > after format, everything is gone, well, it is a format. I did not really
> > > > > see anything in the log. Not sure what caused it.
> > > > >
> > > > >
> > > > > William
> > > > >
> > > > >
> > > > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <bradfordstephens@gmail.com> wrote:
> > > > >
> > > > > > Yeah. Don't put things in /tmp. That's unpleasant in the long run.
> > > > > >
> > > > > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <leongfans@gmail.com> wrote:
> > > > > > > Your /tmp directory is cleaned automatically?
> > > > > > >
> > > > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
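> > > > > > >
> > > > > > > For example, in conf/hdfs-site.xml (the paths here are just
> > > > > > > examples; use any directory that survives a reboot):
> > > > > > >
> > > > > > >   <property>
> > > > > > >     <name>dfs.name.dir</name>
> > > > > > >     <value>/home/hadoop/dfs/name</value>
> > > > > > >   </property>
> > > > > > >   <property>
> > > > > > >     <name>dfs.data.dir</name>
> > > > > > >     <value>/home/hadoop/dfs/data</value>
> > > > > > >   </property>
> > > > > > >
> > > > > > > Format once after changing this, and the namespace should survive
> > > > > > > restarts.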
> > > > > > >
> > > > > > > 2010/3/8 William Kang <weliam.cloud@gmail.com>
> > > > > > >
> > > > > > >> Hi all,
> > > > > > >> I am running HDFS in Pseudo-distributed mode. Every time after I
> > > > > > >> restart the machine, I have to format the namenode, otherwise
> > > > > > >> localhost:50070 won't show up. It is quite annoying to do so since
> > > > > > >> all the data would be lost. Does anybody know why this happens? And
> > > > > > >> how should I fix this problem? Many thanks.
> > > > > > >>
> > > > > > >>
> > > > > > >> William
> > > > > > >>
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > http://www.drawntoscalehq.com -- The intuitive, cloud-scale data
> > > > > > solution. Process, store, query, search, and serve all your data.
> > > > > >
> > > > > > http://www.roadtofailure.com -- The Fringes of Scalability, Social
> > > > > > Media, and Computer Science
> > > > > >
> > > > >
> > > >
> > >
> >
>
