hadoop-common-user mailing list archives

From "Boyu Zhang" <boyuzhan...@gmail.com>
Subject RE: Error in Cluster Startup: NameNode is not formatted
Date Fri, 26 Jun 2009 21:33:36 GMT

Thanks a lot for your reply! I did format the namenode, but I got the
same error again. Actually, I successfully ran the example jar file once,
but after that one time I couldn't get it to run again. I clean the /tmp dir
every time before I format the namenode again (I am just testing, so I don't
worry about losing data :). Still, I get the same error when I execute
bin/start-dfs.sh. I checked my conf, and I can't figure out why. Here is my
conf file:

I would really appreciate it if you could take a look. Thanks a lot.


  <description>Determines where on the local filesystem a DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.</description>

  <description>The local directory where MapReduce stores intermediate
  data files.  May be a comma-separated list of
  directories on different devices in order to spread disk i/o.
  Directories that do not exist are ignored.</description>
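(For context: these two descriptions are the stock ones shipped with the
dfs.data.dir and mapred.local.dir properties. A minimal sketch of how such
properties are typically set in hadoop-site.xml follows; the paths below are
hypothetical placeholders, not values from the file above. One common cause of
the recurring "NameNode is not formatted" error is leaving dfs.name.dir at its
default, which lives under hadoop.tmp.dir in /tmp, so clearing /tmp also erases
the formatted NameNode image.)

```xml
<!-- Sketch only: example paths, not actual values from this cluster. -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/hadoop/dfs/name</value>  <!-- keep outside /tmp -->
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/hadoop/dfs/data</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/var/hadoop/mapred/local</value>
</property>
```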

-----Original Message-----
From: Matt Massie [mailto:matt@cloudera.com] 
Sent: Friday, June 26, 2009 4:31 PM
To: core-user@hadoop.apache.org
Subject: Re: Error in Cluster Startup: NameNode is not formatted


You didn't do anything stupid.  I've forgotten to format a NameNode
myself too.

If you check the QuickStart guide, you'll see that formatting the
NameNode is the first step of the Execution section (near the bottom of
the page).

The command to format the NameNode is:

hadoop namenode -format

A warning, though: you should only format your NameNode once.  Just
like formatting any filesystem, you can lose data if you (re)format.

Good luck.


On Jun 26, 2009, at 1:25 PM, Boyu Zhang wrote:

> Hi all,
>
> I am a student trying to install Hadoop on a cluster. I have one
> machine running the namenode, one running the jobtracker, and two
> slaves. When I run bin/start-dfs.sh, something goes wrong with my
> namenode and it won't start. Here is the error message in the log file:
> ERROR org.apache.hadoop.fs.FSNamesystem: FSNamesystem initialization failed.
> java.io.IOException: NameNode is not formatted.
>        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:243)
>        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:80)
>        at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:294)
>        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:273)
>        at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:148)
>        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
>        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
>        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
>        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)
> I think it is something stupid I did; could somebody help me out?
> Thanks a lot!
>
> Sincerely,
> Boyu Zhang
