hadoop-hdfs-user mailing list archives

From: MOHAMMED IRFANULLA S <m.irfanu...@huawei.com>
Subject: RE: Problem formatting namenode
Date: Wed, 17 Mar 2010 03:50:48 GMT
 


Hi Sagar,

Thanks for your reply.
I'm starting Hadoop as the root user, and the directory /opt has full [777]
permissions recursively. Still, the same problem occurs. Is there any
specific reason for this?

Thanks and regards,
Md. Irfanulla S.




-----Original Message-----
From: Sagar Shukla [mailto:sagar_shukla@persistent.co.in] 
Sent: Sunday, March 14, 2010 8:22 PM
To: hdfs-user@hadoop.apache.org; m.irfanulla@huawei.com
Subject: RE: Problem formatting namenode

Hi,
    Hadoop may be running as the user hadoop by default, so please check
whether that user has write permission on the directory /opt, where the data
directory /opt/hdfs is being created. A quick way to verify this is sketched
below.
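
For example, something like the following would show the ownership and
confirm write access (the user name "hadoop" is an assumption on my part;
substitute whichever user your daemons actually run as):

    ls -ld /opt /opt/hdfs
    su - hadoop -c 'touch /opt/hdfs/.write_test && rm -f /opt/hdfs/.write_test'

If the touch fails, granting ownership with something like
'chown -R hadoop:hadoop /opt/hdfs' should take care of it.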

Thanks,
Sagar
________________________________________
From: MOHAMMED IRFANULLA S [m.irfanulla@huawei.com]
Sent: Saturday, March 13, 2010 1:50 PM
To: hdfs-user@hadoop.apache.org
Subject: Problem formatting namenode

I'm facing issues while formatting the namenode.
I've set hadoop.tmp.dir to /opt/hdfs.
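
For reference, this is roughly how I have the property set in
conf/core-site.xml (a minimal sketch; other properties omitted):

linux-5e47:/usr/local/hadoop # cat conf/core-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hdfs</value>
  </property>
</configuration>
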
Below is the log output.

linux-5e47:/usr/local/hadoop # ./bin/hadoop namenode -format
10/03/13 15:52:09 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = linux-5e47/162.2.11.16
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
10/03/13 15:52:09 INFO namenode.FSNamesystem: fsOwner=root,root
10/03/13 15:52:09 INFO namenode.FSNamesystem: supergroup=supergroup
10/03/13 15:52:09 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/03/13 15:52:09 INFO common.Storage: Image file of size 94 saved in 0 seconds.
10/03/13 15:52:09 ERROR namenode.NameNode: java.io.IOException: No space left on device
        at sun.nio.ch.FileDispatcher.pwrite0(Native Method)
        at sun.nio.ch.FileDispatcher.pwrite(FileDispatcher.java:45)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:100)
        at sun.nio.ch.IOUtil.write(IOUtil.java:60)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:648)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.preallocate(FSEditLog.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.flushAndSync(FSEditLog.java:204)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:89)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.create(FSEditLog.java:161)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createEditLogFile(FSEditLog.java:342)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1093)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1110)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:856)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:948)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

10/03/13 15:52:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at linux-5e47/162.2.11.16
************************************************************/
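
From the stack trace, the write fails while the NameNode preallocates space
for the edits file in the name directory. Assuming the default dfs.name.dir
of ${hadoop.tmp.dir}/dfs/name, the file being written should live here:

linux-5e47:/usr/local/hadoop # ls -l /opt/hdfs/dfs/name/current/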



df -h gives me this:

linux-5e47:/usr/local/hadoop # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             101G  618M  100G   1% /
udev                  3.9G  160K  3.9G   1% /dev
/dev/sda5             101G   34M  100G   1% /home
/dev/sda7             8.6T  990M  8.6T   1% /opt
/dev/sda6             101G  641M  100G   1% /tmp
/dev/sda3             101G  2.7G   98G   3% /usr
/dev/sda4             101G  118M  100G   1% /var


As evident from the output, there is more than enough space available on the
device. If some other drive (smaller than 1 TB) is specified for
hadoop.tmp.dir, this issue does not occur. So, what could be the real reason
behind the above problem?
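
To rule out the device itself, a plain large write to /opt can be tried
(ddtest.bin is just a hypothetical scratch file for this test):

linux-5e47:/usr/local/hadoop # dd if=/dev/zero of=/opt/ddtest.bin bs=1M count=100
linux-5e47:/usr/local/hadoop # rm /opt/ddtest.bin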

I'd really appreciate any help on this.

Thanks and regards,

Md. Irfanulla S.





