hadoop-common-user mailing list archives

From Parag Dhanuka <parag.dhan...@gmail.com>
Subject Problem copying data to hadoop
Date Tue, 24 Feb 2009 07:02:33 GMT
I have set up Hadoop in pseudo-distributed mode with the namenode, datanode,
jobtracker and tasktracker all running on the same machine.
I also have some code I use to write my data into Hadoop. This code reads data
from the local disk, does some preprocessing, and then uses multiple
FSDataOutputStreams to write the data into HDFS. I keep several
FSDataOutputStreams open at the same time because I want to write data into
different files based on some logic I have.
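For reference, here is a minimal sketch of what I mean by writing through several
FSDataOutputStreams at once; the paths, the tab-separated input records and the
routing-by-key logic are just placeholders, not my real code:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MultiStreamWriter {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // Picks up the pseudo-distributed HDFS from core-site.xml / hadoop-site.xml
            FileSystem fs = FileSystem.get(conf);

            // One open output stream per target file, keyed by whatever the routing logic produces
            Map<String, FSDataOutputStream> streams = new HashMap<String, FSDataOutputStream>();

            // Stand-in for the preprocessed records read from local disk
            String[] records = {"a\trecord1", "b\trecord2", "a\trecord3"};

            try {
                for (String record : records) {
                    String key = record.split("\t")[0];  // placeholder routing logic
                    FSDataOutputStream out = streams.get(key);
                    if (out == null) {
                        // Hypothetical output path; a new file per key
                        out = fs.create(new Path("/user/parag/out-" + key + ".txt"));
                        streams.put(key, out);
                    }
                    out.writeBytes(record + "\n");
                }
            } finally {
                // Close every stream so each file's lease is released
                for (FSDataOutputStream out : streams.values()) {
                    out.close();
                }
            }
        }
    }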

Now the problem: while the process was writing data to Hadoop, I got the error
"Problem renewing lease for DFSClient_1637324984". Looking at the namenode
logs, I found this:
2009-02-23 10:02:57,181 FATAL
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Fatal Error : All
storage directories are inaccessible.

I have absolutely no idea what might have caused this. Can someone
please help?

Parag Dhanuka
