hadoop-common-user mailing list archives

From Mithila Nagendra <mnage...@asu.edu>
Subject Re: Problem copying data to hadoop
Date Tue, 24 Feb 2009 07:19:35 GMT
Hey Parag,
Check whether the namenode and the datanode are up and running. Use the 'jps'
command to do so. If they are not running, you'll have to do a stop-all and
reformat the namenode using hadoop namenode -format (make sure you have no
data on the HDFS, since formatting wipes it). Then restart hadoop using start-all.sh.

If you have the datanode and the namenode running, then check the log files
for errors.
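The namenode log usually lives under $HADOOP_HOME/logs (the exact file name depends on your user and host name), so something along these lines should show what went wrong:

  # look for recent ERROR/FATAL lines in the namenode log
  # (log file names vary with user and hostname)
  grep -Ei 'error|fatal' $HADOOP_HOME/logs/hadoop-*-namenode-*.log | tail -n 20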

Mithila

On Tue, Feb 24, 2009 at 10:02 AM, Parag Dhanuka <parag.dhanuka@gmail.com> wrote:

> I have set up hadoop in pseudo-distributed mode with namenode, datanode,
> jobtracker and tasktracker all on the same machine...
> I also have code which I use to write my data into hadoop. My code reads
> data from the local disk, does some preprocessing, and after that uses
> (multiple) FSDataOutputStream to write data to hadoop. I have multiple
> FSDataOutputStreams open at one time because I want to write data into
> different files based on some logic I have.
>
> Now the problem... While the process was writing data to hadoop I got this
> error: Problem renewing lease for DFSClient_1637324984. On going to the
> namenode logs I found this:
> 2009-02-23 10:02:57,181 FATAL
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Fatal Error : All
> storage directories are inaccessible.
>
> I have absolutely no idea as to what might have caused this. Can someone
> please help?
>
> --
> Parag Dhanuka
>
