hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: reformat namenode
Date Mon, 07 Nov 2011 01:21:27 GMT
Keith,

This is because your dfs.data.dir and dfs.name.dir are, by default, under
/tmp. When your /tmp is cleared by the OS (something people often forget
about), your HDFS is essentially wiped away.

Point dfs.name.dir and dfs.data.dir at proper directories that aren't
cleaned up periodically or at boot, and HDFS will persist across
'sessions'.
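For reference, a minimal hdfs-site.xml sketch; the /var/lib/hadoop paths
are just illustrative, any location that survives reboots will do:

  <configuration>
    <!-- Where the NameNode keeps the filesystem image and edit log -->
    <property>
      <name>dfs.name.dir</name>
      <value>/var/lib/hadoop/name</value>
    </property>
    <!-- Where DataNodes store block data -->
    <property>
      <name>dfs.data.dir</name>
      <value>/var/lib/hadoop/data</value>
    </property>
  </configuration>

After changing these, run 'hadoop namenode -format' once to initialize the
new dfs.name.dir (this wipes any existing metadata there), then restart
the daemons. You should not need to reformat again after that.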

On Mon, Nov 7, 2011 at 2:45 AM, Keith Thompson <kthomps6@binghamton.edu> wrote:
> Hi,
>
> I am running Hadoop in pseudo-distributed mode on Linux.  For some reason,
> I have to reformat the namenode every time I start up Hadoop because it
> will fail whenever I try to connect to the HDFS.  After I reformat, it runs
> fine for that session; however, if I try to run it again later it will have
> the same issue.  There is probably some setting I forgot to set somewhere.
> Can anyone help?
>
> --
> *Keith Thompson*
> Graduate Research Associate
> SUNY Research Foundation
> Dept. of Systems Science and Industrial Engineering
> Binghamton University
>



-- 
Harsh J
