hadoop-common-user mailing list archives

From: Azuryy Yu <azury...@gmail.com>
Subject: Re: Why I cannot delete all the nameNode metadata?
Date: Wed, 08 Oct 2014 04:06:52 GMT
First, make sure your dfs.namenode.name.dir is actually set to the default.
Then, how did you find that /user exists? Via hdfs dfs -ls? Or did you check
dfs.datanode.data.dir on the local disk?
If the latter, don't worry.
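
A quick way to verify is to ask HDFS itself which directories it resolves
(a minimal sketch, assuming the standard hdfs CLI and that it reads the same
*-site.xml files as your daemons):

# print the effective storage directories from the loaded configuration
bin/hdfs getconf -confKey dfs.namenode.name.dir
bin/hdfs getconf -confKey dfs.datanode.data.dir

# inspect the namespace through the client rather than the local disk
bin/hdfs dfs -ls /

If -ls still shows /user after a reformat, the NameNode is most likely loading
its metadata from somewhere other than the directory you deleted.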


On Wed, Oct 8, 2014 at 11:56 AM, Tianyin Xu <tixu@cs.ucsd.edu> wrote:

> Hi,
>
> I want to run some experiments on Hadoop that require a clean initial
> system state of HDFS for every job execution, i.e., HDFS should be
> freshly formatted and contain nothing.
>
> I keep *dfs.datanode.data.dir* and *dfs.namenode.name.dir* at their
> defaults, which are located under /tmp.
>
> Every time before running a job,
>
> 1. I first delete dfs.datanode.data.dir and dfs.namenode.name.dir:
> # rm -Rf /tmp/hadoop-tianyin*
>
> 2. Then I format the NameNode:
> # bin/hdfs namenode -format
>
> 3. Start HDFS:
> # sbin/start-dfs.sh
>
> 4. However, I still find the previous metadata (e.g., the directory I
> previously created) in HDFS, for example:
> # bin/hdfs dfs -mkdir /user
> mkdir: `/user': File exists
>
> Could anyone tell me what I missed or misunderstood? Why can I still see
> the old data after both physically deleting the directories and
> reformatting the HDFS NameNode?
>
> Thanks a lot for your help!
> Tianyin
>
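
For reference, here is a minimal reset sequence that resolves the configured
directories instead of assuming they live under /tmp (only a sketch for a
single-node setup; getconf and the start/stop scripts are standard, but your
paths may differ):

# stop HDFS before touching the storage directories
sbin/stop-dfs.sh

# look up where the NameNode and DataNode actually keep their data;
# strip the file:// prefix from the printed values before deleting them
bin/hdfs getconf -confKey dfs.namenode.name.dir
bin/hdfs getconf -confKey dfs.datanode.data.dir

# reformat without the interactive prompt, then restart
bin/hdfs namenode -format -force
sbin/start-dfs.sh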
