hadoop-mapreduce-user mailing list archives

From Tianyin Xu <t...@cs.ucsd.edu>
Subject Why I cannot delete all the nameNode metadata?
Date Wed, 08 Oct 2014 03:56:56 GMT
Hi,

I want to run some experiments on Hadoop that require a clean, initial
HDFS state for every job execution, i.e., HDFS should be freshly
formatted and contain nothing.

I keep *dfs.datanode.data.dir* and *dfs.namenode.name.dir* at their
defaults, which are located under /tmp.

Every time before running a job,

1. First, I delete dfs.datanode.data.dir and dfs.namenode.name.dir:
#rm -Rf /tmp/hadoop-tianyin*

2. Then I format the NameNode:
#bin/hdfs namenode -format

3. Start HDFS:
#sbin/start-dfs.sh

4. However, I still find metadata from the previous run (e.g., a
directory I created earlier) in HDFS. For example:
#bin/hdfs dfs -mkdir /user
mkdir: `/user': File exists
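For reference, here is my whole per-run reset sequence as one script. This is a sketch, not exactly what I run: I've added a stop-dfs.sh call on the assumption that a still-running NameNode (which keeps the namespace in memory) could be part of the problem, and the -force flag just to skip the format prompt. HADOOP_HOME here is my own variable for the install directory.

```shell
#!/usr/bin/env bash
# Sketch of a full HDFS reset between experiments.
# Assumption: daemons must be stopped first, since a live NameNode
# serves the old namespace from memory and rewrites its storage dirs.
set -euo pipefail

HADOOP_HOME="${HADOOP_HOME:-$PWD}"   # assumption: run from the Hadoop install dir

# 1. Stop all HDFS daemons so no process holds the old metadata.
"$HADOOP_HOME/sbin/stop-dfs.sh"

# 2. Delete the default NameNode and DataNode storage directories
#    (hadoop.tmp.dir defaults to /tmp/hadoop-${USER}).
rm -rf /tmp/hadoop-"$USER"*

# 3. Reformat the NameNode; -force skips the interactive Y/N prompt.
"$HADOOP_HOME/bin/hdfs" namenode -format -force

# 4. Start a fresh HDFS; the namespace should now be empty.
"$HADOOP_HOME/sbin/start-dfs.sh"
```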

Could anyone tell me what I missed or misunderstood? Why can I still see
the old data after both physically deleting the directories and
reformatting the HDFS NameNode?

Thanks a lot for your help!
Tianyin
