hadoop-hdfs-user mailing list archives

From Jeff Whiting <je...@qualtrics.com>
Subject Re: dfs.data.dir and "hadoop namenode -format"
Date Thu, 24 Jun 2010 15:20:25 GMT
1) No, you have to stop and restart DFS for the change to take effect.
2) Yes, that is how it would pick up the new directory.  However, 
changing the directory means the datanode won't be able to find any of 
your old data.  If you don't want to start over from scratch, stop DFS, 
copy the files over to the new data directory, and then restart it.
3) Running "hadoop namenode -format" will clean out the directory on the 
namenode and leave you with an empty, ready-to-go DFS.  However, you'll 
have to go to the datanodes and clean out their data directories 
yourself.  If you leave old data in the directories on the datanodes, 
they will be unable to join the namenode.
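
If the goal is to add a second directory alongside the existing one 
rather than replace it, dfs.data.dir accepts a comma-separated list of 
paths.  A minimal hdfs-site.xml sketch (the /opt/hadoop/data2 path is a 
hypothetical second location):

```xml
<!-- hdfs-site.xml: dfs.data.dir takes a comma-separated list, so a new
     directory can be added alongside the old one instead of replacing it.
     /opt/hadoop/data2 is a hypothetical second path. -->
<property>
  <name>dfs.data.dir</name>
  <value>/opt/hadoop/data,/opt/hadoop/data2</value>
</property>
```

With a list, existing blocks stay where they are and new blocks should 
be spread across both directories, so no copying is needed; a stop and 
restart of DFS is still required for the change to take effect.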
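
A rough sketch of the stop/copy/restart workflow from point 2, assuming 
the Hadoop scripts are on your PATH and using hypothetical paths:

```shell
OLD_DIR=/opt/hadoop/data       # current dfs.data.dir
NEW_DIR=/opt/hadoop/data-new   # hypothetical replacement directory

stop-dfs.sh                     # stop DFS before touching data directories
cp -rp "$OLD_DIR/." "$NEW_DIR/" # -p preserves ownership and permissions
# now edit dfs.data.dir in hdfs-site.xml to point at the new directory
start-dfs.sh
```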

This isn't the most technical explanation but hopefully it helps.
~Jeff

Sean Bigdatafun wrote:
> Can someone tell me what "hadoop namenode -format" does under the hood?
>
> I have started my HDFS cell with the following configuration.
> -------------------
> <property>
>   <name>dfs.data.dir</name>
>   <value>/opt/hadoop/data</value>
> </property>
> --------------------
>
> Over time, I want to add another directory as the data.dir; how can I 
> achieve that? 
>
> 1) Can I simply edit "dfs.data.dir" in the hdfs-site.xml without 
> stopping my cell?
>
> 2) If 1) is not legitimate, can I run "stop-dfs.sh", then do 1) and 
> then "start-dfs.sh"?
>
> 3) My last question here is what "hadoop namenode -format" does. If I 
> run it on my Namenode, does it clean up the data.dir? and do I need to 
> manually clean up the data.dir on Datanode?
>
> Thanks,
> Sean
>

-- 
Jeff Whiting
Qualtrics Senior Software Engineer
jeffw@qualtrics.com

