hadoop-common-user mailing list archives

From Konstantin Shvachko <...@yahoo-inc.com>
Subject Re: Changing my NameNode
Date Thu, 03 Jul 2008 00:04:23 GMT
Here is the trick.
1. Set dfs.name.dir to <ServerA/storageDirA, ServerB/storageDirB>
2. Start name-node on ServerB.
3. Set dfs.name.dir to <ServerB/storageDirB>.

In step 2 the name-node will reproduce the current image in both directories, storageDirA and storageDirB,
if storageDirA is mounted and accessible from ServerB, of course.
You probably want to remove storageDirA from the config variable after that.
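The steps above can be sketched as a hadoop-site.xml fragment; the directory paths below are placeholders, not paths from the original message:

```xml
<!-- Step 1: list both storage directories, old first, new second.
     /mnt/serverA/name and /mnt/serverB/name are hypothetical paths. -->
<property>
  <name>dfs.name.dir</name>
  <value>/mnt/serverA/name,/mnt/serverB/name</value>
</property>
```

After the name-node has been started on ServerB (step 2) and has written the image to both directories, the value is trimmed to just the ServerB directory (step 3).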

Another approach is to start SecondaryNamenode on ServerB
with fs.checkpoint.dir set to <ServerB/storageDirB>
while the main name-node is still running on ServerA.
When the checkpoint has been created in <ServerB/storageDirB> you can stop both servers
and then start the name-node on ServerB with dfs.name.dir set to <ServerB/storageDirB>.
This only works on the latest version of Hadoop, 0.18, though.
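The second approach can be sketched as a config fragment for ServerB while the primary name-node still runs on ServerA; again the path is a hypothetical placeholder:

```xml
<!-- On ServerB, for the secondary name-node (hypothetical path): -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/mnt/serverB/name</value>
</property>
```

Once the checkpoint exists in that directory, both daemons are stopped and the name-node is started on ServerB with dfs.name.dir pointing at the same directory.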


Jimmy Wan wrote:
> Anyone out there with a suggestion?
> Is my best option to export data out of the cluster, reformat a new namenode,
> and reimport it all?
> On Tue, 10 Jun 2008, Jimmy Wan wrote:
>> I've got an HDFS cluster with 2 boxes in it.
>> Server A is serving as the NameNode and also as a DataNode.
>> Server B is a DataNode.
>> After successfully decommissioning the HDFS storage of Server A using a
>> dfs.hosts.exclude file and dfsadmin -refreshNodes:
>> Server A is serving as the NameNode.
>> Server B is serving as a DataNode.
>> How do I change my NameNode to be Server B?
>> Can I simply change the hadoop-site.xml, masters, and slaves files for all
>> machines in my cluster? I could have sworn that I tried that and it failed.
>> P.S. This link is wrong wrt how to decommission a DataNode.
>> http://hadoop.apache.org/core/docs/r0.16.4/hdfs_design.html#DFSAdmin
