hadoop-general mailing list archives

From Hideki KAJIWARA <hideki.kajiw...@connecty.co.jp>
Subject redundant configuration of the namenode
Date Thu, 24 Jul 2008 07:58:17 GMT
I am considering a redundant (active/standby) configuration of the NameNode 
using keepalived. (See the attached file redundant.gif and the configuration 
file hadoop-site.xml.)
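
Since redundant.gif is an attachment, here is a rough sketch of the kind of 
keepalived setup intended: the two machines share a virtual IP, and when the 
standby takes over as master, the notify_master hook starts HDFS there. The 
interface, router ID, VIP, and paths below are placeholders, and the actual 
configuration may differ.

# /etc/keepalived/keepalived.conf on the standby machine (sketch)
vrrp_instance NAMENODE {
    state BACKUP                  # the active machine uses "state MASTER"
    interface eth0                # placeholder NIC
    virtual_router_id 51          # placeholder ID, must match on both machines
    priority 100                  # the active machine uses a higher priority
    virtual_ipaddress {
        192.168.0.100             # placeholder VIP that clients use in fs.default.name
    }
    # run when this machine becomes the active one
    notify_master "/opt/hadoop/bin/start-dfs.sh"
}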

The directory /dfs/name on the standby machine is NFS-mounted at /dfs/namerep 
on the active machine, and /dfs/namerep is listed in dfs.name.dir in the 
configuration file. The intent is to make the file system metadata redundant 
this way.
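
The relevant part of the attached hadoop-site.xml is presumably along these 
lines; dfs.name.dir takes a comma-separated list of directories, and the 
NameNode writes its image and edit log to each of them (the actual file may 
differ):

<!-- excerpt from hadoop-site.xml (sketch) -->
<property>
  <name>dfs.name.dir</name>
  <!-- /dfs/name is local; /dfs/namerep is the NFS mount of the standby's /dfs/name -->
  <value>/dfs/name,/dfs/namerep</value>
</property>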

However, when a failover was executed while writing 100 MB of data with the 
sample program, the file ended up incomplete; the result is shown in the 
attached file result.gif.


The reproduction procedure is as follows:

1. Run the sample program (a simplified sketch of what it does appears after 
the procedure):
java sample.HadoopWriteToDFS 104857600 /dfs/users/kajiwara/test.data

2. Kill the NameNode process on the active machine.
(start-dfs.sh is then run on the standby machine.)

After a while the command ends normally, but as described above, the file is 
not complete.
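
For reference, the sample program essentially writes the given number of 
bytes to the given DFS path. A simplified sketch against the FileSystem API 
(the actual source may differ):

package sample;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopWriteToDFS {
    // args[0] = number of bytes to write, args[1] = DFS path
    public static void main(String[] args) throws Exception {
        long size = Long.parseLong(args[0]);
        Path path = new Path(args[1]);

        // fs.default.name from hadoop-site.xml should point at the virtual IP
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        byte[] buf = new byte[65536];
        FSDataOutputStream out = fs.create(path);
        long written = 0;
        while (written < size) {
            int n = (int) Math.min(buf.length, size - written);
            out.write(buf, 0, n);
            written += n;
        }
        out.close();
        fs.close();
    }
}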

I would appreciate any advice on how to solve this issue.

H.KAJIWARA
hideki.kajiwara@connecty.co.jp
