hadoop-common-user mailing list archives

From Sindhu Hosamane <sindh...@gmail.com>
Subject Re: How to make sure data blocks are shared between 2 datanodes
Date Mon, 26 May 2014 20:10:08 GMT
OK, thanks a lot for that information.
As I said, I am running 2 datanodes on the same machine, so my Hadoop home has 2 conf folders,
conf and conf2, and in turn an hdfs-site.xml in each conf folder.
I guess the dfs.replication value in the hdfs-site.xml of the conf folder should be 3.
What should I have in conf2? Should it be 1 there?

Sorry if the question sounds stupid, but I am unfamiliar with this kind of setup (2 datanodes
on the same machine, and so 2 conf folders).
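For concreteness, the setting in question lives in a block like this in each hdfs-site.xml (a sketch only; the value shown is a placeholder for illustration, not a recommendation, and note that dfs.replication is a per-file default applied at write time rather than a per-datanode setting):

```xml
<!-- hdfs-site.xml sketch: the dfs.replication property as it would
     appear in conf/hdfs-site.xml (and likewise in conf2/hdfs-site.xml).
     With only 2 datanodes in the cluster, a replication factor of 3
     can never actually be satisfied. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value> <!-- placeholder value, one copy per datanode -->
  </property>
</configuration>
```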


If data is split across multiple datanodes, then processing capacity would be improved
(that's what I guess). Since my file is only 240 KB, it occupies only one block; it cannot
use a second block residing on the other datanode.
So, does it make sense to reduce the block size so that the blocks are split between the 2 datanodes,
if I want to take full advantage of multiple datanodes?
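The arithmetic behind this can be sketched as follows (assuming the Hadoop 1.x default block size of 64 MB; the 128 KB reduced size is a hypothetical example chosen only because it is smaller than the 240 KB file):

```python
import math

def blocks_needed(file_size_bytes, block_size_bytes):
    """Number of HDFS blocks a file of the given size occupies."""
    return max(1, math.ceil(file_size_bytes / block_size_bytes))

FILE_SIZE = 240 * 1024          # the 240 KB file from the question
DEFAULT_BLOCK = 64 * 1024**2    # 64 MB, the Hadoop 1.x default block size
SMALL_BLOCK = 128 * 1024        # hypothetical reduced block size

print(blocks_needed(FILE_SIZE, DEFAULT_BLOCK))  # 1 block -> lives on one datanode
print(blocks_needed(FILE_SIZE, SMALL_BLOCK))    # 2 blocks -> can span both datanodes
```

So with the default block size the whole file sits in a single block on one datanode, while a block size below 240 KB would split it into at least two blocks that HDFS could place on different datanodes.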

Any advice? Your help would be appreciated.


Best Regards,
Sindhu

