hadoop-mapreduce-user mailing list archives

From Tapas Sarangi <tapas.sara...@gmail.com>
Subject Re: disk used percentage is not symmetric on datanodes (balancer)
Date Wed, 20 Mar 2013 15:31:50 GMT
Thanks for your reply. Some follow-up questions below:

On Mar 20, 2013, at 5:35 AM, Алексей Бабутин <zorlaxpokemonych@gmail.com> wrote:
> 
>  
> dfs.balance.bandwidthPerSec in hdfs-site.xml. I think the balancer can't help you, because
> it makes all the nodes equal. They can differ only within the balancer threshold. The
> threshold is 10 by default, which means nodes can differ by up to 350 TB from each other
> in a 3.5 PB cluster. With threshold = 1, up to 35 TB, and so on.
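For reference, a minimal sketch of how the setting named above would appear in hdfs-site.xml (the 10 MB/s value is only an illustrative choice of mine, not a recommendation; newer Hadoop releases rename this property, so check your version's defaults):

```xml
<!-- hdfs-site.xml: cap the bandwidth each datanode may use for balancing -->
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- value is in bytes per second; 10485760 = 10 MB/s (illustrative) -->
  <value>10485760</value>
</property>
```

The threshold itself is not set here; if I recall correctly it is passed when starting the balancer, e.g. `hadoop balancer -threshold 1`.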

If we use multiple racks, let's assume we have 10 racks that are equally divided in size
(350 TB each). With the default threshold of 10, any two nodes on a given rack will have a
maximum difference of 35 TB, is this correct? Also, does this mean the difference between
any two racks will also come down to 35 TB?
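To make the arithmetic concrete, here is a small sketch (plain Python, function name is mine) reproducing the numbers in the quoted reply: the threshold is a percentage, so the maximum deviation scales with the capacity it is applied to.

```python
def max_deviation_tb(capacity_tb, threshold_percent):
    """Capacity (in TB) by which utilization may deviate, given a
    balancer threshold expressed as a percentage of capacity."""
    return capacity_tb * threshold_percent / 100.0

# 3.5 PB cluster = 3500 TB, default threshold of 10:
print(max_deviation_tb(3500, 10))  # 350.0 TB
# Tighter threshold of 1:
print(max_deviation_tb(3500, 1))   # 35.0 TB
# One 350 TB rack with the default threshold of 10:
print(max_deviation_tb(350, 10))   # 35.0 TB
```

Whether the per-rack figure actually bounds rack-to-rack skew is exactly the open question above; the sketch only shows how the percentages turn into TB figures.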


> In the ideal case with replication factor 2, with two nodes of 12 TB and 72 TB you will
> be able to have only 12 TB of replicated data.

Yes, this is true for a cluster of exactly two nodes with 12 TB and 72 TB, but not for a
cluster with more than two nodes.

> 
> In my opinion, the best way is to use multiple racks. Nodes within a rack must have
> identical capacity, and the racks must have identical total capacity.
> For example:
> 
> rack1: 1 node with 72Tb
> rack2: 6 nodes with 12Tb
> rack3: 3 nodes with 24Tb
> 
> It helps with balancing, because the duplicate (replica) block must be placed on another rack.
> 
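The suggested layout does work out to identical rack totals, which is the stated requirement; a quick sketch (plain Python, names are mine):

```python
# Verify the suggested rack layout gives equal total capacity per rack.
racks = {
    "rack1": [72],       # 1 node with 72 TB
    "rack2": [12] * 6,   # 6 nodes with 12 TB
    "rack3": [24] * 3,   # 3 nodes with 24 TB
}
totals = {name: sum(nodes) for name, nodes in racks.items()}
print(totals)  # each rack totals 72 TB
```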

The same question I asked earlier in this message: does using multiple racks with the
default balancer threshold minimize the difference between racks?

> Why did you select HDFS? Maybe Lustre, CephFS, or something else is a better choice.

It wasn't my decision, and I probably can't change it now. I am new to this cluster and am
trying to understand a few issues. I will explore the other options you mentioned.


