hbase-user mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: newbie question on disk usage on node with different disk size
Date Thu, 26 Nov 2009 21:55:26 GMT
The HDFS balancer moves blocks so they are evenly distributed across the cluster. It must be run
manually, and it won't do what you want once the datanode with the smallest disk fills.

Neither the HDFS balancer nor the config var you mention is part of HBase. You can run the
balancer with the Hadoop scripts and specify a change to that config var in Hadoop's hdfs-site.xml.
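For reference, both settings discussed in this thread go in Hadoop's hdfs-site.xml on the datanodes. A minimal sketch with illustrative values (not recommendations; tune for your hardware):

```xml
<!-- hdfs-site.xml fragment: illustrative values only -->
<property>
  <!-- Bytes per volume to keep free for non-DFS use (default 0) -->
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- example: reserve 10 GB -->
</property>
<property>
  <!-- Max bandwidth each datanode may use for balancing, in bytes/sec
       (default 1048576, i.e. 1 MB/s) -->
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value> <!-- example: 10 MB/s -->
</property>
```

The balancer itself ships with Hadoop, not HBase: start it with bin/start-balancer.sh from the Hadoop installation and stop it with bin/stop-balancer.sh.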

    - Andy

On Thu Nov 26th, 2009 1:21 PM PST Tux Racer wrote:

>Thanks Andrew for your answer.
>
>I may have found the HDFS parameter I was looking for:
>
>dfs.datanode.du.reserved (default: 0): Reserved space in bytes per volume. 
>Always leave this much space free for non-DFS use.
>
>
>http://hadoop.apache.org/common/docs/current/hdfs-default.html
>https://issues.apache.org/jira/browse/HADOOP-1463
>
>The book "Pro Hadoop" page 115, also mentions the "balancer" service and 
>the
>
>dfs.balance.bandwidthPerSec
>
>parameter also documented at:
>
>http://hadoop.apache.org/common/docs/current/hdfs-default.html
>
>dfs.balance.bandwidthPerSec (default: 1048576): Specifies the maximum amount 
>of bandwidth, in bytes per second, that each datanode can utilize for 
>balancing purposes.
>
>
>
>However, I do not see the script "start-balancer.sh" in HBase.
>
>Would it be possible to use those Hadoop parameters in an HBase setup?
>
>Thanks
>TR
>
>
>
>Andrew Purtell wrote:
>> Hi,
>>
>> Short answer: No.
>>
>> Longer answer: HBase uses the underlying filesystem (typically HDFS) to replicate
>> and persist data. This is independent of the key space. Any special block placement
>> policy like the one you want would have to be handled by the filesystem, and to my
>> knowledge HDFS doesn't support it. HDFS also does not handle heterogeneous backing
>> storage well at the moment: it causes problems if one node fills before the others,
>> and there is not yet an automatic mechanism for moving blocks from full nodes to
>> less utilized ones, though I see there is an open issue for that:
>> http://issues.apache.org/jira/browse/HDFS-339. I wouldn't recommend a setup like
>> the one you propose.
>>
>>    - Andy
>>
>>
>>
>> ________________________________
>> From: Tux Racer <tuxracer69@gmail.com>
>> To: hbase-user@hadoop.apache.org
>> Sent: Thu, November 26, 2009 11:14:15 AM
>> Subject: newbie question on disk usage on node with different disk size
>>
>> Hello Hbase Users!
>>
>> I am trying to find some pointers on how to configure the HBase region servers,
>> and in particular how the disk will be filled on each node.
>>
>> Say for instance that I have a small cluster of 3 nodes:
>> node 1 has a 100 GB disk
>> node 2 has a 200 GB disk
>> and node 3 has a 300 GB disk
>>
>> Is there a way to tell HBase that it should store the keys proportionally to
>> each node's disk space?
>> (i.e. to have, at some stage, each disk filled to 50%: 50/100/150 GB of space used)
>>
>> Or is that a pure Hadoop configuration question?
>> I looked at the files in the ~/hbase-0.20.1/conf/ folder with no luck.
>>
>> Thanks
>> TR
>>
>>
>
