hadoop-common-dev mailing list archives

From "Billy" <sa...@pearsonwholesale.com>
Subject Re: Question about HDFS allocations
Date Mon, 31 Dec 2007 22:06:08 GMT
There is also a script that has been added, but it's not in a release yet; it's in trunk:

start-balancer.sh

It's in the bin folder.

This is from the source code:

* To start:
*     bin/start-balancer.sh [-threshold <threshold>]
*     Example: bin/start-balancer.sh
*         start the balancer with a default threshold of 10%
*     bin/start-balancer.sh -threshold 5
*         start the balancer with a threshold of 5%
* To stop:
*     bin/stop-balancer.sh
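
As I understand it, the threshold is how far each datanode's disk usage is
allowed to end up from the average utilization of the cluster. A rough sketch
of a session (the log file name is my guess from how the other daemon scripts
name their logs):

    bin/start-balancer.sh -threshold 5   # balance to within 5% of the cluster average
    tail -f logs/*balancer*.log          # watch progress; exact log name may vary
    bin/stop-balancer.sh                 # stop it early if needed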

Billy

"Bryan Duxbury" <bryan@rapleaf.com> wrote in 
message news:DF4AA7D7-ACD4-42B6-B9E3-7F3779CEF5DE@rapleaf.com...
> We've been doing some testing with HBase, and one of the problems we ran
> into was that our machines are not homogeneous in terms of disk capacity.
> A few of our machines only have 80 GB drives, where the rest have 250s.
> As such, as the equal distribution of blocks went on, these smaller
> machines filled up first, completely overloading the drives, and came to
> a crashing halt. Since one of these machines was also the namenode, it
> broke the rest of the cluster.
>
> What I'm wondering is if there should be a way to tell HDFS to only use
> something like 80% of available disk space before considering a machine
> full. Would this be a useful feature, or should we approach the problem
> from another angle, like using a separate HDFS data partition?
>
> -Bryan
> 
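
For the question about capping disk usage: I think there is also a datanode
config property for this, dfs.datanode.du.reserved, which leaves a fixed
number of bytes per volume free for non-DFS use. A rough sketch for
hadoop-site.xml, assuming the property is in your release (check
hadoop-default.xml for the exact name; the 10 GB value is just an example):

    <property>
      <name>dfs.datanode.du.reserved</name>
      <!-- example: keep 10 GB (10737418240 bytes) per volume free for non-DFS use -->
      <value>10737418240</value>
    </property>

That would not fix an uneven block distribution by itself, but it should keep
the smaller drives from filling all the way up.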



