hbase-user mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: newbie question on disk usage on node with different disk size
Date Thu, 26 Nov 2009 19:40:58 GMT
Hi,

Short answer: No.

Longer answer: HBase uses the underlying filesystem (typically HDFS) to replicate and persist
data. This is independent of the key space. Any special block placement policy like the one
you want would have to be handled by the filesystem, and to my knowledge HDFS doesn't support
it. HDFS also does not handle heterogeneous backing storage well at the moment: it causes
problems if one node fills before the others, and there is not yet an automatic mechanism for
moving blocks from full nodes to less utilized ones, though I see there is an issue for that:
http://issues.apache.org/jira/browse/HDFS-339. I wouldn't recommend a setup like the one you
propose.
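
One partial mitigation worth knowing about (an illustration, not a recommendation for your
setup) is to reserve some space on each DataNode so the smallest disk is not filled to the
brim: the dfs.datanode.du.reserved property in hdfs-site.xml tells HDFS how many bytes per
volume to leave free for non-DFS use. The 10 GB value below is just an example figure:

```xml
<!-- hdfs-site.xml on each DataNode: reserve ~10 GB per volume for non-DFS use.
     The value is in bytes; pick it per node to suit that node's disk size. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```

This only stops a node from filling completely; it does not make block placement
proportional to disk size.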

   - Andy



________________________________
From: Tux Racer <tuxracer69@gmail.com>
To: hbase-user@hadoop.apache.org
Sent: Thu, November 26, 2009 11:14:15 AM
Subject: newbie question on disk usage on node with different disk size

Hello HBase Users!

I am trying to find some pointers on how to configure the HBase region servers, and in
particular how the disk will be filled on each node.

Say for instance that I have a small cluster of 3 nodes:
node 1 has a 100 GB disk,
node 2 has a 200 GB disk,
and node 3 has a 300 GB disk.

Is there a way to tell HBase that it should store the keys proportionally to the node disk
space?
(i.e. to have at some stage each disk filled at 50%: 50/100/150 GB of space used)

Or is that a pure Hadoop configuration question?
I looked at the files in the ~/hbase-0.20.1/conf/ folder with no luck.

Thanks
TR


      