hbase-user mailing list archives

From Alan Chaney <a...@mechnicality.com>
Subject Re: HBase & BigTable + History: Can it run decently on a 512MB machine? What's the difference between the two?
Date Mon, 05 Mar 2012 20:23:09 GMT
On 3/5/2012 11:39 AM, D S wrote:
> On 3/5/12, Michael Drzal <mdrzal@gmail.com> wrote:
>> Y
> Are HBase's configuration options robust enough that it could go back
> and run well on those 2003 specs with a bit of tweaking, if that was
> what was desired?

What do you mean by "run well"? Run as well as BigTable would have done on 
the same machines? (Probably only someone who worked on BigTable would be in 
a position to comment on that.) Run without crashing? Run at XXX I/O 
operations per second?

Since 2003, roughly speaking, at the same price point for a "commodity" 
machine:

- network I/O has increased by a factor of 10: 100Mbps was typical in such 
  a machine, now 1Gbps is typical and 10Gbps is available.
- disk I/O has increased by a factor of about 5 to 10 (3Gb/s SATA vs 
  ATA-100, plus faster rotation and seek times).
- disk price per GB has dropped by about a factor of 10.
- RAM performance has increased by a factor of somewhere between 5 and 10.
- CPU performance for a typical "commodity" machine has gone from, say, a 
  1GHz single core to a 2.5-3GHz quad- or 8-core part, so say 20-30x 
  overall (roughly 3x the clock times 8 cores is ~24x, before counting 
  per-clock improvements).

Add to that the fact that a lot of people on this list use virtualized 
instances, and the comparison gets even more complicated and confusing.

What's your point? Do you want to know how to set up a minimal HBase node 
which works on a 512MB machine? Purely for testing purposes I've run a VM 
with only 750MB of RAM and it worked, but I wasn't pushing very much 
data through it.
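
If that's the goal, something along these lines is where I'd start. This is 
only a sketch: standalone mode on the local filesystem, with the JVM heap 
capped well below the physical RAM. The 256MB heap figure and the 
file:///tmp path are assumptions for illustration, not recommendations.

    # conf/hbase-env.sh - keep the heap well below the machine's 512MB,
    # leaving headroom for the OS (256 is an assumed figure, tune to taste)
    export HBASE_HEAPSIZE=256

    <!-- conf/hbase-site.xml - standalone mode: local filesystem, no HDFS,
         master, regionserver and ZooKeeper all in the one JVM -->
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>file:///tmp/hbase-test</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
      </property>
    </configuration>

Start it with bin/start-hbase.sh, then poke at it from bin/hbase shell 
(create 't1', 'f1' followed by a few puts) and see whether it stays up under 
the kind of load you actually care about - that's about the only meaningful 
test on a box that small.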

Alan
