hbase-dev mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Hypertable claiming upto >900% random-read throughput vs HBase
Date Wed, 15 Dec 2010 19:11:59 GMT
Hi,

We don't have the test source code, so it isn't very objective.  However,
I believe there are two things which help them:
- They are able to harness larger amounts of RAM, so they are really
just testing that vs HBase
- There have been substantial performance improvements in HBase since
the version they tested against.  I'm talking about 5x speedups in
some scan cases.

With those two things I believe we blunt the difference substantially,
but without the source it is impossible to tell.

Finally, aside from the speed issues, there are the community and
Hadoop integration aspects. Once you get past the raw speed, you
might miss the great MapReduce hookups, the diverse third-party
ecosystem around HBase, and the size/helpfulness of the community in
general.

Good luck with your evals,
-ryan

On Wed, Dec 15, 2010 at 11:00 AM, Gaurav Sharma
<gaurav.gs.sharma@gmail.com> wrote:
> Folks, my apologies if this has been discussed here before, but can someone
> please shed some light on how Hypertable is claiming up to 900% higher
> throughput on random reads and up to 1000% on sequential reads in their
> performance evaluation vs HBase (modeled after the perf-eval test in section
> 7 of the Bigtable paper):
> http://www.hypertable.com/pub/perfeval/test1 (section: System Performance
> Difference)
>
> For one, I noticed they are running CentOS 5.2 on 1.8 GHz dual-core
> Opterons with 10 GB of RAM. There's also no posting date on the blog post.
> It has been a while since I checked, but YCSB did not have support for
> Hypertable testing. The numbers do seem a bit too good to be true :)
>
> -Gaurav
>
