hbase-dev mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Hypertable claiming upto >900% random-read throughput vs HBase
Date Wed, 15 Dec 2010 19:22:32 GMT
So if that is the case, I'm not sure how that is a fair test.  One
system reads from RAM, the other from disk.  The results are as expected.

Why not test one system with SSDs and the other without?

It's really hard to avoid an apples/oranges comparison. Even if you run
the same workloads on two diverse systems, you are not testing code
quality; you are testing the overall systems and other factors.

As the G1 GC improves, I expect our ability to use larger and larger heaps
will blunt the advantage of a C++ program using malloc.
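As a rough sketch of the kind of tuning being discussed, enabling G1 with a
large heap for HBase region servers could look like the excerpt below in
hbase-env.sh. The 32 GB heap and 100 ms pause target are illustrative
placeholders, not values recommended anywhere in this thread:

```shell
# Hypothetical hbase-env.sh excerpt: run region servers on G1 with a large heap.
# Heap size (-Xms/-Xmx) and pause target are illustrative, not tuned values.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms32g -Xmx32g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100"
```

Fixing -Xms equal to -Xmx avoids heap resizing during the run, which keeps a
benchmark comparison of this sort more stable.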


On Wed, Dec 15, 2010 at 11:15 AM, Ted Dunning <tdunning@maprtech.com> wrote:
> From the small comments I have heard, the RAM versus disk difference is
> mostly what I have heard they were testing.
> On Wed, Dec 15, 2010 at 11:11 AM, Ryan Rawson <ryanobjc@gmail.com> wrote:
>> We don't have the test source code, so it isn't very objective.  However
>> I believe there are two things which help them:
>> - They are able to harness larger amounts of RAM, so they are really
>> just testing that vs HBase
