hbase-dev mailing list archives

From Gaurav Sharma <gaurav.gs.sha...@gmail.com>
Subject Re: Hypertable claiming upto >900% random-read throughput vs HBase
Date Wed, 15 Dec 2010 19:44:13 GMT
Thanks Ryan and Ted. I also think that if they were using tcmalloc, it would
have given them a further advantage, but as you said, not much is known about
the test source code.

On Wed, Dec 15, 2010 at 2:22 PM, Ryan Rawson <ryanobjc@gmail.com> wrote:

> So if that is the case, I'm not sure how that is a fair test.  One
> system reads from RAM, the other from disk.  The results are as expected.
>
> Why not test one system with SSDs and the other without?
>
> It's really hard to avoid an apples-to-oranges comparison.  Even if you
> are running the same workloads on 2 diverse systems, you are not testing
> code quality, you are testing the overall systems and other issues.
>
> As G1 GC improves, I expect our ability to use larger and larger heaps
> would blunt the advantage of a C++ program using malloc.
>
> -ryan
>
> On Wed, Dec 15, 2010 at 11:15 AM, Ted Dunning <tdunning@maprtech.com>
> wrote:
> > From the small comments I have heard, the RAM versus disk difference is
> > mostly what I have heard they were testing.
> >
> > On Wed, Dec 15, 2010 at 11:11 AM, Ryan Rawson <ryanobjc@gmail.com>
> wrote:
> >
> >> We don't have the test source code, so it isn't very objective.  However,
> >> I believe there are 2 things which help them:
> >> - They are able to harness larger amounts of RAM, so they are really
> >> just testing that vs HBase
> >>
> >
>
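For context on Ryan's G1 remark above: a large Java heap only becomes practical when GC pauses stay bounded, which is what G1 was meant to deliver. A minimal sketch of what enabling G1 with a bigger heap on an HBase region server might have looked like via hbase-env.sh (the heap size is hypothetical, and the suggestion is this editor's illustration, not something from the thread; on JDK 6, G1 was experimental and required an unlock flag):

```shell
# hbase-env.sh -- illustrative sketch only.
# The JVM flags below are real HotSpot options, but the 16 GB heap
# is a hypothetical figure chosen for the example.
export HBASE_HEAPSIZE=16384   # region server heap, in MB
# G1 shipped as experimental in JDK 6, hence UnlockExperimentalVMOptions.
export HBASE_OPTS="$HBASE_OPTS -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC"
```

Whether a heap that size actually blunts the advantage of a C++ process using malloc/tcmalloc depends on pause behavior under the given workload, which is exactly the kind of thing an objective benchmark would need to measure.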
