hbase-dev mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: hbase vs bigtable
Date Sat, 28 Aug 2010 20:11:27 GMT
Depending on the workload, parallelism doesn't seem to matter much. On my
test cluster of 8-core Nehalem machines with 12 disks each, I'm always
network bound far before I'm CPU bound for most benchmarks, i.e. jstacks
show threads mostly waiting for IO to happen, not blocked on locks.
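
(As a rough sketch of what I mean by "waiting for IO, not blocked on
locks": the JDK's ThreadMXBean exposes the same per-thread states that
jstack prints, so you can summarize them in-process. This is just an
illustration of the idea, not anything HBase-specific.)

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

/**
 * Rough in-process equivalent of eyeballing a jstack dump: count how many
 * threads are RUNNABLE versus parked WAITING/TIMED_WAITING (typically on
 * IO or condition variables) versus BLOCKED on a monitor.
 */
public class ThreadStateSummary {
  public static void main(String[] args) {
    ThreadMXBean mx = ManagementFactory.getThreadMXBean();
    // false, false: skip locked-monitor/synchronizer details, states only
    ThreadInfo[] infos = mx.dumpAllThreads(false, false);

    Map<Thread.State, Integer> counts = new EnumMap<>(Thread.State.class);
    for (ThreadInfo info : infos) {
      if (info == null) {
        continue;
      }
      counts.merge(info.getThreadState(), 1, Integer::sum);
    }

    for (Map.Entry<Thread.State, Integer> e : counts.entrySet()) {
      System.out.printf("%-15s %d%n", e.getKey(), e.getValue());
    }
  }
}

A small RUNNABLE count next to a pile of WAITING/TIMED_WAITING threads is
the IO-bound picture; lots of BLOCKED threads would point at lock
contention instead.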

Is that not the case for your production boxes?

On Sat, Aug 28, 2010 at 1:07 PM, Ryan Rawson <ryanobjc@gmail.com> wrote:

> bigtable was written for 1-core machines, with ~100 regions per box.
> Thanks to CMS we generally can't run on < 4 cores, and at this point
> 16-core machines (with HTT) are becoming pretty standard.
>
> The question is, how do we leverage the ever-increasing sizes of
> machines and differentiate ourselves from bigtable?  What did Google
> do (if anything) to adapt to 16-core machines?  We should be able
> to do quite a bit on a 20 or 40 node cluster.
>
> more thread parallelism?
>



-- 
Todd Lipcon
Software Engineer, Cloudera
