hbase-user mailing list archives

From: Juhani Connolly <juha...@gmail.com>
Subject: Re: 0.92 and Read/writes not scaling
Date: Mon, 26 Mar 2012 17:02:30 GMT
On Tue, Mar 27, 2012 at 1:29 AM, Stack <stack@duboce.net> wrote:
> On Mon, Mar 19, 2012 at 3:41 AM, Juhani Connolly <juhanic@gmail.com> wrote:
>> Hi,
>>
>> We're running into a brick wall: our throughput numbers will not
>> scale as we increase server counts, using both custom in-house tests
>> and YCSB.
>>
>
> Does the above statement still hold?  We've moved past the above and
> we are now on to 'writes are slow'?
>
>> We're using HBase 0.92 on Hadoop 0.20.2 (we also experienced the same
>> issues using 0.90 before switching our testing to this version).
>>
>> Our cluster consists of:
>> - Namenode and hmaster on separate servers, 24 core, 64gb
>> - up to 11 datanode/regionservers, 24 core, 64gb, 4 * 1tb disks (hope
>> to get this changed)
>>
>
> You can put the master and namenode on the same machine.
>
> Yes, more disks are better (see the GBIF blog cited in another thread).
>
>
>
>> - load 10m rows
>
> Are the 10m rows for sure spread across all regions?
>
>
>> Delaying WAL flushes gives a small throughput bump but it doesn't
>> scale.
>>
>
> Why does it not scale?
>
> St.Ack

This was on our old setup; things weren't scaling because there
weren't enough regions. I had originally meant to start the other
thread because the problem was fundamentally different, so sorry for
the confusion.
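
For anyone hitting the same wall: since the problem here was simply too
few regions, one way to get enough of them up front is to pre-split the
table at creation time so the 10m loaded rows hit every server from the
start. A rough sketch against the 0.92 Java client; the YCSB-style
table/family names, key range, and region count are illustrative, not
what we actually ran:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PresplitTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("usertable");
    desc.addFamily(new HColumnDescriptor("family"));

    // Pre-split into ~8 regions per regionserver across 11 nodes so
    // writes spread immediately instead of hot-spotting the first
    // region. Assumes row keys are evenly distributed over this range.
    admin.createTable(desc,
        Bytes.toBytes("user0000000000"),
        Bytes.toBytes("user9999999999"),
        11 * 8);
  }
}

The region list in the master web UI should then show whether the rows
really do end up spread across all regions (Stack's question above).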

In summary, the problem is no longer "not scaling" (as we increase
regions to match the available CPUs it seemingly does scale; the base
numbers are just miserable). Instead it is now "since switching to
HDFS 0.23, reads are good and scaling, but writes are miserably slow
(approx 2000 per region)".
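
And for anyone wanting to reproduce the WAL-flush delaying mentioned in
the quoted message, one way to do it is the per-table deferred log
flush setting. Again a rough sketch with the 0.92 client, table name
illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class DeferWalFlush {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    byte[] table = Bytes.toBytes("usertable");

    // With deferred log flush, edits still go to the WAL but the sync
    // happens on a timer (hbase.regionserver.optionallogflushinterval)
    // rather than on every write: a small data-loss window traded for
    // write throughput.
    HTableDescriptor desc = admin.getTableDescriptor(table);
    desc.setDeferredLogFlush(true);
    admin.disableTable(table);
    admin.modifyTable(table, desc);
    admin.enableTable(table);
  }
}

As said above though, this only bought us a small throughput bump; it
doesn't change the scaling picture.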
