hbase-user mailing list archives

From stack <st...@duboce.net>
Subject Re: Problems with write performance (25kb rows)
Date Wed, 13 Jan 2010 23:23:07 GMT
On Wed, Jan 13, 2010 at 4:35 AM, Dmitriy Lyfar <dlyfar@gmail.com> wrote:

> And ulimit is 32K for sure.


Yes, I see that in the log.



> Speed still the same (about 1K rows per second).
>

This seems low for your 6-node cluster.
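(A rough back-of-the-envelope, assuming ~25KB per row as per your subject line:
1,000 rows/sec * 25KB is only about 25MB/sec aggregate across the cluster, i.e.
roughly 4MB/sec per regionserver on 6 nodes, which is well below what the disks
and network should be able to take.)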

If you look at the servers, are they CPU- or IO-bound in any way?
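Something like top and iostat on each node will usually tell you (nothing
HBase-specific here, just the usual tools):

  top           # user/system cpu vs. iowait (%wa)
  iostat -x 5   # high await/%util on the data disks means you are io-bound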

How many clients do you have running now?

This is not a new table, right?  (I see there is an existing table in your
cluster, looking at the regionserver log.)  Is it an existing table with many
regions?

You have upped the handlers in HBase.  Have you done the same for the
datanodes, in case we are bottlenecking there?
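For reference, these are the two settings I mean (the values below are only
examples, not recommendations for your cluster):

  hbase-site.xml:
    <property>
      <name>hbase.regionserver.handler.count</name>
      <value>30</value>
    </property>

  hdfs-site.xml:
    <property>
      <name>dfs.datanode.handler.count</name>
      <value>10</value>
    </property>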



> Random ints play the role of row keys now (i.e. uniform random distribution
> on (0, 100 * 1000)).
> What do you think, is 5GB for hbase and 2GB for hdfs enough?
>

Yes, that should be good.  Writing, you are not using that memory in the
regionserver though; maybe you should go with bigger regions if you have 25k
cells.  Are you using compression?
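If you do try bigger regions, the knob is hbase.hregion.max.filesize in
hbase-site.xml (the default is 256MB), and compression is set per column
family from the shell.  The table and family names below are just
placeholders, and LZO needs the native libs installed (GZ works out of the
box):

  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>1073741824</value>   <!-- 1GB, example only -->
  </property>

  hbase> disable 'yourtable'
  hbase> alter 'yourtable', {NAME => 'yourfamily', COMPRESSION => 'LZO'}
  hbase> enable 'yourtable'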

I took a look at your regionserver log.  It's from just after the regionserver
opened.  I see no activity other than the opening of a few regions.  These
regions do happen to have a lot of store files, so we're starting up
compactions, but that all should be fine.  I'd be interested in seeing a log
snippet from a regionserver under load.

St.Ack
