hbase-user mailing list archives

From: Stack <st...@duboce.net>
Subject: Re: mslab enabled jvm crash
Date: Thu, 26 May 2011 17:41:04 GMT
On Thu, May 26, 2011 at 6:08 AM, Wayne <wav100@gmail.com> wrote:
> I think our problem is the load pattern. Since we use a very controlled
> queue-based method to do work, our Python code is relentless in terms of
> keeping the pressure up. In our testing we will queue up 500k messages with
> 10k writes per message that all get written to 3 column families (primary
> plus 2 secondary indexes).

What's the insert batch size, do you think?
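
(For reference, this is roughly the kind of batching I mean; a minimal
sketch against the 0.90-era Java client, with a made-up table name and an
illustrative 1,000-put batch size. Your Python client will look different,
but the same knob exists.)

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedLoader {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // hypothetical table name
    List<Put> batch = new ArrayList<Put>();
    for (int i = 0; i < 10000; i++) {             // one "message" of 10k writes
      Put p = new Put(Bytes.toBytes("row-" + i));
      p.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("v" + i));
      batch.add(p);
      if (batch.size() == 1000) {                 // keep each request modest
        table.put(batch);                         // ship this batch to the servers
        batch.clear();
      }
    }
    if (!batch.isEmpty()) table.put(batch);       // send the remainder
    table.flushCommits();                         // flush anything still buffered
    table.close();
  }
}

Smaller batches mean more round trips, but each request is a smaller bite
for a regionserver to hold on to while it is backed up.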

> This means there will be 15 billion writes waiting to go
> into hbase. It will take us days to load this and the JVM will eventually
> crumble. Even if we are lucky enough to avoid an overly long GC pause, we
> eventually see OOM problems crop up as well.

What version of hbase?  If your batches are large and they get backed
up, they could be hanging out in queues in hbase that will hold up to
(number of handlers * 100) requests (see HBASE-3813, mitigated in 0.90.3).
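
To make that concrete, here is a back-of-the-envelope sketch; the handler
count, queue depth, and per-cell size below are assumptions for illustration,
not numbers from your cluster:

public class QueueBacklog {
  public static void main(String[] args) {
    int handlers = 10;            // hbase.regionserver.handler.count (example value)
    int queuedPerHandler = 100;   // queued requests per handler, pre-HBASE-3813
    int putsPerRequest = 10000;   // 10k writes per message, from your description
    long bytesPerPut = 200;       // assumed average KeyValue size

    long backlogBytes = (long) handlers * queuedPerHandler * putsPerRequest * bytesPerPut;
    System.out.println("Potential backlog: " + (backlogBytes >> 20) + " MB");
    // With these numbers that's roughly 1.9 GB of queued edits sitting in the
    // regionserver before anything is written, on top of memstores.
  }
}

If your real numbers are anywhere near that, backed-up batches alone can eat
a sizeable chunk of an 8g heap before memstores and block cache are counted.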

Is bulk load out of the question for getting you off the ground because
you have these secondary indices?


> Again, our hardware is taking it easy during this process except for the JVM
> and its 8g heap. The heat should be on the disk and it is not, as we are not
> really pushing it at all. I could and would like to turn it up and have
> 6 writers per node instead of 4, but I know the nodes cannot sustain that.
> What we need to find is what level of writes our cluster can sustain for
> weeks at a time while still being fast with reads and not going AWOL. Once we
> find that sweet spot we can try to turn up the heat... but we never seem to
> find it. Between GC pauses and OOMs we have never run under load for long
> enough to gain "confidence".

Yes. We have a bunch of work to do still (I'll spare you the lecture
on this being an open-source project, blah, blah, volunteers...).

St.Ack
