hbase-user mailing list archives

From Ruben Quintero <rfq_...@yahoo.com>
Subject Re: Hbase: GETs are very slow
Date Fri, 30 Apr 2010 17:44:05 GMT
We're running 0.20.3, and HBase has a 6 GB heap.

With block caching on, it seems we were running out of memory. A region server would temporarily drop out (usually when it attempted to split), and that caused a chain reaction when it tried to recover: the heap would surge and trigger heavy garbage collection, nodes would drop in and out, and they'd get overloaded when they rejoined. We found a post in a mailing list that recommended turning off block caching, and the cluster ran well after that.
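
In case it helps others reading the archives, "turning off block caching" here means the per-column-family setting. A rough sketch of creating a table that way with the 0.20-era Java client API (the table and family names below are just placeholders, not our actual schema):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Illustrative only: create a table whose single column family
// has the block cache disabled, so reads bypass the LRU block cache.
public class CreateTableNoBlockCache {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());

    HColumnDescriptor family = new HColumnDescriptor("content"); // placeholder family name
    family.setBlockCacheEnabled(false);

    HTableDescriptor desc = new HTableDescriptor("testtable");   // placeholder table name
    desc.addFamily(family);
    admin.createTable(desc);
  }
}

The global hfile.block.cache.size setting is a separate knob; the per-family flag above is the table-level way to do it.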

As for swap, that was my first guess. How can I make sure it's not swapping, or is there a
way to see if it is?
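
(For reference, the only rough check I know of is to run free or vmstat on each node and see whether any swap is in use; a minimal, purely illustrative Java sketch of the same idea, reading /proc/meminfo on Linux, is below. I'm not sure whether there's a better, recommended way.)

import java.io.BufferedReader;
import java.io.FileReader;

// Illustrative only: report how much swap is currently in use on a Linux box.
// If SwapTotal minus SwapFree is consistently non-zero, something is swapping.
public class SwapCheck {
  public static void main(String[] args) throws Exception {
    long swapTotal = -1, swapFree = -1;
    BufferedReader in = new BufferedReader(new FileReader("/proc/meminfo"));
    String line;
    while ((line = in.readLine()) != null) {
      // Lines look like: "SwapTotal:     8388604 kB"
      String[] parts = line.trim().split("\\s+");
      if (line.startsWith("SwapTotal:")) swapTotal = Long.parseLong(parts[1]);
      if (line.startsWith("SwapFree:"))  swapFree  = Long.parseLong(parts[1]);
    }
    in.close();
    System.out.println("Swap in use: " + (swapTotal - swapFree) + " kB");
  }
}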

Thanks,

- Ruben




________________________________
From: Jean-Daniel Cryans <jdcryans@apache.org>
To: hbase-user@hadoop.apache.org
Sent: Fri, April 30, 2010 12:27:37 PM
Subject: Re: Hbase: GETs are very slow

Which version? How much heap was given to HBase?

WRT block caching, I don't see how it could impact uploading in any
way; you should enable it. What was the problem inserting 1B rows,
exactly? How were you running the upload?

Are you making sure there's no swap on the machines? That kills java
performance faster than you can say "hbase" ;)

J-D

On Fri, Apr 30, 2010 at 8:36 AM, Ruben Quintero <rfq_dev@yahoo.com> wrote:
> Hi,
>
> I have a hadoop/hbase cluster running on 9 machines (only 8 GB RAM, 1 TB drives), and have
> recently noticed that Gets from HBase have slowed down significantly. I'd say at this point
> I'm not getting more than 100/sec when using the HBase Java API (see the sketch at the end
> of this message). DFS-wise, there's plenty of space left (using less than 10%), and all of
> the servers seem okay. The tables use LZO and have the block cache disabled (we were having
> problems inserting up to a billion rows with it on, and read in the mailing list somewhere
> that disabling it might help).
>
> The primary table has only 4 million rows at the moment. I created a new test table with
> only 200,000 rows, and it was also running at about 100 Gets/sec.
>
> I'm not sure what the problem could be (paging?), or whether there's some configuration that could be adjusted.
>
> Any ideas? I can show our configuration if that's helpful; I just wasn't sure what info
> would be useful and what would be extraneous.
>
> Thanks,
>
> - Ruben
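
For anyone following the thread, the read path in question is roughly the following, using the 0.20-era Java client API. The table, family, and row names are placeholders, not our real schema; this is just a minimal sketch of a single-row Get.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: fetch a single row and report whether anything came back.
public class SingleGetExample {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "testtable"); // placeholder table
    Get get = new Get(Bytes.toBytes("row-000123"));                   // placeholder row key
    get.addFamily(Bytes.toBytes("content"));                          // restrict to one family
    Result result = table.get(get);
    System.out.println(result.isEmpty() ? "row not found" : "row found");
  }
}

We issue Gets along these lines through the Java API, which is where the ~100/sec figure comes from.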



      