hbase-user mailing list archives

From "Jonathan Gray" <jl...@streamy.com>
Subject RE: Question about recommended heap sizes
Date Wed, 24 Sep 2008 23:19:15 GMT
Daniel,

I have seen similar issues during large scale imports.  For now, we have
gotten around the issue by increasing the regionserver heap size to 2GB.  My
slave machines also have 4GB of memory.
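
If it helps, we bump the heap in conf/hbase-env.sh along these lines (the
value is in MB; this assumes a stock install layout):

    # conf/hbase-env.sh -- maximum heap, in MB, for the HBase daemons on this node
    export HBASE_HEAPSIZE=2000

The region servers need to be restarted to pick up the change.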

How many total regions did you have when you received the OOME?


Jonathan Gray

-----Original Message-----
From: Daniel Ploeg [mailto:dploeg@gmail.com] 
Sent: Wednesday, September 24, 2008 3:55 PM
To: hbase-user@hadoop.apache.org
Subject: Question about recommended heap sizes

Hi all,

I was running a test on our local HBase cluster (1 master node, 4 region
servers) and I ran into some OutOfMemory exceptions. Basically, one of the
region servers went down first, then the master node followed (ouch!) as I
was inserting the data for the test.

I was still using the default heap size and I would like to get some
recommendations as to what I should raise it to. My regionservers each have
4GB and the master node has 8GB. It may be useful if I describe the tests
that I was trying to do, so here goes:

The tests ramp up the number of rows to determine the query latency of my
particular usage pattern. Each level of testing uses a different number of
rows (1K, 10K and 100K). My exception occurred during the 10K-row data
population (about 3300 rows in).

My data is a table with a single column family and 10K column instances per
row. Each column contains approximately 500-1000 bytes of data.
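
For illustration, the population loop looks roughly like this -- a simplified
sketch using the HTable/BatchUpdate client API, with made-up table and column
names rather than my real schema:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.BatchUpdate;

    public class Populate {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(new HBaseConfiguration(), "latency_test");
        byte[] value = new byte[750];   // each column carries ~500-1000 bytes
        for (int row = 0; row < 10000; row++) {
          BatchUpdate update = new BatchUpdate("row-" + row);
          for (int col = 0; col < 10000; col++) {
            // one column family ("data"), 10K columns per row
            update.put("data:col-" + col, value);
          }
          table.commit(update);         // one multi-MB batch per row
        }
      }
    }

Back-of-envelope, 10K columns at ~750 bytes each is roughly 7.5MB per row, so
the ~3300 rows that made it in before the failure amount to something on the
order of 25GB of raw data across the four region servers.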

I should note that the first level of testing, with 1K rows, was returning
average query responses of approximately 240ms.

Could someone please advise how large you think I should set my heap space
(and whether you think I should make any mods to the Hadoop heap as well)?
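
For what it's worth, I am assuming the Hadoop-side setting in question is
HADOOP_HEAPSIZE in conf/hadoop-env.sh (value in MB), along the lines of:

    # conf/hadoop-env.sh -- maximum heap, in MB, for the Hadoop daemons (e.g. DataNodes)
    export HADOOP_HEAPSIZE=1000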

Thanks,
Daniel

