hbase-user mailing list archives

From Bluemetrix Development <bmdevelopm...@gmail.com>
Subject Re: OOME Java heap space
Date Tue, 23 Feb 2010 18:40:59 GMT
OK, this probably explains it.
I've been loading large sets of data, so I'm sure there are some HFiles
that are too big.
It's all test data, so no worries about missing or corrupt data for now.

If this is the case, though, how big is too big? Or does it depend on my
disk/memory resources?
I'm currently using dynamic column qualifiers, so I could have been creating
rows with tens of millions of unique column qualifiers each.
Or, with other tables that use timestamps as another dimension to the data,
I could have been creating tens of millions of versions.
(I was trying to get HBase back up so I could count these numbers.)

What limits should I use for the time being on the number of qualifiers
and the number of timestamps/versions?
Or does this depend directly on the amount of RAM I have?
(At the moment I've only got older 2-CPU Xeons (with HT) and 8 GB of RAM;
three nodes in total, with one running both the master and a region server.)
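
In case it helps frame the question: for the versions side, here's roughly
how I'd cap stored versions at the column-family level, as a sketch against
the 0.20 client API (the "data" family name, the cap of 3, and the table
name are all placeholders, not my real schema):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class CapVersions {
      public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        // Keep at most 3 versions per cell in this family.
        HColumnDescriptor family = new HColumnDescriptor("data");
        family.setMaxVersions(3);
        // Create a new table carrying the capped family.
        HTableDescriptor table = new HTableDescriptor("UserData_capped");
        table.addFamily(family);
        admin.createTable(table);
      }
    }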

Thanks a million, guys.


On Tue, Feb 23, 2010 at 12:59 PM, Stack <stack@duboce.net> wrote:
> Looks like you have a corrupt record -- a record that has had a bit
> flipped or so, and you are trying to allocate memory to accommodate this
> oversized record -- or you managed to write something really big out
> to an HFile (the HFile doesn't look that big, though).
>
> Try iterating over the file to see if you can identify which record is the big one:
>
> ./bin/hbase org.apache.hadoop.hbase.io.hfile.HFile
>
> Run it with no arguments and it prints the usage.
>
> You can get it to dump out values if you pass it a filename.
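>
> For example, something like this (the path is just a placeholder for one
> of the HFiles from your lsr output; check the usage text for the exact
> arguments your version accepts):
>
>   ./bin/hbase org.apache.hadoop.hbase.io.hfile.HFile <path-to-an-hfile>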
>
> Otherwise, move the file aside.  If it's important to you, maybe we can
> figure out a means of skipping over the bad record.
>
> St.Ack
>
>
> On Tue, Feb 23, 2010 at 9:21 AM, Bluemetrix Development
> <bmdevelopment@gmail.com> wrote:
>> Thanks.
>> I've tried heap sizes of both 1 GB and 2 GB for both Hadoop and HBase
>> and got the same results either way.
>> Here's the lsr output:
>>
>> http://pastebin.com/YXcsSFc4
>>
>>
>> On Tue, Feb 23, 2010 at 12:09 PM, Jean-Daniel Cryans
>> <jdcryans@apache.org> wrote:
>>> Please run an HDFS lsr on /hbase/UserData_0216/1765145465/ and pastebin
>>> the result.
>>>
>>> Also consider using a heap size bigger than 1 GB (change that in hbase-env.sh).
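>>>
>>> For example:
>>>
>>>   hadoop fs -lsr /hbase/UserData_0216/1765145465/
>>>
>>> and, for the heap, in conf/hbase-env.sh (the value is in megabytes;
>>> 2000 here is just an illustration, size it to your 8 GB boxes):
>>>
>>>   export HBASE_HEAPSIZE=2000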
>>>
>>> J-D
>>>
>>> On Tue, Feb 23, 2010 at 7:34 AM, Bluemetrix Development
>>> <bmdevelopment@gmail.com> wrote:
>>>> Hi,
>>>> When trying to restart HBase, I'm getting the following in the region server logs:
>>>> http://pastebin.com/GPw6yt2G
>>>>
>>>> and cannot get HBase fully restarted.
>>>> I'm on the latest version, 0.20.3.
>>>> Where should I start digging to see what is causing this?
>>>> Thanks
>>>>
>>>
>>
>
