hbase-dev mailing list archives

From: Ryan Rawson <ryano...@gmail.com>
Subject: Re: load balancing considerations
Date: Wed, 11 Aug 2010 06:01:46 GMT
Use a tool like YourKit to grovel through that heap; the open source
tools are not really there yet.

But your stack trace tells a lot... the fatal allocation is in the
RPC layer.  Either a client is sending a massive value, or you have a
semi-hostile network client sending bytes to your open socket that
are being interpreted as the buffer size to allocate.  If you look at
the actual RPC code (any RPC code, really) there is often a 'length'
field which is then used to allocate a dynamic buffer.
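
For illustration, here is a minimal sketch of that pattern in Java --
hypothetical names, not the actual HBase RPC code -- first the
unchecked read, then the same read with a sanity cap:

  import java.io.DataInputStream;
  import java.io.IOException;

  public class LengthPrefixedReader {
    // Unvalidated length field: four garbage bytes from a stray client
    // can demand a multi-gigabyte allocation and OOM the process.
    static byte[] readUnsafe(DataInputStream in) throws IOException {
      int length = in.readInt();      // attacker/garbage controlled
      byte[] buf = new byte[length];  // allocates whatever was sent
      in.readFully(buf);
      return buf;
    }

    // Same read with an upper bound (the 64MB cap is an assumption),
    // so a bogus header fails fast instead of blowing out the heap.
    static final int MAX_FRAME = 64 * 1024 * 1024;

    static byte[] readChecked(DataInputStream in) throws IOException {
      int length = in.readInt();
      if (length < 0 || length > MAX_FRAME) {
        throw new IOException("bad frame length: " + length);
      }
      byte[] buf = new byte[length];
      in.readFully(buf);
      return buf;
    }
  }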

-ryan

On Tue, Aug 10, 2010 at 10:55 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> The compressed file is still big:
> -rw-r--r-- 1 hadoop users  809768340 Aug 11 05:49 java_pid26972.hprof.gz
>
> If you can tell me specific things to look for in the dump, I would collect
> them (through jhat) and publish.
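>
> For reference, something along these lines (flags approximate; jhat
> needs a heap roughly as large as the dump to parse it):
>
>   jhat -J-Xmx6g -port 7000 java_pid26972.hprof
>
> and then browse the results at http://localhost:7000.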
>
> Thanks
>
> On Tue, Aug 10, 2010 at 10:29 PM, Stack <stack@duboce.net> wrote:
>
>> On Tue, Aug 10, 2010 at 9:52 PM, Ted Yu <yuzhihong@gmail.com> wrote:
>> > Here are GC-related parameters:
>> > /usr/java/jdk1.6/bin/java -Xmx4000m -XX:+HeapDumpOnOutOfMemoryError
>> > -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
>> >
>>
>> You have > 2 CPUs per machine, I take it?  You could probably drop the
>> conservative -XX:+CMSIncrementalMode.
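>>
>> For example (assuming multi-core boxes; adjust to taste), the same
>> line minus the incremental mode flag:
>>
>>   /usr/java/jdk1.6/bin/java -Xmx4000m -XX:+HeapDumpOnOutOfMemoryError \
>>       -XX:+UseConcMarkSweepGC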
>>
>> > The heap dump is big:
>> > -rw------- 1 hadoop users 4146551927 Aug 11 03:59 java_pid26972.hprof
>> >
>> > Do you have ftp server where I can upload it ?
>> >
>>
>> Not really.  I was hoping you could put a compressed version under an
>> http server somewhere that I could pull from.  You might as well
>> include the GC log while you are at it.
>>
>> Thanks Ted,
>>
>> St.Ack
>>
>
