hadoop-common-user mailing list archives

From: Stefan Groschupf <...@101tec.com>
Subject: Re: Hadoop RPC call response post processing
Date: Tue, 28 Dec 2010 19:59:36 GMT
Hi Ted, 
I don't think the problem is allocation but garbage collection. 
When the GC kicks in, everything freezes. Of course, changing the GC algorithm helps a little.
Stefan 
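
For context, "changing the GC algorithm" on a HotSpot JVM of that era usually meant flags along these lines (illustrative only; the jar name is a placeholder, and the thread does not say which flags were actually used). The tenuring-distribution flag is also how one would answer Ted's question below:

```shell
# Hypothetical example: switch to the concurrent (CMS) collector to shorten
# stop-the-world pauses, and log GC details plus the tenuring distribution
# so the age profile of the Writable garbage can be inspected.
java -XX:+UseConcMarkSweepGC \
     -XX:+PrintGCDetails \
     -XX:+PrintTenuringDistribution \
     -jar my-hadoop-service.jar
```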



On Dec 27, 2010, at 11:21 PM, Ted Dunning wrote:

> I would be very surprised if allocation itself is the problem, as opposed to
> good old-fashioned excess copying.
> 
> It is very hard to write an allocator faster than the Java generational GC,
> especially for objects that are ephemeral.
> 
> Have you looked at the tenuring distribution?
> 
> On Mon, Dec 27, 2010 at 8:07 PM, Stefan Groschupf <sg@101tec.com> wrote:
> 
>> Hi All,
>> I've been browsing the RPC code for quite a while now, trying to find an entry
>> point / interceptor slot that lets me handle an RPC call response
>> writable after it has been sent over the wire.
>> Does anybody have an idea how to break into the RPC code from outside? All the
>> interesting methods are private. :(
>> 
>> Background:
>> Heavy use of the RPC allocates a huge number of Writable objects. We have seen
>> in multiple systems that the garbage collector can get so busy that the JVM
>> almost freezes for seconds. Things like ZooKeeper sessions time out in those
>> cases.
>> My idea is to create an object pool for writables. Borrowing an object from
>> the pool is simple, since that happens in our custom code; however, we do not
>> know when the returned writable has been sent over the wire and can be put back
>> into the pool.
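
The pooling idea can be sketched as follows. This is a minimal, self-contained illustration (the `ObjectPool` class and its `Factory` interface are hypothetical names, not Hadoop API), showing only the borrow/return mechanics; the hard part the thread is about, knowing *when* to return, is not solved here:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Hypothetical object pool sketch: recycles instances to reduce
 * allocation churn and thus GC pressure. Thread-safe via a
 * lock-free queue; creates a new object when the pool is empty.
 */
public class ObjectPool<T> {
    /** Factory callback used when the pool has no free object. */
    public interface Factory<T> {
        T create();
    }

    private final ConcurrentLinkedQueue<T> free = new ConcurrentLinkedQueue<T>();
    private final Factory<T> factory;

    public ObjectPool(Factory<T> factory) {
        this.factory = factory;
    }

    /** Take a pooled object, or create a fresh one if none is free. */
    public T borrow() {
        T obj = free.poll();
        return obj != null ? obj : factory.create();
    }

    /** Return an object to the pool for later reuse. */
    public void giveBack(T obj) {
        free.offer(obj);
    }
}
```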
>> A dirty hack would be to override the write(out) method in the writable,
>> assuming that is the last thing done with the writable; it turns out, though,
>> that this method is called in other cases too, e.g. to measure throughput.
>> 
>> Any ideas?
>> 
>> Thanks,
>> Stefan

