hbase-issues mailing list archives

From "Jurriaan Mous (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-13097) Netty PooledByteBufAllocator cause OOM in some unit test
Date Wed, 25 Feb 2015 15:34:04 GMT

    [ https://issues.apache.org/jira/browse/HBASE-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336635#comment-14336635 ]

Jurriaan Mous commented on HBASE-13097:

I was certainly aware that having multiple connections is heavy; that's why I cleaned up
a lot of Connections in tests in HBASE-12796. Netty recommends recycling EventLoopGroups
across all Bootstrap creations, but configurations can differ between connections, so
sharing them is not easy. Maybe we could detect when the same config options are used and
recycle RpcClients? The best option seems to be to limit the number of AsyncRpcClient, and
thus Connection, creations.
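The recycling pattern Netty recommends can be sketched as follows: one EventLoopGroup (and its thread pool) is created once and reused for every Bootstrap, rather than a group per connection. The class and method names below are illustrative, not from the HBase code:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class SharedGroupSketch {
    // A single EventLoopGroup shared by all clients; its threads are
    // created once, no matter how many bootstraps are built from it.
    private static final EventLoopGroup SHARED_GROUP = new NioEventLoopGroup();

    static Bootstrap newBootstrap() {
        return new Bootstrap()
                .group(SHARED_GROUP)              // recycled across bootstraps
                .channel(NioSocketChannel.class)
                .option(ChannelOption.TCP_NODELAY, true);
    }
}
```

The open question in the comment above is exactly the limitation of this sketch: it only works when all connections can live with the same group and options.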

[~Apache9] Are you sure each bootstrap has its own PooledByteBufAllocator? The bootstrap
creation links to the default static PooledByteBufAllocator instance, so it should be
reused. I think you meant the abundance of EventLoopGroup creations, each of which has its
own thread pool.

        .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
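For reference, `PooledByteBufAllocator.DEFAULT` is a static singleton, so every bootstrap configured with that option points at the same allocator instance; a minimal sketch (class name is illustrative):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.ChannelOption;

public class AllocatorSharingSketch {
    public static void main(String[] args) {
        // Two independent bootstraps, both pointed at the same static allocator.
        Bootstrap b1 = new Bootstrap()
                .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
        Bootstrap b2 = new Bootstrap()
                .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);

        // DEFAULT is one shared instance; no extra allocator (or its arenas)
        // is created per bootstrap.
        PooledByteBufAllocator a = PooledByteBufAllocator.DEFAULT;
        PooledByteBufAllocator b = PooledByteBufAllocator.DEFAULT;
        System.out.println(a == b); // true
    }
}
```

What does multiply per bootstrap is the per-thread PoolThreadCache, which is why the heap pressure tracks the number of EventLoopGroup threads rather than the number of allocators.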

> Netty PooledByteBufAllocator cause OOM in some unit test
> --------------------------------------------------------
>                 Key: HBASE-13097
>                 URL: https://issues.apache.org/jira/browse/HBASE-13097
>             Project: HBase
>          Issue Type: Bug
>          Components: IPC/RPC, test
>    Affects Versions: 2.0.0, 1.1.0
>            Reporter: zhangduo
> In some unit tests (such as TestAcidGuarantees) we create multiple Connection instances.
If we use AsyncRpcClient, there will be multiple netty Bootstraps, and every Bootstrap
has its own PooledByteBufAllocator.
> I haven't read the code in depth, but it uses some thread-local techniques, and jmap shows
io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the biggest thing on the heap.
> See https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt
> {noformat}
> 2015-02-24 23:50:29,704 WARN  [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(167): Detected
pause in JVM or host machine (eg GC): pause of approximately 20133ms
> GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms
> {noformat}

This message was sent by Atlassian JIRA
