cassandra-commits mailing list archives

From "Robert Stupp (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-11818) C* does neither recover nor trigger stability inspector on direct memory OOM
Date Sun, 22 May 2016 16:00:14 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15295611#comment-15295611 ]

Robert Stupp edited comment on CASSANDRA-11818 at 5/22/16 3:59 PM:
-------------------------------------------------------------------

This thing's getting weird. I tried a couple of different combinations (the allocator change is sketched right after this list):
# with unpooled direct buffers (change in {{CBUtil}}: {{allocator = new UnpooledByteBufAllocator(true)}})
against Java 1.8.0_92
# with unpooled heap buffers (change in {{CBUtil}}: {{allocator = new UnpooledByteBufAllocator(false)}})
against Java 1.8.0_92
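For clarity, this is roughly what the allocator switch in {{CBUtil}} looks like (a minimal sketch; the real org.apache.cassandra.transport.CBUtil contains much more, and the field modifiers here are approximate):

{code}
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public abstract class CBUtil
{
    // default: pooled direct buffers (the variant that misbehaves on all tested JDKs)
    public static final ByteBufAllocator allocator = new PooledByteBufAllocator(true);

    // variant 1: unpooled direct buffers
    //public static final ByteBufAllocator allocator = new UnpooledByteBufAllocator(true);

    // variant 2: unpooled heap buffers
    //public static final ByteBufAllocator allocator = new UnpooledByteBufAllocator(false);
}
{code}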

Variant 1 showed the same behavior: it looks like GC causes long STW pauses, responses pile up, and the server then tries to allocate a huge amount of direct memory.
Variant 2 showed strange behavior with both PS GC _and_ ParNew/CMS: it works fine until the old gen becomes eligible for GC, and then ends up in an effectively endless GC loop (according to the GC log, CMS basically blocked the application).

That led me to try Java 1.8.0_66 - and that works fine with variant 2. Java 1.8.0_77 also works fine with variant 2, so it looks like Java 1.8.0_92 is somehow broken. (I didn't try variant 1.)

Unfortunately, the default variant with pooled byte buffers does not work with 1.8.0_77 either; it shows the same behavior as with u92.

So the underlying problem remains: if GC runs wild (i.e. runs into longer STW phases - a few hundred milliseconds in my test), requests pile up and cause a lot of allocations, which in turn can cause OOMs. I have no idea why it runs into these many consecutive long GCs; I just see the result: lots of allocated direct memory that is never freed. It then looks like the following screenshot. At approx. 16:03 a couple of ParNews with 200-500 ms pauses kicked in but stopped after a while. At 16:06 ParNews with 200-500 ms STW occurred again and lasted until the load driver was killed. As a result, all allowed direct memory is allocated (configured with 2g max direct memory).

pooled direct mem usage w/ 1.8.0u77:
!11818-direct-mem.png|width=1000!

Further, unpooled direct buffers work a bit better than pooled direct buffers: at least the unpooled buffers are released after a couple of minutes and C* becomes responsive again.

unpooled direct mem usage w/ 1.8.0u77:
!11818-direct-mem-unpooled.png|width=1000!
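As an aside, the direct-memory curves shown in these screenshots can also be tracked without a profiler by polling the JDK's direct {{BufferPoolMXBean}} (the same counters that {{java.nio.Bits}} maintains). A small standalone sketch - my own illustration, not part of any patch:

{code}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryWatcher
{
    public static void main(String[] args) throws InterruptedException
    {
        while (true)
        {
            for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class))
            {
                if ("direct".equals(pool.getName()))
                    System.out.printf("direct buffers: count=%d used=%d bytes capacity=%d bytes%n",
                                      pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
            }
            Thread.sleep(5000L);
        }
    }
}
{code}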

The buffer allocations that cause this problem are for messages of around 4MB or more. It is probably not a good idea to use a per-thread arena for messages that big.
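To illustrate the per-thread-arena point: with Netty 4.0's default arena parameters (pageSize 8192, maxOrder 11, i.e. a 16 MiB chunk size), a ~4MB response is still carved out of a per-thread arena chunk instead of being handed off as an unpooled "huge" allocation. A small standalone sketch of that boundary - my own illustration, with the parameters spelled out explicitly:

{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class ArenaChunkSizeDemo
{
    public static void main(String[] args)
    {
        // Netty 4.0 defaults: pageSize = 8192, maxOrder = 11,
        // so chunkSize = 8192 << 11 = 16 MiB.
        int pageSize = 8192;
        int maxOrder = 11;
        PooledByteBufAllocator pooled =
            new PooledByteBufAllocator(true /* preferDirect */, 2, 2, pageSize, maxOrder);

        // 4 MiB < 16 MiB chunk size, so this buffer is served from a pooled per-thread arena.
        ByteBuf big = pooled.directBuffer(4 * 1024 * 1024);
        System.out.println("capacity=" + big.capacity() + ", direct=" + big.isDirect());

        // release() returns the memory to the arena/thread cache, not to the OS.
        big.release();
    }
}
{code}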

/cc [~norman], do you have any idea?



> C* does neither recover nor trigger stability inspector on direct memory OOM
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11818
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11818
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Robert Stupp
>         Attachments: 11818-direct-mem-unpooled.png, 11818-direct-mem.png, oom-histo-live.txt,
oom-stack.txt
>
>
> The following stack trace is not caught by {{JVMStabilityInspector}}.
> Situation was caused by a load test with a lot of parallel writes and reads against a
single node.
> {code}
> ERROR [SharedPool-Worker-1] 2016-05-17 18:38:44,187 Message.java:611 - Unexpected exception
during request; channel = [id: 0x1e02351b, L:/127.0.0.1:9042 - R:/127.0.0.1:51087]
> java.lang.OutOfMemoryError: Direct buffer memory
> 	at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_92]
> 	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.8.0_92]
> 	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[na:1.8.0_92]
> 	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:672) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:234) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PoolArena.allocate(PoolArena.java:218) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PoolArena.allocate(PoolArena.java:138) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:270)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:177)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:168)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:105)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:349) ~[main/:na]
> 	at org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:314) ~[main/:na]
> 	at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:619)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:676)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:612)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:445) ~[main/:na]
> 	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:374) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> {code}
> The situation does not get better when the load driver is stopped.
> I can reproduce this scenario at will. Managed to get histogram, stack traces and heap
dump. Already increased {{-XX:MaxDirectMemorySize}} to {{2g}}.
> A {{nodetool flush}} causes the daemon to exit (as that direct-memory OOM is caught by
{{JVMStabilityInspector}}).



