cassandra-commits mailing list archives

From "Robert Stupp (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-11818) C* does neither recover nor trigger stability inspector on direct memory OOM
Date Sun, 29 May 2016 18:57:12 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15306030#comment-15306030 ]

Robert Stupp edited comment on CASSANDRA-11818 at 5/29/16 6:57 PM:
-------------------------------------------------------------------

I've tried [~norman]'s Netty 4.1 patch against trunk. With the patch enabled (requires
{{-Dio.netty.noDirectBufferNoCleaner=false}}), my overloaded node recovers nicely. The CMS GC storm
caused by {{Bits.reserveMemory()}} does not occur and the node remains responsive. However,
while the node is in an overload situation, it spews a lot of errors. Unfortunately these
are {{java.lang.OutOfMemoryError: No more memory available}}, which is generally fine, but
in this case it just indicates that there is not enough direct memory to fulfill the *current*
request. IMO, passing this OOM to {{JVMStabilityInspector}} would be wrong, since it is a
recoverable error. (Background: Netty then uses a separate, distinct direct memory pool, which
does not affect other operations or memory pools.)
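
To illustrate the distinction, here is a minimal sketch (not the actual patch; the class and
method names are made up for illustration) of how a request-level error handler could treat
Netty's pool-exhaustion OOM as recoverable instead of forwarding it to {{JVMStabilityInspector}}:

{code}
// Hypothetical sketch: fail only the current request on Netty pool exhaustion,
// and keep the existing JVMStabilityInspector behaviour for everything else.
import org.apache.cassandra.utils.JVMStabilityInspector;

final class DirectMemoryErrorFilter
{
    // Matches the "No more memory available" OOM quoted above, which only
    // signals exhaustion of Netty's own direct-memory pool.
    static boolean isRecoverable(Throwable t)
    {
        return t instanceof OutOfMemoryError
               && t.getMessage() != null
               && t.getMessage().contains("No more memory available");
    }

    static void handle(Throwable t)
    {
        if (isRecoverable(t))
            return; // fail just this request; the pool recovers as buffers are released
        JVMStabilityInspector.inspectThrowable(t);
    }
}
{code}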

I've also applied the same technique to C*'s internal direct memory allocations. (We already
use {{FileUtils.clean()}} to clean up direct buffers.)
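
For reference, a minimal sketch of that eager-cleanup technique on JDK 8, in the style of
{{FileUtils.clean()}} (the real utility has additional checks; {{sun.nio.ch.DirectBuffer}} is
a JDK-internal API):

{code}
// Sketch of eager direct-buffer cleanup on JDK 8: free the native memory
// immediately instead of waiting for the GC to run the buffer's Cleaner.
import java.nio.ByteBuffer;

final class BufferCleanup
{
    static void clean(ByteBuffer buffer)
    {
        if (buffer != null && buffer.isDirect())
        {
            sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) buffer).cleaner();
            if (cleaner != null)
                cleaner.clean(); // releases the native memory right away
        }
    }
}
{code}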

To summarize, {{Bits.reserveMemory}} + {{Cleaner}} are the root cause IMO. Removing both
reduces client latency as a side effect.

EDIT: removed client-latency numbers. It was not from 6ms to <1ms, but from 0.8ms to 0.1ms (99.9th percentile).
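
For context, the allocation style that avoids both mechanisms looks roughly like the sketch
below: reserve native memory directly through {{Unsafe}} and track usage with an explicit
counter, so neither {{Bits.reserveMemory()}} (with its {{System.gc()}} fallback on reservation
failure) nor a GC-driven {{Cleaner}} is involved. Netty's no-cleaner path works along these
lines; all names here are illustrative.

{code}
// Illustrative sketch: manual direct-memory management without
// Bits.reserveMemory() and without a Cleaner.
import java.lang.reflect.Field;
import java.util.concurrent.atomic.AtomicLong;
import sun.misc.Unsafe;

final class ManualDirectMemory
{
    private static final Unsafe UNSAFE = loadUnsafe();
    private static final AtomicLong RESERVED = new AtomicLong();

    static long allocate(long bytes)
    {
        RESERVED.addAndGet(bytes);      // explicit accounting replaces Bits.reserveMemory()
        return UNSAFE.allocateMemory(bytes);
    }

    static void free(long address, long bytes)
    {
        UNSAFE.freeMemory(address);     // deterministic release, no Cleaner involved
        RESERVED.addAndGet(-bytes);
    }

    private static Unsafe loadUnsafe()
    {
        try
        {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        }
        catch (Exception e)
        {
            throw new AssertionError(e);
        }
    }
}
{code}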



> C* does neither recover nor trigger stability inspector on direct memory OOM
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11818
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11818
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Robert Stupp
>         Attachments: 11818-direct-mem-unpooled.png, 11818-direct-mem.png, oom-histo-live.txt, oom-stack.txt
>
>
> The following stack trace is not caught by {{JVMStabilityInspector}}.
> Situation was caused by a load test with a lot of parallel writes and reads against a single node.
> {code}
> ERROR [SharedPool-Worker-1] 2016-05-17 18:38:44,187 Message.java:611 - Unexpected exception during request; channel = [id: 0x1e02351b, L:/127.0.0.1:9042 - R:/127.0.0.1:51087]
> java.lang.OutOfMemoryError: Direct buffer memory
> 	at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_92]
> 	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.8.0_92]
> 	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[na:1.8.0_92]
> 	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:672) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:234) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PoolArena.allocate(PoolArena.java:218) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PoolArena.allocate(PoolArena.java:138) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:270) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:177) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:168) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:105) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:349) ~[main/:na]
> 	at org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:314) ~[main/:na]
> 	at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:619) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:676) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:612) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:445) ~[main/:na]
> 	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:374) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
> 	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> 	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> {code}
> The situation does not get better when the load driver is stopped.
> I can reproduce this scenario at will. Managed to get histogram, stack traces and heap dump. Already increased {{-XX:MaxDirectMemorySize}} to {{2g}}.
> A {{nodetool flush}} causes the daemon to exit (as that direct-memory OOM is caught by {{JVMStabilityInspector}}).


