cassandra-commits mailing list archives

From "Jeff Jirsa (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13137) nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is being processed
Date Tue, 18 Jul 2017 17:19:00 GMT


Jeff Jirsa commented on CASSANDRA-13137:

I've been bitten by this personally at a past employer, so I know exactly how frustrating it
is to have tooling shut down Cassandra only to find out it doesn't shut down cleanly.

Changing out the thrift server in a 2.2 tree that's two years old ({{2.2.0}} was tagged July
20, 2015) seems like a really invasive change for an annoying bug that doesn't actually impact
running servers.

However, looking at the changelog for disruptor, it looks like it's only really a few changes:
[avoid interest changes when processing is still in progress|],
a [junit fix|]
and [this patch|], so perhaps it's not that scary.

[~brandon.williams] - you tend to be as conservative as I am. How do you feel about bumping
the thrift server in 2.2?

> nodetool disablethrift deadlocks if THsHaDisruptorServer is stopped while a request is
being processed
> ------------------------------------------------------------------------------------------------------
>                 Key: CASSANDRA-13137
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: 2.2.9
>            Reporter: Sotirios Delimanolis
> We are using Thrift with {{rpc_server_type}} set to {{hsha}}. This creates a {{THsHaDisruptorServer}}
which is a subclass of [{{TDisruptorServer}}|].
> Internally, this spawns {{number_of_cores}} selector threads. Each gets a {{RingBuffer}}
and {{rpc_max_threads / cores}} worker threads (the {{RPC-Thread}} threads). As
the server starts receiving requests, each selector thread adds events to its {{RingBuffer}}
and the worker threads process them. 
> The _events_ are [{{Message}}|]
instances, which have preallocated buffers for eventual IO.
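The fan-out described above can be sketched as follows. This is an illustrative model only, not the actual {{THsHaDisruptorServer}} code: a plain queue stands in for the disruptor {{RingBuffer}}, and the core/thread counts (8 cores, 128 {{rpc_max_threads}}) are example values.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of the hsha fan-out: one selector thread per core,
// each owning a buffer (a plain queue standing in for the disruptor
// RingBuffer) that rpc_max_threads / cores worker threads drain.
public class HshaLayout {

    static int workersPerSelector(int rpcMaxThreads, int cores) {
        return rpcMaxThreads / cores;
    }

    public static void main(String[] args) {
        int cores = 8;           // stand-in for number_of_cores
        int rpcMaxThreads = 128; // stand-in for rpc_max_threads in cassandra.yaml
        int perSelector = workersPerSelector(rpcMaxThreads, cores); // 16

        for (int s = 0; s < cores; s++) {
            // each selector thread owns its own event buffer...
            BlockingQueue<Runnable> ring = new LinkedBlockingQueue<>();
            // ...and its own pool of worker ("RPC-Thread") consumers
            ExecutorService workers = Executors.newFixedThreadPool(perSelector);
            System.out.println("selector-" + s + ": " + perSelector + " workers");
            workers.shutdown(); // demo only; a real server keeps these running
        }
    }
}
```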
> When the thrift server starts up, the corresponding {{ThriftServerThread}} joins on the
selector threads, waiting for them to die. It then iterates through all the {{SelectorThread}}
objects and calls their {{shutdown}} method which attempts to drain their corresponding {{RingBuffer}}.
> The [drain ({{drainAndHalt}})|]
works by letting the worker pool "consumer" threads catch up to the "producer" index, i.e.
the selector thread.
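Not the actual disruptor code, but a minimal model of the drain semantics just described, using an atomic producer cursor and a shared consumer sequence (all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified model of the shutdown path: a selector thread acts as the
// producer (publishing events up to `cursor`), worker threads advance a
// shared `workSequence` as they consume. drainAndHalt() just spins until
// the workers have caught up with everything the producer published.
public class DrainModel {
    final AtomicLong cursor = new AtomicLong(-1);       // last published event
    final AtomicLong workSequence = new AtomicLong(-1); // last consumed event
    volatile boolean halted = false;

    void publish(long upTo) { cursor.set(upTo); }

    // Analogous in spirit to com.lmax.disruptor.WorkerPool.drainAndHalt()
    void drainAndHalt() {
        while (workSequence.get() < cursor.get()) {
            Thread.yield(); // this is the loop the ThriftServerThread spins in
        }
        halted = true;
    }

    public static void main(String[] args) throws InterruptedException {
        DrainModel pool = new DrainModel();
        pool.publish(1000);
        // A healthy worker: consumes events until it has caught up.
        Thread worker = new Thread(() -> {
            while (pool.workSequence.get() < pool.cursor.get()) {
                pool.workSequence.incrementAndGet(); // "process" one event
            }
        });
        worker.start();
        pool.drainAndHalt(); // returns once the worker catches up
        worker.join();
        System.out.println("halted=" + pool.halted);
    }
}
```

Note that the drain makes progress only as long as every consumer keeps consuming, which is exactly the assumption the bug violates.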
> When we execute {{nodetool disablethrift}}, it attempts to {{stop}} the {{THsHaDisruptorServer}}.
That works by setting a {{stopped}} flag to {{true}}. When the selector threads see that,
they break out of their {{select()}} loop and clean up their resources, i.e. the {{Message}}
objects they've created and their buffers. *However*, if one of those {{Message}} objects
is currently being used by a worker pool thread to process a request, and it calls [this piece
of code|],
you'll get the following {{NullPointerException}}:
> {noformat}
> Jan 18, 2017 6:28:50 PM com.lmax.disruptor.FatalExceptionHandler handleEventException
> SEVERE: Exception processing: 633124 com.thinkaurelius.thrift.Message$Invocation@25c9fbeb
> java.lang.NullPointerException
>         at com.thinkaurelius.thrift.Message.getInputTransport(
>         at com.thinkaurelius.thrift.Message.invoke(
>         at com.thinkaurelius.thrift.Message$Invocation.execute(
>         at com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(
>         at com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(
>         at
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> {noformat}
> That fails because it tries to dereference one of the {{Message}} buffers that has already been "cleaned up", i.e. set to {{null}}.
> Because that call is outside the {{try}} block, the exception escapes and basically kills
the worker pool thread. This has the side effect of "discarding" one of the consumers of a
selector's {{RingBuffer}}. 
> *That* has the side effect of preventing the {{ThriftServerThread}} from draining the
{{RingBuffer}} (and dying) since the consumers never catch up to the stopped producer. And
that finally has the effect of preventing the {{nodetool disablethrift}} from proceeding since
it's trying to {{join}} the {{ThriftServerThread}}. Deadlock!
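To make the failure mode concrete, here is a toy version of the same spin loop where the lone consumer dies partway through, mirroring the worker thread killed by the NPE. A deadline is added here only so the demo terminates; the real {{drainAndHalt}} has no timeout and spins forever. All names are illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy reproduction of the deadlock: the worker ("consumer") dies with an
// exception before catching up, so the producer-side drain loop can never
// observe workSequence == cursor.
public class DeadlockModel {
    static final AtomicLong cursor = new AtomicLong(1000); // events published
    static final AtomicLong workSequence = new AtomicLong(-1);

    /** Spin like drainAndHalt(), but give up after timeoutMillis. */
    static boolean tryDrain(long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (workSequence.get() < cursor.get()) {
            if (System.currentTimeMillis() > deadline) return false; // stuck
            Thread.yield();
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (workSequence.get() < cursor.get()) {
                long seq = workSequence.incrementAndGet();
                if (seq == 500) {
                    // stand-in for the NullPointerException escaping onEvent()
                    throw new RuntimeException("worker died mid-request");
                }
            }
        });
        worker.setUncaughtExceptionHandler((t, e) -> { /* worker is simply gone */ });
        worker.start();
        worker.join();
        // The drain can never complete: 500 < 1000 and nobody will advance it.
        System.out.println("drained=" + tryDrain(200));
    }
}
```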
> The {{ThriftServerThread}} thread looks like
> {noformat}
> "Thread-1" #2234 prio=5 os_prio=0 tid=0x00007f4ae6ff1000 nid=0x2eb6 runnable [0x00007f4729174000]
>    java.lang.Thread.State: RUNNABLE
>         at java.lang.Thread.yield(Native Method)
>         at com.lmax.disruptor.WorkerPool.drainAndHalt(
>         at com.thinkaurelius.thrift.TDisruptorServer$SelectorThread.shutdown(
>         at com.thinkaurelius.thrift.TDisruptorServer.gracefullyShutdownInvokerPool(
>         at com.thinkaurelius.thrift.TDisruptorServer.waitForShutdown(
>         at org.apache.thrift.server.AbstractNonblockingServer.serve(
>         at org.apache.cassandra.thrift.ThriftServer$
> {noformat}
> The {{nodetool disablethrift}} thread looks like
> {noformat}
> "RMI TCP Connection(18183)-" #12121 daemon prio=5 os_prio=0 tid=0x00007f4ac2c61000
nid=0x5805 in Object.wait() [0x00007f4aab7ec000]
>    java.lang.Thread.State: WAITING (on object monitor)
>         at java.lang.Object.wait(Native Method)
>         at java.lang.Thread.join(
>         - locked <0x000000055d3cb010> (a org.apache.cassandra.thrift.ThriftServer$ThriftServerThread)
>         at java.lang.Thread.join(
>         at org.apache.cassandra.thrift.ThriftServer.stop(
>         - locked <0x000000055bffb5e0> (a org.apache.cassandra.thrift.ThriftServer)
>         at org.apache.cassandra.service.StorageService.stopRPCServer(
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>         at java.lang.reflect.Method.invoke(
>         at sun.reflect.misc.Trampoline.invoke(
>         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>         at java.lang.reflect.Method.invoke(
>         at sun.reflect.misc.MethodUtil.invoke(
>         at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(
>         at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(
>         at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(
>         at com.sun.jmx.mbeanserver.PerInterface.invoke(
>         at com.sun.jmx.mbeanserver.MBeanSupport.invoke(
>         at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(
>         at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(
>         at
>         at$300(
>         at$
>         at
>         at
>         at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>         at java.lang.reflect.Method.invoke(
>         at sun.rmi.server.UnicastServerRef.dispatch(
>         at sun.rmi.transport.Transport$
>         at sun.rmi.transport.Transport$
>         at Method)
>         at sun.rmi.transport.Transport.serviceCall(
>         at sun.rmi.transport.tcp.TCPTransport.handleMessages(
>         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(
>         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(
>         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$1/
>         at Method)
>         at sun.rmi.transport.tcp.TCPTransport$
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> {noformat}
> Most of the code involved isn't part of the Cassandra source; it lives in an external dependency
that should be fixed.
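One possible mitigation, sketched here only to illustrate the defensive direction (this is not the actual {{TDisruptorServer}} API or the patch that ultimately shipped): keep exceptions from escaping the worker's event handler, so a half-cleaned-up {{Message}} can never kill the consumer thread and the drain can still complete. The interface and method names below are simplified stand-ins.

```java
// Sketch of a defensive event handler: any exception thrown while invoking a
// Message (including the NPE from a buffer that was already nulled out during
// shutdown) is caught and logged instead of escaping, so the worker thread
// stays alive and the drainAndHalt() loop can still make progress.
public class SafeInvocationHandler {
    interface Event { void invoke(); } // stand-in for Message$Invocation.execute()

    // stand-in for TDisruptorServer$InvocationHandler.onEvent()
    static boolean onEvent(Event event) {
        try {
            event.invoke();
            return true;
        } catch (RuntimeException e) {
            // In the buggy path this exception escaped and killed the worker.
            System.err.println("request failed during shutdown: " + e);
            return false; // worker survives and keeps consuming the RingBuffer
        }
    }

    public static void main(String[] args) {
        boolean ok = onEvent(() -> {
            throw new NullPointerException("buffer already cleaned up");
        });
        System.out.println("worker alive, event ok=" + ok);
    }
}
```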

This message was sent by Atlassian JIRA
