cassandra-commits mailing list archives

From "Christian Esken (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread
Date Thu, 02 Mar 2017 09:20:45 GMT


Christian Esken commented on CASSANDRA-13265:

bq. How often does this issue occur?

Not very often, but it happens. I assume that special scenarios trigger this:
- High write throughput (especially when you have write spikes it is easy to get above 1024
queued messages, the threshold that triggers expiration)
- A long Stop-the-World GC phase (because then even more Threads could start to write and
iterate the Queue)
- Temporary network overload to the target node (because nothing is taken from the Queue in
that case).
- Many non-droppable entries in the Queue (because then the loop does not bail out:
"if (!qm.droppable) continue;" just keeps scanning; see the sketch below)
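
For context, the scan in question is essentially a linear walk over the backlog. A minimal
sketch of that shape (assuming a LinkedBlockingQueue backlog; names are approximate, not the
verbatim Cassandra source):

{code:java}
import java.util.Iterator;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified sketch of the expiration scan in OutboundTcpConnection
// (Cassandra 3.0.x). Names are approximate, not the verbatim source.
class BacklogSketch
{
    static final class QueuedMessage
    {
        final boolean droppable;
        final long timestampMillis;

        QueuedMessage(boolean droppable, long timestampMillis)
        {
            this.droppable = droppable;
            this.timestampMillis = timestampMillis;
        }

        boolean isTimedOut(long timeoutMillis)
        {
            return System.currentTimeMillis() - timestampMillis > timeoutMillis;
        }
    }

    final LinkedBlockingQueue<QueuedMessage> backlog = new LinkedBlockingQueue<>();

    // The scan is O(queue size). Non-droppable entries are skipped with
    // "continue", so they prevent the early bail-out and force every caller
    // to walk deeper into the Queue.
    void expireMessages(long timeoutMillis)
    {
        Iterator<QueuedMessage> iter = backlog.iterator(); // locks the Queue on every step
        while (iter.hasNext())
        {
            QueuedMessage qm = iter.next();
            if (!qm.droppable)
                continue;                      // skip, but keep scanning
            if (!qm.isTimedOut(timeoutMillis))
                return;                        // roughly FIFO: the first live entry ends the scan
            iter.remove();
        }
    }
}
{code}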

Usually temporary overloads resolve themselves, but in this case they do not. As soon as the
Queue has reached a certain size, most time is spent iterating the Queue, and the reader is
starved (1 reader Thread fights against 324 Threads that take the Queue's locks by calling
iterator() on it).
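
The starvation is easy to reproduce outside Cassandra. A toy sketch, assuming the backlog
behaves like a java.util.concurrent.LinkedBlockingQueue (whose iterator acquires both the
put lock and the take lock on every step); the thread count and queue size are arbitrary
stand-ins for the numbers above:

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Toy reproduction: many concurrent iterator scans crowd out the single
// thread that drains the queue.
public class IteratorStarvationDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        LinkedBlockingQueue<Integer> backlog = new LinkedBlockingQueue<>();
        for (int i = 0; i < 250_000; i++)
            backlog.offer(i);

        // A few dozen scanner threads stand in for the 324 observed in the dump.
        for (int t = 0; t < 32; t++)
        {
            Thread scanner = new Thread(() -> {
                while (true)
                    for (Integer ignored : backlog)
                        ;   // each step locks and unlocks both ends of the queue
            });
            scanner.setDaemon(true);
            scanner.start();
        }

        // The single reader, analogous to the connection thread draining the backlog.
        AtomicLong drained = new AtomicLong();
        Thread reader = new Thread(() -> {
            try
            {
                while (true)
                {
                    backlog.take();
                    drained.incrementAndGet();
                }
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
            }
        });
        reader.setDaemon(true);
        reader.start();

        Thread.sleep(5_000);
        // With the scanners running, this count stays far below what an
        // uncontended take() loop would reach.
        System.out.println("messages drained in 5s: " + drained.get());
    }
}
{code}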

> Expiration in OutboundTcpConnection can block the reader Thread
> ---------------------------------------------------------------
>                 Key: CASSANDRA-13265
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 1.8.0_112-b15)
> Linux 3.16
>            Reporter: Christian Esken
>            Assignee: Christian Esken
>         Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz,
> I observed that sometimes a single node in a Cassandra cluster fails to communicate with
the other nodes. This can happen at any time, during peak load or low load. Restarting that
single node fixes the issue.
> Before going into details, I want to state that I have analyzed the situation and am
already developing a possible fix. Here is the analysis so far:
> - A Thread dump in this situation showed 324 Threads in the OutboundTcpConnection class
that want to lock the backlog queue for doing expiration.
> - A class histogram shows 262508 instances of OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain amount
of queued messages, it starts thrashing itself to death. Each of the Threads fully locks the
Queue for reading and writing by calling iterator(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can a Thread progress with actually
writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to expire messages and fully lock
the Queue while doing so.
> This means: Writing blocks the Queue for reading, and readers might even be starved, which
makes the situation even worse.
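
To put those numbers together: if each of the 324 Threads walks the whole Queue once, that
is roughly 324 * 262508, i.e. about 85 million full lock/unlock pairs, and the single
draining Thread must win the Queue's locks against all 324 competitors for every message it
removes.
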
> -----
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DCs
>  - high write throughput (100000 INSERT statements per second and more during peak times).
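
The fix itself is not part of this message. Purely as an illustration of one direction such
a fix could take (an assumption on my part, not the actual CASSANDRA-13265 patch), the scan
could be rate-limited with a CAS'd timestamp so that at most one Thread runs expiration per
interval:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: gate the expensive expiration scan behind a CAS'd
// timestamp so hundreds of enqueueing threads cannot all scan at once.
// This is a sketch of the general technique, not the committed patch.
class RateLimitedExpiration
{
    private static final long EXPIRATION_INTERVAL_MILLIS = 100;
    private final AtomicLong nextExpirationAt = new AtomicLong();

    void maybeExpire(Runnable expireMessages)
    {
        long now = System.currentTimeMillis();
        long next = nextExpirationAt.get();
        // Only the thread that wins the CAS pays the cost of the scan;
        // everyone else returns immediately and just enqueues its message.
        if (now >= next && nextExpirationAt.compareAndSet(next, now + EXPIRATION_INTERVAL_MILLIS))
            expireMessages.run();
    }
}
{code}

The enqueue path would then call maybeExpire(this::expireMessages) instead of scanning
unconditionally whenever the backlog exceeds its size threshold.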
