cassandra-commits mailing list archives

From "Jason Brown (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-8457) nio MessagingService
Date Tue, 14 Feb 2017 03:06:42 GMT


Jason Brown commented on CASSANDRA-8457:

One other thing [~aweisberg] and I discussed was large messages and how they would be handled.
In the current 8457 implementation, we simply allocate a buffer of the full {{serializedSize}}.
If a message is supposed to be 50MB, we'll allocate that and roll on. With enough large messages,
sent to enough peers, we could OOM or run into serious memory pressure. For comparison, the
existing {{OutboundTcpConnection}} uses a {{BufferedOutputStream}}, which defaults to 64KB
and which we constantly reuse and never need to reallocate.
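To make the concern concrete, here is a back-of-envelope comparison of the two allocation strategies. The peer count and message size are purely illustrative, not numbers from the ticket:

```java
// Hypothetical back-of-envelope: allocating the full serializedSize per
// in-flight large message vs. a fixed, reused 64KB buffer per connection.
public class LargeMessagePressure
{
    public static void main(String[] args)
    {
        long messageBytes = 50L * 1024 * 1024; // one 50MB message (illustrative)
        int peers = 100;                       // hypothetical cluster size

        // current 8457 approach: one full-size buffer per peer
        long perMessageAlloc = messageBytes * peers;
        // BufferedOutputStream-style: one fixed 64KB buffer per peer
        long fixedBufferAlloc = 64L * 1024 * peers;

        System.out.println(perMessageAlloc / (1024 * 1024) + " MB vs "
                           + fixedBufferAlloc / (1024 * 1024) + " MB");
        // prints "5000 MB vs 6 MB"
    }
}
```

Even one large message in flight to each peer dwarfs the steady-state cost of fixed per-connection buffers.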

Thus, I propose to bring back the {{SwappingByteBufDataOutputStreamPlus}} that I had in an
earlier commit. To recap, the basic idea is to provide a {{DataOutputPlus}} backed by a
{{ByteBuffer}} that is written to; when the buffer fills, it is written to the netty context
and flushed, and a new buffer is allocated for further writes - kinda similar to a {{BufferedOutputStream}},
but replacing the backing buffer when full. Bringing this idea back is also what underpins
one of the major performance items I wanted to address: buffering up smaller messages into
one buffer to avoid going back to the netty allocator for every tiny buffer we might need
- think Mutation acks.
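A minimal sketch of that swapping-buffer idea, with a plain {{Consumer}} standing in for the netty channel context (the class and method names below are illustrative, not the actual 8457 code):

```java
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch only: a fixed-size buffer is filled, handed off to a downstream
// sink (standing in for ctx.writeAndFlush() in netty), and replaced with
// a fresh buffer - rather than allocating one buffer of serializedSize.
public class SwappingBufferSketch extends OutputStream
{
    private final int bufferSize;
    private final Consumer<ByteBuffer> sink; // stand-in for the netty context
    private ByteBuffer current;

    public SwappingBufferSketch(int bufferSize, Consumer<ByteBuffer> sink)
    {
        this.bufferSize = bufferSize;
        this.sink = sink;
        this.current = ByteBuffer.allocate(bufferSize);
    }

    @Override
    public void write(int b)
    {
        if (!current.hasRemaining())
            swap();
        current.put((byte) b);
    }

    @Override
    public void write(byte[] src, int off, int len)
    {
        while (len > 0)
        {
            if (!current.hasRemaining())
                swap();
            int n = Math.min(len, current.remaining());
            current.put(src, off, n);
            off += n;
            len -= n;
        }
    }

    // Hand the filled buffer downstream and allocate a replacement,
    // instead of growing a single ever-larger buffer.
    private void swap()
    {
        flush();
        current = ByteBuffer.allocate(bufferSize);
    }

    @Override
    public void flush()
    {
        if (current.position() > 0)
        {
            current.flip();
            sink.accept(current);
        }
    }

    public static void main(String[] args)
    {
        List<ByteBuffer> flushed = new ArrayList<>();
        SwappingBufferSketch out = new SwappingBufferSketch(8, flushed::add);
        out.write(new byte[20]); // 20 bytes through an 8-byte buffer
        out.flush();
        // Two full 8-byte buffers were swapped out, plus a final 4-byte flush.
        assert flushed.size() == 3;
        System.out.println("flushed " + flushed.size() + " buffers");
    }
}
```

Because large payloads stream through a bounded buffer and small messages coalesce into the same buffer before a swap, the one mechanism covers both the large-message and the small-message-batching cases.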

We definitely need to address the large buffer issue, and I wouldn't mind knocking out the
"buffering small messages" as it's really the same code (that I've written before). wdyt [~slebresne]
and [~aweisberg]?

> nio MessagingService
> --------------------
>                 Key: CASSANDRA-8457
>                 URL:
>             Project: Cassandra
>          Issue Type: New Feature
>            Reporter: Jonathan Ellis
>            Assignee: Jason Brown
>            Priority: Minor
>              Labels: netty, performance
>             Fix For: 4.x
> Thread-per-peer (actually two each incoming and outbound) is a big contributor to context
switching, especially for larger clusters.  Let's look at switching to nio, possibly via Netty.

This message was sent by Atlassian JIRA
