cassandra-commits mailing list archives

From "Benedict (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-8670) Large columns + NIO memory pooling causes excessive direct memory usage
Date Tue, 31 Mar 2015 13:26:54 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388515#comment-14388515 ]

Benedict commented on CASSANDRA-8670:
-------------------------------------

I've pushed one more round of changes [here|https://github.com/belliottsmith/cassandra/tree/8670-2],
after your follow-up round (which I mention for posterity). I've made the following changes;
let me know your thoughts on them:

* Merged writeUTF into one method, with a fast path _only_ for ASCII characters, since this
is the case most likely to benefit from unrolling, and the instruction-cache pollution effect is small.
The two separate but near-identical and very large methods look almost certain to be worse,
due to icache misses, than a single branch that is mostly predicted correctly, especially when
we had multiple branches inside the loop, each of which was more likely to be mispredicted
(see the first sketch after this list).
* As a follow-up commit, in case you're worried by this, I've introduced a branch-free version
of sizeOfChar (which we may be able to optimise further; second sketch below), but I haven't
performed any benchmarks to measure the difference in effect.
* Reverted the new hollowBuffer approach for array-backed buffers - I couldn't see a reason
not to just invoke the write(byte[]) methods directly?
* Based SafeMemoryWriter on DataOutputBuffer
* Shared the UBDOSP.utfBytes and DOSP.WBC.buf in the same ThreadLocal 
* Preferred bb.hasArray() to bb.isDirect(), since hasArray() is a concrete method and so can
be inlined more readily (third sketch below)
* Moved writeUTFLegacy into the test case, since it's only for test purposes now
* Fixed formatting in UnbufferedDataOutputStreamPlus (seems a good opportunity to standardise
it)
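
To make the writeUTF shape concrete, here is a minimal sketch of the single-method structure with an ASCII fast path. This is not the actual patch: names are illustrative, the 65535-byte length overflow check is elided, and the real code writes into a buffer rather than byte-at-a-time.

{code:java}
import java.io.DataOutput;
import java.io.IOException;

public final class WriteUtfSketch
{
    // Single-method shape: one ASCII fast path up front, then a general
    // loop for whatever remains. Encoding is modified UTF-8, as used by
    // DataOutput.writeUTF (char 0 encodes as two bytes).
    public static void writeUTF(String s, DataOutput out) throws IOException
    {
        int len = s.length(), utfLen = 0;
        for (int i = 0; i < len; i++)
        {
            char c = s.charAt(i);
            utfLen += (c > 0 && c <= 0x7f) ? 1 : (c <= 0x7ff ? 2 : 3);
        }
        out.writeShort(utfLen);

        int i = 0;
        // Fast path: a single branch per character, taken the same way on
        // nearly every iteration, so the predictor learns it quickly.
        for (; i < len; i++)
        {
            char c = s.charAt(i);
            if (c == 0 || c > 0x7f)
                break;
            out.write(c);
        }
        // General path: multi-byte encodings for the rest.
        for (; i < len; i++)
        {
            char c = s.charAt(i);
            if (c > 0 && c <= 0x7f)
            {
                out.write(c);
            }
            else if (c <= 0x7ff)
            {
                out.write(0xc0 | (c >> 6));
                out.write(0x80 | (c & 0x3f));
            }
            else
            {
                out.write(0xe0 | (c >> 12));
                out.write(0x80 | ((c >> 6) & 0x3f));
                out.write(0x80 | (c & 0x3f));
            }
        }
    }
}
{code}

Compared with two duplicated near-identical methods, the hot ASCII loop stays small enough to live in the instruction cache, and the single mostly-taken branch is cheap.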
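
And a sketch of what a branch-free sizeOfChar can look like, using the sign bit of each comparison as a 0/1 increment (again illustrative, not necessarily the exact form in the branch):

{code:java}
public final class SizeOfCharSketch
{
    // Branch-free modified-UTF-8 size of one char: each comparison's sign
    // bit becomes a 0/1 increment, so there is nothing to mispredict.
    public static int sizeOfChar(char c)
    {
        return 1
             + ((c - 1) >>> 31)      // +1 if c == 0 (two bytes in modified UTF-8)
             + ((0x7f - c) >>> 31)   // +1 if c > 0x7f
             + ((0x7ff - c) >>> 31); // +1 if c > 0x7ff
    }
}
{code}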
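
Finally, the hasArray() preference amounts to dispatching like this (a sketch assuming a plain OutputStream sink, which the real code paths do not use):

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public final class BufferWriteSketch
{
    // hasArray() is a concrete (final) method on ByteBuffer, so the test
    // inlines cheaply; isDirect() is abstract and needs a virtual dispatch.
    public static void write(ByteBuffer buf, OutputStream out) throws IOException
    {
        if (buf.hasArray())
        {
            // Heap buffer: hand the backing array straight to the sink.
            out.write(buf.array(), buf.arrayOffset() + buf.position(), buf.remaining());
            buf.position(buf.limit());
        }
        else
        {
            // Direct buffer: copy out through a temporary array.
            byte[] tmp = new byte[buf.remaining()];
            buf.get(tmp);
            out.write(tmp);
        }
    }
}
{code}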

> Large columns + NIO memory pooling causes excessive direct memory usage
> -----------------------------------------------------------------------
>
>                 Key: CASSANDRA-8670
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8670
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Ariel Weisberg
>            Assignee: Ariel Weisberg
>             Fix For: 3.0
>
>         Attachments: largecolumn_test.py
>
>
> If you provide a large byte array to NIO and ask it to populate the byte array from a
socket, it will allocate a thread-local byte buffer the size of the requested read, no matter
how large that is. Old IO wraps new IO for sockets (but not files), so old IO is affected
as well.
> Even if you are using Buffered{Input | Output}Stream you can end up passing a large byte
array to NIO: the byte-array read method will pass the array to NIO directly if it is larger
than the internal buffer.
> Passing large cells between nodes as part of intra-cluster messaging can cause the NIO
pooled buffers to quickly reach a high watermark and stay there. This ends up costing 2x the
largest cell size, because input and output run on different threads and each keeps its own
buffer. That is further multiplied by the number of nodes in the cluster minus 1, since each
peer has a dedicated thread pair with separate thread-locals.
> Anecdotally it appears that the cost is doubled beyond that, although it isn't clear why.
Possibly the control connections, or possibly there is some way in which multiple 
> We need a workload in CI that tests the advertised limits of cells on a cluster. It would
be reasonable to ratchet down the max direct memory for the test, so that failures are
triggered if a memory-pooling issue is introduced. I don't think we need to test pulling in
a lot of them concurrently, but it should at least work serially.
> The obvious fix for this issue would be to read in smaller chunks when dealing with large
values. I think "small" should still be relatively large (4 megabytes), so that code reading
from disk can amortize the cost of a seek. In some of the contexts where we might choose to
switch to chunked reads, it can be hard to tell what the underlying source being read from
will be.
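
A sketch of the chunked-read idea described above, assuming a plain InputStream source and the 4 MB figure from the description (the class and method names are illustrative, not Cassandra code). Capping each request bounds the thread-local buffer NIO caches per thread, while 4 MB reads stay large enough to amortise a seek:

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public final class ChunkedReadSketch
{
    private static final int CHUNK_SIZE = 4 << 20; // 4 MB, per the description above

    // Fill dest without ever issuing a single read larger than CHUNK_SIZE,
    // so the NIO layer never grows (and then caches) a huge thread-local buffer.
    public static void readFully(InputStream in, byte[] dest) throws IOException
    {
        int offset = 0;
        while (offset < dest.length)
        {
            int want = Math.min(CHUNK_SIZE, dest.length - offset);
            int read = in.read(dest, offset, want);
            if (read < 0)
                throw new EOFException((dest.length - offset) + " bytes unread");
            offset += read;
        }
    }
}
{code}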



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
