cassandra-commits mailing list archives

From "Benedict (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)
Date Tue, 01 Sep 2015 07:18:47 GMT


Benedict commented on CASSANDRA-8630:

I must admit that I thought, from Ariel's comment [here|], that we no longer used {{FBO.copy}} at all, and that it did not work. I guess some other mistake was happening there.

However, there is no functional distinction between the two methods, since both operate
on a target {{byte[]}}; the {{FastByteOperations.copy}} methods support an array
as a target, so I've pushed a version with that changed.
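To make the equivalence concrete, here is a minimal sketch (the class and method names are mine, not Cassandra's) showing that a byte-by-byte loop and a bulk copy into a {{byte[]}} target produce identical results, so only performance distinguishes them:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BulkCopySketch {
    // Byte-by-byte copy using absolute gets; leaves src's position untouched.
    static void copyByteByByte(ByteBuffer src, byte[] dst) {
        for (int i = 0; i < dst.length; i++) {
            dst[i] = src.get(src.position() + i);
        }
    }

    // Bulk copy via the relative bulk get on a duplicate,
    // so src's position is also untouched.
    static void copyBulk(ByteBuffer src, byte[] dst) {
        src.duplicate().get(dst);
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        byte[] a = new byte[4];
        byte[] b = new byte[4];
        copyByteByByte(src, a);
        copyBulk(src, b);
        System.out.println(Arrays.equals(a, b)); // true
    }
}
```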

It's not clear how much of the variance is down to cstar's current inconsistency. I'm reasonably certain
that HotSpot translates any byte-by-byte copy into a SIMD-optimised one. However, looking at
the C2 compilation output, it appears that the {{FastByteOperations.copy}} call is fully inlined,
whereas for some reason the {{ByteBuffer.get}} call is left as an invokevirtual. This is odd,
since this call site should be at most bimorphic, and I would expect it to be a prime target for optimisation
by the VM. On the other hand, I cannot find any of the {{ByteBuffer.get}}
methods in HotSpot's intrinsic definitions (whereas {{copyMemory}} most certainly is there), which would explain this.
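For anyone who wants to reproduce the comparison, a rough sketch of a harness follows (names are mine; this is deliberately unscientific, no JMH). Running it with the real diagnostic flags {{-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining}} shows whether the {{ByteBuffer.get}} call sites get inlined:

```java
import java.nio.ByteBuffer;

public class GetCallBench {
    // Size of the buffer copied on each iteration.
    static final int LEN = 1 << 20;

    // Per-byte copy via absolute ByteBuffer.get(int).
    static long perByte(ByteBuffer src, byte[] dst) {
        for (int i = 0; i < LEN; i++) dst[i] = src.get(i);
        return dst[0];
    }

    // Bulk copy via the relative bulk get on a duplicate.
    static long bulk(ByteBuffer src, byte[] dst) {
        src.duplicate().get(dst);
        return dst[0];
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.allocateDirect(LEN); // zero-filled
        byte[] dst = new byte[LEN];
        // Warm up so C2 compiles both methods before timing.
        for (int warm = 0; warm < 50; warm++) { perByte(src, dst); bulk(src, dst); }
        long t0 = System.nanoTime(); perByte(src, dst);
        long t1 = System.nanoTime(); bulk(src, dst);
        long t2 = System.nanoTime();
        System.out.println("per-byte: " + (t1 - t0) + " ns, bulk: " + (t2 - t1) + " ns");
    }
}
```

The timings themselves are only indicative on a sketch like this; the interesting output is the inlining trace.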

Given this, we're probably best off retaining the {{FBO.copy}} version; however, we may as well
port it over to {{read}}.

> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>                 Key: CASSANDRA-8630
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core, Tools
>            Reporter: Oleg Anastasyev
>            Assignee: Stefania
>              Labels: compaction, performance
>             Fix For: 3.x
>         Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, flight_recorder_001_files.tar.gz,
flight_recorder_002_files.tar.gz, mmaped_uncomp_hotspot.png
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a lot of CPU is
lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as well as their matching
write* methods, are implemented as numerous byte-by-byte reads and writes.
> This also makes a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods gives
an 8x speed increase.
> The attached patch implements these read<Type> and SequencialWriter.write<Type>
methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and ColumnNameHelper.maxComponents,
which were on my profiler's hotspot method list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster on uncompressed
sstables and 15% faster on compressed ones.
> A deployment to production shows much less CPU load during compaction.
> (I attached a CPU load graph from one of our production nodes; orange is niced CPU load, i.e.
compaction; yellow is user, i.e. non-compaction tasks.)
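The byte-by-byte pattern the ticket describes can be sketched as follows (this is an illustration of the described problem, not the actual patch; the class name is hypothetical). {{DataInputStream.readLong()}} effectively issues eight single-byte reads, while a bulk reimplementation reads eight bytes at once and assembles them:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadLongSketch {
    // The slow pattern: one read() call (potentially one syscall on an
    // unbuffered stream) per byte, eight times per long.
    static long readLongByteByByte(InputStream in) throws IOException {
        long v = 0;
        for (int i = 0; i < 8; i++) {
            int b = in.read();
            if (b < 0) throw new EOFException();
            v = (v << 8) | b;
        }
        return v;
    }

    // The fast pattern: one bulk read, then assemble in big-endian order.
    static long readLongBulk(DataInputStream in) throws IOException {
        byte[] buf = new byte[8];
        in.readFully(buf);
        long v = 0;
        for (byte b : buf) v = (v << 8) | (b & 0xFF);
        return v;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeLong(0x0102030405060708L);
        byte[] data = bos.toByteArray();
        long a = readLongByteByByte(new ByteArrayInputStream(data));
        long b = readLongBulk(new DataInputStream(new ByteArrayInputStream(data)));
        System.out.println(a == b && a == 0x0102030405060708L); // true
    }
}
```

Both decode the same big-endian value; the difference is purely in how many read calls reach the underlying stream.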

This message was sent by Atlassian JIRA
