cassandra-commits mailing list archives

From "Stefania (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)
Date Tue, 18 Aug 2015 08:46:47 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14700948#comment-14700948
] 

Stefania commented on CASSANDRA-8630:
-------------------------------------

The slowness with the uncompressed mmapped segments is caused by the rate limiter, which ultimately
comes from the compaction throughput setting; see _mmaped_uncomp_hotspot.png_ attached. Whereas
before we simply looped over a sorted list of mmapped segments and returned a {{ByteBufferDataInput}}
for each of them, we now have a sorted map of segments that are swapped in and out by the
RAR rebuffer method. Because the rate limiter was applied on every call to rebuffer, mmapped
segments became much slower.
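The structural change described above can be sketched roughly as follows (hypothetical class and method names; the actual {{RandomAccessReader}} code differs, and heap buffers stand in for real mmapped regions):

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch only: heap ByteBuffers stand in for mmapped regions of an sstable.
class MmappedSegments {
    // Sorted map keyed by each segment's start offset in the file.
    private final NavigableMap<Long, ByteBuffer> segments = new TreeMap<>();

    void add(long startOffset, ByteBuffer segment) {
        segments.put(startOffset, segment);
    }

    // rebuffer(): find the segment covering 'position' and return a view
    // positioned at it. For mmapped files this is just a map lookup and a
    // pointer swap -- no IO happens here, which is why rate-limiting this
    // call penalises mmapped reads so heavily.
    ByteBuffer rebuffer(long position) {
        Map.Entry<Long, ByteBuffer> e = segments.floorEntry(position);
        if (e == null)
            throw new IllegalArgumentException("position before first segment: " + position);
        ByteBuffer view = e.getValue().duplicate();
        view.position((int) (position - e.getKey()));
        return view;
    }
}
```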

If we instead apply the rate limiter only just before actually reading, as opposed to on every
call to rebuffer, here are the results:

||Version||Run 1||Run 2||Run 3||Rounded AVG||
|8630 comp|17.48|16.77|16.26|17|
|8630 uncomp|15.51|17.5|17.7|17|
|TRUNK comp|17.95|17.64|17.72|18|
|TRUNK uncomp|20.81|20.01|18.81|20|
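A minimal sketch of the change benchmarked above (hypothetical names; the real code paths acquire Cassandra's {{RateLimiter}}): permits are acquired in proportion to the bytes actually consumed, rather than once per rebuffer call.

```java
import java.util.function.IntConsumer;

// Sketch only: 'limiter' stands in for RateLimiter.acquire(bytes).
class ThrottledCopy {
    // Pre-fix behaviour: the limiter was invoked from rebuffer(), i.e. once
    // per segment swap, stalling mmapped reads that do no actual IO.
    // Post-fix behaviour, sketched here: throttle just before consuming
    // bytes, proportional to the number of bytes read.
    static int read(byte[] src, int srcPos, byte[] dst, IntConsumer limiter) {
        int n = Math.min(dst.length, src.length - srcPos);
        limiter.accept(n);                 // throttle scales with real IO
        System.arraycopy(src, srcPos, dst, 0, n);
        return n;
    }
}
```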

I am not sure I fully understand why the compressed case was not affected as much, since the
segments are quite big in the uncompressed case too. I would also like to know whether there
is a way to have Flight Recorder look at total time rather than just CPU time; without [Visual
VM|https://visualvm.java.net] I would not have been able to find this.

> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>
>                 Key: CASSANDRA-8630
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core, Tools
>            Reporter: Oleg Anastasyev
>            Assignee: Stefania
>              Labels: compaction, performance
>             Fix For: 3.x
>
>         Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, flight_recorder_001_files.tar.gz,
flight_recorder_002_files.tar.gz, mmaped_uncomp_hotspot.png
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.) a lot of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as well as their matching write* methods, are implemented as numerous byte-by-byte reads and writes.
> This also results in a lot of syscalls.
> A quick microbenchmark shows that simply reimplementing these methods gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read<Type> and SequentialWriter.write<Type> methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and ColumnNameHelper.maxComponents, which were on my profiler's hotspot list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster on uncompressed sstables and 15% faster on compressed ones.
> A production deployment shows much lower CPU load for compaction.
> (I attached a CPU load graph from one of our production clusters; orange is niced CPU load, i.e. compaction, and yellow is user, i.e. tasks not related to compaction.)
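The byte-by-byte pattern the original patch targets can be sketched as follows (illustrative helpers, not the actual RandomAccessReader/SequentialWriter code): DataInput's default readLong() issues eight single-byte reads, whereas reading against an in-memory buffer fetches all eight bytes in one call.

```java
import java.nio.ByteBuffer;

class ReadLongSketch {
    // What java.io.DataInputStream.readLong() effectively does over read():
    // eight separate one-byte fetches, assembled big-endian.
    static long readLongByteByByte(ByteBuffer in) {
        long v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | (in.get() & 0xFFL);
        return v;
    }

    // The bulk alternative: one bounds check, one eight-byte fetch.
    // ByteBuffer is big-endian by default, matching DataInput's byte order.
    static long readLongBulk(ByteBuffer in) {
        return in.getLong();
    }
}
```

Both produce the same value; the win is avoiding per-byte call overhead (and, on real files, per-byte syscalls).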



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
