cassandra-commits mailing list archives

From "Stefania (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)
Date Tue, 04 Aug 2015 07:53:06 GMT


Stefania commented on CASSANDRA-8630:

[~benedict], sorry for the delay, I finally found the time to get back to this. I have already
moved the mmap segments into the RAR and made {{MemoryInputStream}} extend {{NIODataInputStream}}.

Just to make sure I understood you correctly before I carry on with the trickier part: making
{{RandomAccessReader}} extend {{NIODataInputStream}} requires changing the way {{NIODataInputStream}}
reads data, in that we cannot afford to have any leftover bytes in the buffer before calling
{{readNext}}, as this would not work for mmapped segments. I guess this is the whole point of
the optimization: the fast and slow paths get implemented in {{NIODataInputStream}}, and the
RAR then just implements {{FileDataInput}} and overrides {{readNext}} by either refilling the
whole buffer with a page-aligned read or swapping in a memory-mapped segment. This requires
the buffer in {{NIODataInputStream}} to be protected rather than private, and not final.

Is my understanding correct?
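To make the design above concrete, here is a minimal sketch of that hierarchy. It is not the actual Cassandra code: the class names ({{BufferedStreamSketch}}, {{ReaderSketch}}) are hypothetical stand-ins for {{NIODataInputStream}} and {{RandomAccessReader}}, and it assumes the precondition discussed above, namely that the buffer is fully drained before {{readNext}} is called.

```java
import java.nio.ByteBuffer;

// Stand-in for NIODataInputStream: reads multi-byte primitives from a
// shared buffer and calls readNext() when the buffer is exhausted.
abstract class BufferedStreamSketch
{
    // Protected and non-final so a subclass can swap in a new buffer
    // (e.g. a memory-mapped segment), as discussed above.
    protected ByteBuffer buffer = ByteBuffer.allocate(0);

    // Refill the buffer with a page-aligned read, or swap in a new segment.
    protected abstract void readNext();

    public long readLong()
    {
        if (buffer.remaining() < Long.BYTES)
            readNext(); // assumed precondition: no leftover bytes remain
        return buffer.getLong();
    }
}

// Stand-in for RandomAccessReader: readNext() swaps in the next
// (pretend) mmapped segment instead of copying bytes.
class ReaderSketch extends BufferedStreamSketch
{
    private final ByteBuffer[] segments; // stand-ins for mmapped segments
    private int next = 0;

    ReaderSketch(ByteBuffer... segments)
    {
        this.segments = segments;
    }

    @Override
    protected void readNext()
    {
        buffer = segments[next++].duplicate(); // swap, no copy
    }
}
```

The point of the protected, non-final buffer is visible in {{readNext}}: the subclass replaces the buffer reference wholesale rather than copying data into it.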

> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>                 Key: CASSANDRA-8630
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core, Tools
>            Reporter: Oleg Anastasyev
>            Assignee: Stefania
>              Labels: compaction, performance
>             Fix For: 3.x
>         Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, flight_recorder_001_files.tar.gz
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a lot of CPU is
lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as well as their
matching write* methods, are implemented as numerous byte-by-byte reads and writes.
> This also makes a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods gives an 8x speed
increase.
> The attached patch implements RandomAccessReader.read<Type> and SequencialWriter.write<Type>
methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and ColumnNameHelper.maxComponents,
which were on my profiler's hotspot method list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster on uncompressed
sstables and 15% faster on compressed ones.
> A deployment to production shows much lower CPU load for compaction.
> (I attached a CPU load graph from one of our production nodes; orange is niced CPU load, i.e.
compaction; yellow is user load, i.e. tasks not related to compaction.)
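The byte-by-byte cost described in the issue can be illustrated in isolation. This is a hedged demonstration, not the patch itself: the {{CountingStream}} helper is hypothetical, and it uses DataInputStream (whose readInt really does issue four single-byte read() calls) as a stand-in for the RAF read path, contrasted with a single bulk ByteBuffer access.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class ReadPathDemo
{
    // Hypothetical helper: counts how many times single-byte read() is hit.
    static class CountingStream extends ByteArrayInputStream
    {
        int singleByteReads = 0;

        CountingStream(byte[] buf)
        {
            super(buf);
        }

        @Override
        public synchronized int read()
        {
            singleByteReads++;
            return super.read();
        }
    }

    public static void main(String[] args) throws IOException
    {
        byte[] data = ByteBuffer.allocate(4).putInt(123456789).array();

        CountingStream slow = new CountingStream(data);
        int viaStream = new DataInputStream(slow).readInt(); // four read() calls
        int viaBuffer = ByteBuffer.wrap(data).getInt();      // one bulk access

        System.out.println(viaStream + " " + viaBuffer + " reads=" + slow.singleByteReads);
    }
}
```

Against a RandomAccessFile the same four calls become four bounds checks (or, unbuffered, four syscalls), which is the overhead the patch removes by reading the whole primitive from a buffer at once.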

This message was sent by Atlassian JIRA
