cassandra-commits mailing list archives

From "Stefania (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-9658) Re-enable memory-mapped index file reads on Windows
Date Mon, 29 Jun 2015 04:03:05 GMT


Stefania commented on CASSANDRA-9658:

I've run an additional test on cperf and here are [the results|]
on [blade_11_b|]. The difference between standard and
mmap on trunk is about 55k ops/sec (229,213 vs. 175,327), confirming what was observed in the
previous tests. However, 8894 reduces the difference somewhat (230,555 vs. 207,208).

What was the difference when you last tested?

The 8894 branch is based on trunk but has the latest page alignment optimizations, CASSANDRA-8894,
which depend on the page-aligned buffers of CASSANDRA-8897, already on trunk but not
in 2.2. I'm happy to spend more time to see if there are further optimizations to reduce this
difference, or to fix any regressions that contributed to increasing it in the first place.
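For anyone following along: the two access modes being compared are not shown in the thread, but they are selected via the `disk_access_mode` setting in `cassandra.yaml`. A minimal sketch, assuming a stock 2.2-era configuration (the chosen value here is illustrative, not what either test run used):

```yaml
# cassandra.yaml -- controls how sstable data and index files are read.
# "auto" picks mmap on 64-bit JVMs, "standard" forces buffered reads,
# and "mmap_index_only" memory-maps only the index files.
disk_access_mode: mmap_index_only
```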

The cleanup ticket that removes temporary descriptors, CASSANDRA-7066, is actually targeted
at trunk only, not 2.2. Is this the ticket we need to re-enable mmap on Windows (I seem to
recall this is the case from a comment posted there), or are CASSANDRA-8893 and CASSANDRA-8984

> Re-enable memory-mapped index file reads on Windows
> ---------------------------------------------------
>                 Key: CASSANDRA-9658
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Joshua McKenzie
>            Assignee: Joshua McKenzie
>              Labels: Windows, performance
>             Fix For: 2.2.x
> It appears that the impact of buffered vs. memory-mapped index file reads has changed
dramatically since I last tested. [Here are some results on various platforms we pulled together
yesterday w/2.2-HEAD|].
> TL;DR: On Linux we see a 40% hit in performance on reads, from 108k ops/sec to 64.8k ops/sec.
While surprising in itself, the really unexpected result (to me) is on Windows - with standard
access we're getting 16.8k ops/sec on our bare-metal perf boxes vs. 184.7k ops/sec with
memory-mapped index files, an over 10-fold increase in throughput. While testing w/standard
access, CPUs on the stress machine and C* node are both sitting < 4%, the network doesn't
appear bottlenecked, resource monitor doesn't show anything interesting, and performance counters
in the kernel show very little. Changes in thread count simply serve to increase median latency
w/out impacting any other visible metric we're measuring, so I'm at a loss as to why
the disparity is so huge on the platform.
> The combination of my changes to get the 2.1 branch to behave on Windows along with [~benedict]
and [~Stefania]'s changes in lifecycle and cleanup patterns on 2.2 should hopefully have us
in a state where transitioning back to using memory-mapped I/O on Windows will only cause
trouble on snapshot deletion. Fairly simple runs of stress w/compaction aren't popping up
any obvious errors on file access or renaming - I'm going to do some much heavier testing
(ccm multi-node clusters, long stress w/repair and compaction, etc.) and see if there are any
outstanding issues that need to be stamped out to call mmap'ed index files on Windows safe.
The one thing we'll never be able to support is deletion of snapshots while a node is running
and sstables are mapped, but for a > 10x throughput increase I think users would be willing
to make that sacrifice.
> The combination of the powercfg profile change, the kernel timer resolution, and memory-mapped
index files are giving some pretty interesting performance numbers on EC2.
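As an aside, the snapshot-deletion caveat described above can be sketched with plain `java.nio`, with no Cassandra code involved (the file created here is a made-up stand-in for an sstable index, not a real Cassandra file):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapReadDemo {
    public static void main(String[] args) throws IOException {
        // A small temporary file standing in for an sstable index.
        Path path = Files.createTempFile("index", ".db");
        Files.write(path, "hello index".getBytes(StandardCharsets.UTF_8));

        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            // Map the whole file read-only; reads now go through the page
            // cache rather than a read() syscall per access.
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }

        // On Windows, deleting the file here can fail while the mapping is
        // still alive (Java exposes no explicit unmap) -- the snapshot
        // deletion limitation discussed above. On Linux, unlinking a
        // mapped file succeeds.
        Files.delete(path);
    }
}
```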

This message was sent by Atlassian JIRA
