cassandra-commits mailing list archives

From "Ariel Weisberg (JIRA)" <j...@apache.org>
Subject [jira] [Reopened] (CASSANDRA-12071) Regression in flushing throughput under load after CASSANDRA-6696
Date Fri, 29 Jul 2016 22:04:21 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ariel Weisberg reopened CASSANDRA-12071:
----------------------------------------

It turns out this is still a problem: because the executor uses an unbounded LinkedBlockingQueue,
the ThreadPoolExecutor (TPE) will never actually spin up additional threads beyond its core pool size.

You can see that this was necessary for CASSANDRA-2178 as well.
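
For illustration, here is a minimal standalone sketch (not Cassandra's actual flush executor) of why an
unbounded LinkedBlockingQueue pins a TPE at its core pool size: the pool only adds threads past the core
size when the queue rejects an offer, and an unbounded queue never does.

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        // corePoolSize = 1, maximumPoolSize = 8. TPE only creates threads beyond
        // the core size when the work queue rejects an offer(); an unbounded
        // LinkedBlockingQueue never rejects, so the pool stays at 1 thread.
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(1, 8, 60, TimeUnit.SECONDS,
                                                        new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < 100; i++)
            tpe.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException e) { }
            });

        Thread.sleep(500);
        System.out.println("pool size = " + tpe.getPoolSize()); // prints 1
        tpe.shutdown();
    }
}
{code}

Bounding the queue (or using a SynchronousQueue with a rejection policy that blocks the caller) is one
general way to let such a pool actually grow toward maximumPoolSize under load.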

> Regression in flushing throughput under load after CASSANDRA-6696
> -----------------------------------------------------------------
>
>                 Key: CASSANDRA-12071
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12071
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local Write-Read Paths
>            Reporter: Ariel Weisberg
>            Assignee: Marcus Eriksson
>             Fix For: 3.8
>
>
> The way flushing used to work is that a ColumnFamilyStore could have multiple Memtables
> flushing at once and multiple ColumnFamilyStores could flush at the same time. The way it
> works now, there can be only a single flush of any ColumnFamilyStore & Memtable running
> in the C* process, and the number of threads applied to that flush is bounded by the number
> of disks in JBOD.
> This works OK most of the time, but occasionally flushing will be a little slower, ingest
> will outstrip it and then block on available memory. At that point you see several-second
> stalls that cause timeouts.
> This is a problem for reasonable configurations that don't use JBOD but have access to
> a fast disk that can handle some IO queuing (RAID, SSD).
> You can reproduce this on beefy hardware (12 cores / 24 threads, 64 GB of RAM, SSD) if you
> unthrottle compaction or set it to something like 64 megabytes/second, run with 8 compaction
> threads, and stress with the default write workload and a reasonable number of threads. I
> tested with 96.
> It started happening after about 60 gigabytes of data was loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
