cassandra-commits mailing list archives

From "Sylvain Lebresne (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-6431) Prevent same CF from being enqueued to flush more than once
Date Tue, 03 Dec 2013 08:13:36 GMT


Sylvain Lebresne commented on CASSANDRA-6431:

I might be misunderstanding the suggestion, but I would say that the fact that we block writes
if they come in faster than we're able to flush is "a feature" (to avoid OOM), even if all
writes go to the same sstable. That is, it could be that we're too aggressive in blocking
writes in some cases, because our heuristic for "writes are faster than we can flush" is not
good enough, but it's not entirely clear to me what not queuing 2 memtables for the same CF
achieves (outside of potentially letting the memtable we don't queue grow unbounded and OOM us)

> Prevent same CF from being enqueued to flush more than once
> -----------------------------------------------------------
>                 Key: CASSANDRA-6431
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Benedict
>            Assignee: Benedict
>            Priority: Minor
> As things stand we can, in certain circumstances, fill up the flush queue with multiple
requests to flush the same CF, which will lead to all writes blocking until the CF is flushed.
Ideally we would only enqueue each CF/Memtable once and, if requested to be flushed whilst
already enqueued, mark it to be requeued once the outstanding flush completes.
> On a related note, a single table can already block writes if it has <flush queue
size> or more secondary indexes. It might be worth deciding whether this is also a
problem and, if so, addressing it.
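The dedup-and-requeue idea described in the report can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not Cassandra's actual flush code: each CF is enqueued at most once, and a flush requested while the CF is already enqueued only sets a requeue mark that is honored when the outstanding flush completes.

```python
import threading
from collections import deque

class FlushQueue:
    """Sketch of a flush queue that enqueues each CF at most once.

    A flush requested while the CF is already enqueued (or flushing) is
    recorded, and the CF is re-added once the outstanding flush completes.
    Hypothetical structure, not Cassandra's implementation.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._queue = deque()    # CFs waiting to be flushed
        self._enqueued = set()   # CFs currently queued or flushing
        self._requeue = set()    # CFs that requested another flush meanwhile

    def request_flush(self, cf):
        with self._lock:
            if cf in self._enqueued:
                # Already queued/flushing: just mark it for requeue.
                self._requeue.add(cf)
            else:
                self._enqueued.add(cf)
                self._queue.append(cf)

    def next_cf(self):
        """Pop the next CF to flush, or None if the queue is empty."""
        with self._lock:
            return self._queue.popleft() if self._queue else None

    def flush_complete(self, cf):
        with self._lock:
            if cf in self._requeue:
                self._requeue.discard(cf)
                self._queue.append(cf)   # cf stays marked as enqueued
            else:
                self._enqueued.discard(cf)
```

With this shape, repeated flush requests for the same CF occupy at most one queue slot, so one busy CF cannot fill the flush queue and block writes to every other CF.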

This message was sent by Atlassian JIRA
