cassandra-commits mailing list archives

From "Blake Eggleston (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-12730) Thousands of empty SSTables created during repair - TMOF death
Date Tue, 08 Nov 2016 18:24:59 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-12730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15648353#comment-15648353 ]

Blake Eggleston commented on CASSANDRA-12730:
---------------------------------------------

[~pauloricardomg], CASSANDRA-9143 will properly isolate repaired, repair-in-progress, and unrepaired data for normal tables. I'm not familiar with the details of how MVs work, but looking at [the relevant parts of StreamReceiveTask|https://github.com/bdeggleston/cassandra/blob/de86ccf3a3b21e406a3e337019c2197bf15d8053/src/java/org/apache/cassandra/streaming/StreamReceiveTask.java#L185-L185], it _looks_ like the repairedAt value on the incoming sstables is basically discarded, which would explain why [~brstgt] hasn't had much luck using incremental repairs with them. So yeah, for MVs the repairedAt value (and the pendingRepair value added in CASSANDRA-9143) will probably need to be added to the mutation class or something and taken into consideration when flushing.
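
For illustration, here is a minimal, self-contained Java sketch of that idea. All class and method names in it ({{IncomingSSTable}}, {{applyViaWritePathLossy}}, {{applyViaWritePathPreserving}}) are hypothetical stand-ins, not Cassandra's actual classes; it only contrasts rebuilding streamed data through the write path while dropping the per-sstable repair metadata with a variant that carries repairedAt/pendingRepair on the mutation so a flush could honor it.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hypothetical, simplified model of the situation described above.
// None of these types are Cassandra's real classes.
public class RepairMetadataSketch {

    /** Stand-in for an incoming streamed sstable and its repair metadata. */
    static class IncomingSSTable {
        final long repairedAt;        // 0 means unrepaired
        final UUID pendingRepair;     // repair session id, per the CASSANDRA-9143 idea
        final List<String> partitions;

        IncomingSSTable(long repairedAt, UUID pendingRepair, List<String> partitions) {
            this.repairedAt = repairedAt;
            this.pendingRepair = pendingRepair;
            this.partitions = partitions;
        }
    }

    /** Stand-in for a mutation pushed through the normal write path. */
    static class Mutation {
        final String partition;
        // Hypothetical extra fields: carrying the repair metadata on the mutation
        // is roughly what the comment above suggests would be needed for MVs.
        final long repairedAt;
        final UUID pendingRepair;

        Mutation(String partition, long repairedAt, UUID pendingRepair) {
            this.partition = partition;
            this.repairedAt = repairedAt;
            this.pendingRepair = pendingRepair;
        }
    }

    /** Today's behavior for MVs (per the linked StreamReceiveTask code): metadata is dropped. */
    static List<Mutation> applyViaWritePathLossy(IncomingSSTable sstable) {
        List<Mutation> mutations = new ArrayList<>();
        for (String partition : sstable.partitions) {
            // repairedAt / pendingRepair are not propagated -> data ends up effectively unrepaired
            mutations.add(new Mutation(partition, 0L, null));
        }
        return mutations;
    }

    /** Sketch of the proposed change: propagate the metadata so the flush could honor it. */
    static List<Mutation> applyViaWritePathPreserving(IncomingSSTable sstable) {
        List<Mutation> mutations = new ArrayList<>();
        for (String partition : sstable.partitions) {
            mutations.add(new Mutation(partition, sstable.repairedAt, sstable.pendingRepair));
        }
        return mutations;
    }

    public static void main(String[] args) {
        IncomingSSTable streamed = new IncomingSSTable(
                1478629499000L, UUID.randomUUID(), List.of("k1", "k2"));

        System.out.println("lossy repairedAt:      "
                + applyViaWritePathLossy(streamed).get(0).repairedAt);
        System.out.println("preserving repairedAt: "
                + applyViaWritePathPreserving(streamed).get(0).repairedAt);
    }
}
{code}

The interesting part in the real code would be making the flush path group data by that carried metadata, which is the same kind of isolation CASSANDRA-9143 provides for normal tables.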

> Thousands of empty SSTables created during repair - TMOF death
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-12730
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12730
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local Write-Read Paths
>            Reporter: Benjamin Roth
>            Priority: Critical
>
> Last night I ran a repair on a keyspace with 7 tables and 4 MVs, each containing a few hundred million records. After a few hours a node died because of "too many open files".
> Normally one would just raise the limit, but we had already set it to 100k. The problem was that the repair created on the order of 100k SSTables for a certain MV. The strange thing is that these SSTables had almost no data (like 53 bytes, 90 bytes, ...). Some of them (<5%) had a few hundred KB, and very few (<1%) had normal sizes of >= a few MB. I could understand SSTables queuing up because they are flushed and not compacted in time, but then they should be at least a few MB each (depending on config and available memory), right?
> Of course the node then runs out of FDs, and I guess it is not a good idea to raise the limit even higher, as I expect that would just create even more empty SSTables before the node eventually dies.
> Only 1 CF (an MV) was affected. All other CFs (including MVs) behaved sanely. The empty SSTables were created evenly over time, 100-150 every minute. Among the empty SSTables there are also tables that look normal, i.e. a few MB in size.
> I didn't see any errors or exceptions in the logs until the TMOF occurred, just tons of streams due to the repair (which I actually run via cs-reaper as subrange, full repairs).
> After restarting that node (with no more repairs running), the number of SSTables went down again as they were slowly compacted away.
> According to [~zznate] this issue may be related to CASSANDRA-10342 + CASSANDRA-8641



