cassandra-commits mailing list archives

From "Benedict (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-6916) Preemptive opening of compaction result
Date Tue, 22 Apr 2014 15:12:16 GMT


Benedict commented on CASSANDRA-6916:

bq. when doing anticompaction, should we not clean up old readers? (repairedSSTableWriter.finish(false,

We have two writers open with the *same* readers here, so we only close the readers when we
finish the second writer. It's a bit clunky, I know, but it's not a common occurrence to be
rewriting into two places. I'll add comments.
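The lifecycle described above can be sketched as a small reference count: both writers share the same readers, and the readers are only closed when the second writer finishes. This is a toy model with hypothetical names (`SharedReaders`, `release()`), not Cassandra's actual API.

```java
// Toy sketch: two writers share one set of readers during anticompaction.
// The readers are closed only when the *second* writer finishes.
class SharedReaders {
    private int openWriters;
    private boolean closed;

    SharedReaders(int writers) { this.openWriters = writers; }

    // Each writer calls release() from its finish(); the underlying
    // readers are actually closed only when the last writer releases.
    synchronized void release() {
        if (--openWriters == 0)
            closed = true; // close the shared readers here
    }

    synchronized boolean isClosed() { return closed; }
}

public class AnticompactionSketch {
    public static void main(String[] args) {
        SharedReaders readers = new SharedReaders(2); // repaired + unrepaired writers
        readers.release(); // first writer finishes: readers must stay open
        System.out.println(readers.isClosed()); // false
        readers.release(); // second writer finishes: readers closed
        System.out.println(readers.isClosed()); // true
    }
}
```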

bq. do we need to move starts back in SSTableRewriter.resetAndTruncate()? If we resetAndTruncate
right after doing early opening, i think we could create a gap between the start in the compacting
file and the end in the written one

We always open before performing any append, and open with exclusive upper bounds, i.e. we only ever truncate back to a position the early-opened reader is still safe at. That said, this is definitely worth a comment or two.
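To make the invariant concrete, here is a minimal worked example with hypothetical variable names (not the real SSTableRewriter internals): because the early open happens before any subsequent append and its bound is exclusive, any later truncation back to the saved position cannot cut into data the early-opened reader exposes.

```java
// Worked example of the truncation-safety invariant (hypothetical names):
// the early-open bound is taken at a position before further appends, so
// resetAndTruncate() can never rewind past what the reader exposes.
public class EarlyOpenInvariant {
    public static void main(String[] args) {
        long mark = 4096;           // position saved before a batch of appends
        long earlyOpenBound = 4096; // exclusive bound: taken at the mark, before appending
        long appended = 8192;       // more data appended after the early open

        long truncatedTo = mark;    // resetAndTruncate() rewinds to the mark
        // the early-opened reader never exposes data past the truncation point
        System.out.println(earlyOpenBound <= truncatedTo); // true
    }
}
```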

I've copied your tweaks and also added another fix: I wasn't dealing with marking "compacted" correctly everywhere. We need to mark the compaction finished before deleting the old files, and mark the preemptively opened reader as compacting before adding it to the live set, to avoid it being compacted before it's fully written (this was in the earlier patches but dropped out somewhere along the way).
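The ordering matters here: the reader must be marked as compacting *before* it becomes visible in the live set, or a concurrent compaction could pick it up mid-write. A minimal sketch of that ordering, with hypothetical names (`compacting`, `live`, `publishEarlyOpened`), not Cassandra's actual data tracker:

```java
// Sketch of the publish ordering: mark the early-opened reader as
// compacting first, then add it to the live set, so it is never
// observable as both live and available for compaction.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MarkBeforePublish {
    static final Set<String> compacting = ConcurrentHashMap.newKeySet();
    static final Set<String> live = ConcurrentHashMap.newKeySet();

    static void publishEarlyOpened(String reader) {
        compacting.add(reader); // 1. mark as compacting first
        live.add(reader);       // 2. only then make it visible to others
    }

    static boolean eligibleForCompaction(String reader) {
        return live.contains(reader) && !compacting.contains(reader);
    }

    public static void main(String[] args) {
        publishEarlyOpened("tmp-sstable-1");
        // live, but never eligible for compaction while still being written
        System.out.println(eligibleForCompaction("tmp-sstable-1")); // false
    }
}
```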

Also, I left out the new HashSet() inside the Rewriter, as I want the provided set to remain mutable, so that users of the rewriter have access to the currently extant versions. It's not actually necessary anywhere, but I think it will prevent future surprises.
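The design choice is that the rewriter aliases the caller's set instead of defensively copying it, so the caller observes reader swaps as they happen. An illustrative sketch with hypothetical names (`Rewriter`, `replace`), not the real SSTableRewriter:

```java
// Sketch: the rewriter keeps a reference to the caller-supplied mutable
// set rather than copying it into a new HashSet, so the caller always
// sees the currently extant readers as the rewriter swaps them.
import java.util.HashSet;
import java.util.Set;

class Rewriter {
    private final Set<String> extant; // deliberately NOT copied

    Rewriter(Set<String> extant) { this.extant = extant; }

    void replace(String oldReader, String newReader) {
        extant.remove(oldReader);
        extant.add(newReader);
    }
}

public class SharedSetSketch {
    public static void main(String[] args) {
        Set<String> readers = new HashSet<>();
        readers.add("sstable-old");
        Rewriter rw = new Rewriter(readers);
        rw.replace("sstable-old", "sstable-new");
        // the caller's own set reflects the rewriter's changes
        System.out.println(readers.contains("sstable-new")); // true
    }
}
```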

I've uploaded to the repository, but intend to add some comments in the places we mentioned. The functional changes should be stable now, if you're happy with them.

> Preemptive opening of compaction result
> ---------------------------------------
>                 Key: CASSANDRA-6916
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Benedict
>            Assignee: Benedict
>            Priority: Minor
>              Labels: performance
>             Fix For: 2.1 beta2
>         Attachments: 6916-stock2_1.mixed.cache_tweaks.tar.gz, 6916-stock2_1.mixed.logs.tar.gz,
6916v3-preempive-open-compact.logs.gz, 6916v3-preempive-open-compact.mixed.2.logs.tar.gz,
> Related to CASSANDRA-6812, but a little simpler: when compacting, we mess quite badly
with the page cache. One thing we can do to mitigate this problem is to use the sstable we're
writing before we've finished writing it, and to drop the regions from the old sstables from
the page cache as soon as the new sstables have them (even if they're only written to the
page cache). This should minimise any page cache churn, as the old sstables must be larger
than the new sstable, and since both will be in memory, dropping the old sstables is at least
as good as dropping the new.
> The approach is quite straightforward. Every X MB written:
> # grab the flushed length of the index file;
> # grab the second-to-last index summary record, after excluding those that point to positions
after the flushed length;
> # open the index file, and check that our last record doesn't occur outside the flushed
length of the data file (pretty unlikely);
> # open the sstable with the calculated upper bound.
> Some complications:
> # must keep a running copy of the compression metadata to reopen with
> # we need to be able to replace an sstable with itself but a different lower bound
> # we need to drop the old page cache only when readers have finished
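The bound-selection steps above can be sketched as a small function. All names here are hypothetical (`chooseUpperBound`, `summaryIndexPositions`), and this only models step 2 — filtering summary entries by the flushed index length and taking the second-to-last survivor as the exclusive upper bound:

```java
// Sketch of choosing the early-open upper bound: keep only index summary
// entries within the flushed length of the index file, then take the
// second-to-last of them as the exclusive upper bound.
import java.util.List;

public class SummaryBound {
    // Returns the index position to use as the exclusive upper bound,
    // or -1 if fewer than two summary entries fall inside the flushed length.
    static long chooseUpperBound(List<Long> summaryIndexPositions, long flushedIndexLength) {
        int last = -1;
        for (int i = 0; i < summaryIndexPositions.size(); i++)
            if (summaryIndexPositions.get(i) <= flushedIndexLength)
                last = i;
        // second-to-last qualifying entry, as described in step 2
        return last >= 1 ? summaryIndexPositions.get(last - 1) : -1;
    }

    public static void main(String[] args) {
        List<Long> summary = List.of(0L, 1024L, 2048L, 3072L);
        // entries 0, 1024, 2048 are within the flushed length of 2500;
        // the second-to-last of those is 1024
        System.out.println(chooseUpperBound(summary, 2500)); // 1024
    }
}
```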

This message was sent by Atlassian JIRA
