cassandra-commits mailing list archives

From "Tomas Ramanauskas (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers
Date Tue, 01 Dec 2015 16:23:11 GMT


Tomas Ramanauskas commented on CASSANDRA-7066:

Hi, I created a new issue:

This is the bit of code that is throwing the java.lang.NullPointerException:

    /**
     * Check data directories for old files that can be removed when migrating from 2.1 or
     * 2.2 to 3.0, these checks can be removed in 4.0, see CASSANDRA-7066
     */
    public static void migrateDataDirs()
    {
        Iterable<String> dirs = Arrays.asList(DatabaseDescriptor.getAllDataFileLocations());
        for (String dataDir : dirs)
        {
            logger.trace("Checking directory {} for old files", dataDir);
            File dir = new File(dataDir);
            assert dir.exists() : dir + " should have been created by startup checks";

            for (File ksdir : dir.listFiles((d, n) -> d.isDirectory()))
            {
                for (File cfdir : ksdir.listFiles((d, n) -> d.isDirectory()))
                {
                    if (Descriptor.isLegacyFile(cfdir))
                        FileUtils.delete(cfdir.listFiles((d, n) -> Descriptor.isLegacyFile(new File(d, n))));
                }
            }
        }
    }
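A likely trigger for this exception (my reading, not confirmed in this thread): java.io.File.listFiles() returns null rather than an empty array when the path does not denote a directory or an I/O error occurs, and iterating that null result in a for-each loop throws a NullPointerException. A minimal standalone demonstration:

```java
import java.io.File;

public class ListFilesNullDemo
{
    public static void main(String[] args)
    {
        // listFiles() returns null (not an empty array) when the path is
        // not a directory or an I/O error occurs while listing it.
        File notADir = new File("definitely-missing-path-7066");
        File[] children = notADir.listFiles();
        System.out.println(children == null); // prints true

        // A for-each loop over the null result throws NullPointerException,
        // just like the nested loops in migrateDataDirs() would.
        try
        {
            for (File f : children)
                System.out.println(f);
        }
        catch (NullPointerException e)
        {
            System.out.println("NPE, as in the report");
        }
    }
}
```

Guarding each listFiles() result with a null check (or listing directories via java.nio.file, which throws IOException instead of returning null) would avoid the crash.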

> Simplify (and unify) cleanup of compaction leftovers
> ----------------------------------------------------
>                 Key: CASSANDRA-7066
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benedict
>            Assignee: Stefania
>            Priority: Minor
>              Labels: benedict-to-commit, compaction
>             Fix For: 3.0 alpha 1
>         Attachments: 7066.txt
> Currently we manage a list of in-progress compactions in a system table, which we use
> to clean up incomplete compactions when we're done. The problem with this is that 1) it's
> a bit clunky (and leaves us in positions where we can unnecessarily clean up completed
> files, or conversely not clean up files that have been superseded); and 2) it's only used
> for regular compaction - no other compaction types are guarded in the same way, so they
> can result in duplication if we fail before deleting the replacements.
> I'd like to see each sstable store its direct ancestors in its metadata, and on startup
> we simply delete any sstables that occur in the union of all ancestor sets. This way, as
> soon as we finish writing we're capable of cleaning up any leftovers, so we never get
> duplication. It's also much easier to reason about.
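The proposed startup cleanup can be sketched roughly as follows. All names here are hypothetical stand-ins (the `SSTable` record and its `ancestors` set model whatever metadata a real implementation would carry); this only illustrates the "delete anything in the union of ancestor sets" idea, not Cassandra's actual code:

```java
import java.util.*;

public class AncestorCleanupSketch
{
    // Minimal stand-in for an sstable: its name plus the names of the
    // sstables it was compacted from (its direct ancestors).
    public record SSTable(String name, Set<String> ancestors) {}

    // On startup, any sstable appearing in the union of all ancestor sets
    // was fully replaced by a finished compaction, so it is a leftover.
    public static List<SSTable> cleanup(List<SSTable> onDisk)
    {
        Set<String> replaced = new HashSet<>();
        for (SSTable t : onDisk)
            replaced.addAll(t.ancestors());

        List<SSTable> live = new ArrayList<>();
        for (SSTable t : onDisk)
            if (!replaced.contains(t.name()))
                live.add(t);
        return live;
    }

    public static void main(String[] args)
    {
        // c was compacted from a and b, but the process died before a and b
        // were deleted, so all three are present on disk at startup.
        List<SSTable> onDisk = List.of(
            new SSTable("a", Set.of()),
            new SSTable("b", Set.of()),
            new SSTable("c", Set.of("a", "b")));
        System.out.println(cleanup(onDisk)); // only c survives
    }
}
```

Because the ancestor sets travel with the finished sstable itself, the cleanup needs no separate progress table: a compaction's leftovers become deletable the instant its output is fully written.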

This message was sent by Atlassian JIRA
