cassandra-commits mailing list archives

From "Benjamin Coverston (JIRA)" <j...@apache.org>
Subject [jira] [Issue Comment Edited] (CASSANDRA-1608) Redesigned Compaction
Date Sat, 28 May 2011 17:11:47 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13040541#comment-13040541 ]

Benjamin Coverston edited comment on CASSANDRA-1608 at 5/28/11 5:11 PM:
------------------------------------------------------------------------

There's probably nothing that prevents us from doing that. Is our goal here to replace compaction
entirely?

The manifest information consists, minimally, of the level information and the ranges. For us, ranges are easy, as they are readily available when the SSTables are read in at restart, flush, or compaction.
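
To make that concrete, here is a minimal sketch of the per-SSTable facts such a manifest would carry; ManifestEntry and its field names are illustrative assumptions, not existing Cassandra classes:

    // Illustrative only: the minimal per-SSTable information a leveled manifest
    // needs -- which level the table sits in and the key range it covers.
    public final class ManifestEntry
    {
        public final String sstableFilename; // e.g. the -Data.db component name
        public final int level;              // 0 = freshly flushed, higher levels = larger, older data
        public final String firstToken;      // smallest token covered by the SSTable
        public final String lastToken;       // largest token covered by the SSTable

        public ManifestEntry(String sstableFilename, int level, String firstToken, String lastToken)
        {
            this.sstableFilename = sstableFilename;
            this.level = level;
            this.firstToken = firstToken;
            this.lastToken = lastToken;
        }
    }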

Taking a stab at this, I made the compaction manager abstract, then created a concrete implementation for the current compaction behavior. Happily hacking on a level-based compaction manager, I kept running into a dilemma: where do I store the level information? There are a few options:

1. The descriptor: a hack, simple, but it also adds information that probably wouldn't be used by any other compaction manager, yet it would be there. Unless we're moving headlong into a level-db approach, I'm not super excited about this.

2. Store it on a per-sstable basis -in- the sstable: to continue along this path I would like to have a standard place to put "extra" metadata in the SSTables, a header of some sort. I like the idea of using a metadata block in the SSTables to store this type of information.

3. Use an on-disk manifest. Pro: only my compaction manager needs to deal with this information, but there is a non-trivial amount of bookkeeping that would need to be done to ensure it is kept up to date and valid.

EDIT:
4. This is probably the best option: create a new component type, METADATA_STORE, which will hold namespaced key/value pairs on a per-sstable basis (see the sketch just below).
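
A minimal sketch of what option 4 could look like, assuming a hypothetical per-SSTable store of namespaced key/value pairs; SSTableMetadataStore and the "compaction"/"level" names are illustrative, not actual Cassandra APIs:

    // Illustrative sketch of option 4: a METADATA_STORE-style component holding
    // namespaced key/value pairs for a single SSTable. All names are assumptions.
    import java.util.HashMap;
    import java.util.Map;

    public final class SSTableMetadataStore
    {
        // namespace -> (key -> value), e.g. "compaction" -> {"level" -> "3"}
        private final Map<String, Map<String, String>> namespaces = new HashMap<String, Map<String, String>>();

        public void put(String namespace, String key, String value)
        {
            Map<String, String> ns = namespaces.get(namespace);
            if (ns == null)
            {
                ns = new HashMap<String, String>();
                namespaces.put(namespace, ns);
            }
            ns.put(key, value);
        }

        public String get(String namespace, String key)
        {
            Map<String, String> ns = namespaces.get(namespace);
            return ns == null ? null : ns.get(key);
        }
    }

    // e.g. a level-based compaction manager could do (still hypothetical):
    //   store.put("compaction", "level", Integer.toString(3));
    //   int level = Integer.parseInt(store.get("compaction", "level"));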

> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the moment, compaction is kicked off based on a write access pattern, not a read access pattern. In most cases, you want the opposite. You want to be able to track how well each SSTable is performing in the system. If we were to keep in-memory statistics for each SSTable and prioritize them based on how often they are accessed and on their bloom filter hit/miss ratios, we could intelligently group the SSTables that are being read most often and schedule them for compaction. We could also schedule lower-priority maintenance on SSTables that are not often accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the ability to better utilize our bloom filters in a predictable manner. At the moment, beyond a certain size the bloom filters become less reliable. This would also allow us to group the most-accessed data. Currently an SSTable can grow to a point where large portions of its data might not actually be accessed very often.
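
For illustration, a minimal sketch of the per-SSTable read statistics the proposal above describes, with one possible way to turn them into a compaction priority; the class, method names, and scoring heuristic are assumptions, not actual Cassandra code:

    // Illustrative only: count reads and bloom filter hits/misses per SSTable so a
    // scheduler could rank hot, poorly-filtered SSTables for compaction first.
    import java.util.concurrent.atomic.AtomicLong;

    public final class SSTableReadStats
    {
        private final AtomicLong reads = new AtomicLong();
        private final AtomicLong bloomFilterHits = new AtomicLong();
        private final AtomicLong bloomFilterMisses = new AtomicLong();

        public void recordRead()            { reads.incrementAndGet(); }
        public void recordBloomFilterHit()  { bloomFilterHits.incrementAndGet(); }
        public void recordBloomFilterMiss() { bloomFilterMisses.incrementAndGet(); }

        // One possible heuristic (an assumption, not from the ticket): weight read
        // volume by the fraction of bloom filter checks that failed to screen out
        // the read, so frequently read tables with weak filters score highest.
        public double compactionPriority()
        {
            long checks = bloomFilterHits.get() + bloomFilterMisses.get();
            double missFraction = checks == 0 ? 0.0 : (double) bloomFilterMisses.get() / checks;
            return reads.get() * (1.0 + missFraction);
        }
    }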

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
