cassandra-commits mailing list archives

From "Benjamin Coverston (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-1608) Redesigned Compaction
Date Tue, 14 Jun 2011 07:24:47 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Coverston updated CASSANDRA-1608:
------------------------------------------

    Attachment: 0001-leveldb-style-compaction.patch

Adding a patch for leveldb-style compaction. I see this as a 'good start' and I'm looking
for some further input. I'm not going to be able to work on this for the next week or so,
so I'm putting it here to start some discussion on this approach.

This implementation requires no durable manifest.

Ranges are created at SSTable creation (flush or compaction) or SSTable index creation.

The exponent used for levels is 10, i.e. each level's target size is 10x that of the level
below it.
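
For illustration, here's a minimal sketch of those target sizes (names are hypothetical,
not taken from the attached patch), assuming the per-SSTable size is fixed:

    // Hypothetical sketch: target capacity per level with a base-10 growth
    // factor. Names are illustrative, not code from the attached patch.
    public final class LevelSizes
    {
        private static final int GROWTH_FACTOR = 10; // each level is 10x the previous

        // Target total bytes for a level, given the fixed per-SSTable size.
        public static long targetBytes(int level, long sstableSizeBytes)
        {
            long target = sstableSizeBytes;
            for (int i = 0; i < level; i++)
                target *= GROWTH_FACTOR;
            return target; // sstableSizeBytes * 10^level
        }

        public static void main(String[] args)
        {
            long sstableSize = 64L * 1024 * 1024; // e.g. a 64 MB flush size
            for (int level = 0; level <= 4; level++)
                System.out.printf("L%d target: %d bytes%n", level, targetBytes(level, sstableSize));
        }
    }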

Preliminary runs show that high write rates do make level 0 to level 1 promotions back up
substantially, but once that backlog clears, promotions out of level 1 seem to be very fast.

I found the best performance by disabling compaction throughput throttling and setting the
number of concurrent compactors to 1.

The SSTable size in this implementation is determined by the flush-size-in-MB setting.

The recovery path reads the list of SSTables, groups them by non-overlapping ranges, and
then places each group in its appropriate level.
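
As a rough illustration of that grouping step (hypothetical types; it assumes each SSTable
knows its min and max key), recovery can place each table into the first level whose
tables it does not overlap:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the recovery grouping described above; not code
    // from the attached patch. A table goes into the first level where its
    // key range overlaps nothing already placed there.
    final class RecoverySketch
    {
        static final class Table
        {
            final String minKey, maxKey;

            Table(String minKey, String maxKey)
            {
                this.minKey = minKey;
                this.maxKey = maxKey;
            }

            boolean overlaps(Table other)
            {
                return minKey.compareTo(other.maxKey) <= 0
                    && other.minKey.compareTo(maxKey) <= 0;
            }
        }

        static List<List<Table>> groupIntoLevels(List<Table> tables)
        {
            List<List<Table>> levels = new ArrayList<List<Table>>();
            for (Table t : tables)
            {
                List<Table> target = null;
                for (List<Table> level : levels)
                {
                    boolean clash = false;
                    for (Table placed : level)
                    {
                        if (t.overlaps(placed))
                        {
                            clash = true;
                            break;
                        }
                    }
                    if (!clash)
                    {
                        target = level;
                        break;
                    }
                }
                if (target == null)
                {
                    target = new ArrayList<Table>();
                    levels.add(target);
                }
                target.add(t);
            }
            return levels;
        }
    }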

Finally, credit is due to the leveldb team, as this design was inspired by the leveldb
implementation.

> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>         Attachments: 0001-leveldb-style-compaction.patch
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more thinking on
> this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the moment,
> compaction is kicked off based on a write access pattern, not a read access pattern. In
> most cases, you want the opposite. You want to be able to track how well each SSTable is
> performing in the system. If we were to keep in-memory statistics for each SSTable and
> prioritize them by access frequency and bloom filter hit/miss ratios, we could
> intelligently group the SSTables that are read most often and schedule them for
> compaction. We could also schedule lower-priority maintenance on SSTables that are not
> often accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the ability to
> better utilize our bloom filters in a predictable manner. At the moment, after a certain
> size, the bloom filters become less reliable. This would also allow us to group the
> most-accessed data. Currently the size of an SSTable can grow to a point where large
> portions of the data might not actually be accessed as often.
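
For what it's worth, a rough sketch of the statistics-driven prioritization described
above might look something like this (all names hypothetical, not from the attached patch):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of read-driven compaction priority; names are
    // illustrative, not from the attached patch. Hottest tables (most reads,
    // then worst bloom filter false-positive ratio) are compacted first.
    final class PrioritySketch
    {
        static final class TableStats
        {
            final String name;
            final long reads;               // reads routed to this SSTable
            final long bloomFalsePositives; // filter said "maybe", key absent

            TableStats(String name, long reads, long bloomFalsePositives)
            {
                this.name = name;
                this.reads = reads;
                this.bloomFalsePositives = bloomFalsePositives;
            }

            double falsePositiveRatio()
            {
                return reads == 0 ? 0.0 : (double) bloomFalsePositives / reads;
            }
        }

        static List<TableStats> compactionOrder(List<TableStats> tables)
        {
            List<TableStats> sorted = new ArrayList<TableStats>(tables);
            Collections.sort(sorted, new Comparator<TableStats>()
            {
                public int compare(TableStats a, TableStats b)
                {
                    if (a.reads != b.reads)
                        return a.reads > b.reads ? -1 : 1; // more reads first
                    return Double.compare(b.falsePositiveRatio(), a.falsePositiveRatio());
                }
            });
            return sorted;
        }
    }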

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
