cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-1608) Redesigned Compaction
Date Fri, 26 Aug 2011 19:33:29 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091974#comment-13091974 ]

Jonathan Ellis commented on CASSANDRA-1608:
-------------------------------------------

Committed with some changes:
- Switched DataTracker registration to be eager instead of lazy -- otherwise sstables created before any compaction happened (during log replay, for instance) would not be added to the manifest. Also added unregistration when a new Strategy is created on schema change.
- Added some debug logging to Manifest.
- Moved the LevelDB classes from db.leveldb to db.compaction, so that I could add an "unqualified compaction strategy will be looked for in oac.db.compaction" rule (see the sketch after this list).
- Renamed LevelDB* to Leveled*.
- Ran dos2unix on the intervaltree classes and reformatted.
- Added code so we do not attempt to compact an empty L0, which previously caused an assertion error.
- Added getUnleveledSSTables to CFSMBean.
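
A minimal Java sketch of that lookup rule, assuming a hypothetical CompactionStrategyResolver helper; the class and method names here are illustrative, not the committed implementation:

    // Hypothetical sketch (not the committed code): resolve an unqualified
    // compaction strategy name against org.apache.cassandra.db.compaction
    // before treating it as a fully qualified class name.
    public class CompactionStrategyResolver
    {
        private static final String DEFAULT_PACKAGE = "org.apache.cassandra.db.compaction.";

        public static Class<?> resolve(String className) throws ClassNotFoundException
        {
            // Names without a package qualifier are assumed to live in oac.db.compaction
            String qualified = className.contains(".") ? className : DEFAULT_PACKAGE + className;
            return Class.forName(qualified);
        }
    }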

Still to do:
- CASSANDRA-3085

> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>            Assignee: Benjamin Coverston
>         Attachments: 1608-22082011.txt, 1608-v2.txt, 1608-v4.txt, 1608-v5.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more thinking on this subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the moment, compaction is kicked off based on a write access pattern, not a read access pattern. In most cases, you want the opposite. You want to be able to track how well each SSTable is performing in the system. If we were to keep in-memory statistics for each SSTable and prioritize them by access frequency and bloom filter hit/miss ratio, we could intelligently group the sstables that are being read most often and schedule them for compaction. We could also schedule lower-priority maintenance on SSTables that are not often accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the ability to better utilize our bloom filters in a predictable manner. At the moment, beyond a certain size, the bloom filters become less reliable. This would also allow us to group the most-accessed data. Currently an SSTable can grow to a point where large portions of its data might not actually be accessed very often.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
