cassandra-commits mailing list archives

From "Benjamin Coverston (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-1608) Redesigned Compaction
Date Fri, 24 Jun 2011 17:21:48 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Coverston updated CASSANDRA-1608:
------------------------------------------

    Attachment: 2608-v6.txt

Added a patch with fixed range filters.

With ~1100 sstables, average latency is substantially increased (~5-10x). I'm pretty sure that
in order to improve on this we'll need to implement an interval tree so that finding the sstables
that overlap an interval query is no longer a linear scan.

The problem is that I haven't found any really good RBtree (or even binary tree) implementations
in the dependencies we currently have, and I really don't want to muddy this ticket up with that
effort.

There are some potentially useful structures in UIMA that I can use to base the implementation
of an interval tree off of, but right now I'm leaning toward doing this in a separate ticket.
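
To make this concrete, here is a rough sketch of the kind of structure I have in mind: a
centered interval tree over sstable key ranges. All of the names (Interval, IntervalNode) and the
long-valued endpoints are placeholders, not anything in the current codebase; a real
implementation would work on decorated keys and SSTableReaders.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hypothetical names throughout; nothing here exists in the codebase yet.
class Interval
{
    final long min, max;   // placeholders for an sstable's first/last decorated keys
    final String sstable;  // payload; would be an SSTableReader in practice

    Interval(long min, long max, String sstable) { this.min = min; this.max = max; this.sstable = sstable; }
}

class IntervalNode
{
    final long center;                                // split point
    final List<Interval> byMin = new ArrayList<>();   // intervals crossing center, ascending min
    final List<Interval> byMax = new ArrayList<>();   // same intervals, descending max
    final IntervalNode left, right;                   // intervals entirely left/right of center

    IntervalNode(List<Interval> intervals)
    {
        // median of all endpoints keeps the tree roughly balanced
        List<Long> points = new ArrayList<>();
        for (Interval i : intervals) { points.add(i.min); points.add(i.max); }
        Collections.sort(points);
        center = points.get(points.size() / 2);

        List<Interval> leftOf = new ArrayList<>(), rightOf = new ArrayList<>();
        for (Interval i : intervals)
        {
            if (i.max < center)      leftOf.add(i);
            else if (i.min > center) rightOf.add(i);
            else                     { byMin.add(i); byMax.add(i); }
        }
        byMin.sort(Comparator.comparingLong((Interval i) -> i.min));
        byMax.sort(Comparator.comparingLong((Interval i) -> i.max).reversed());
        left = leftOf.isEmpty() ? null : new IntervalNode(leftOf);
        right = rightOf.isEmpty() ? null : new IntervalNode(rightOf);
    }

    // collect every interval overlapping [start, end]
    void search(long start, long end, List<Interval> hits)
    {
        if (start <= center && center <= end)
        {
            hits.addAll(byMin);                       // everything stored here crosses [start, end]
        }
        else if (end < center)
        {
            for (Interval i : byMin)                  // overlaps iff it starts at or before end
            {
                if (i.min > end) break;
                hits.add(i);
            }
        }
        else // start > center
        {
            for (Interval i : byMax)                  // overlaps iff it ends at or after start
            {
                if (i.max < start) break;
                hits.add(i);
            }
        }
        if (left != null && start < center)  left.search(start, end, hits);
        if (right != null && end > center)   right.search(start, end, hits);
    }
}

Query cost is then roughly logarithmic in the number of sstables plus the number of overlapping
ones, instead of a scan over all ~1100 of them.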


> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>            Assignee: Benjamin Coverston
>         Attachments: 0001-leveldb-style-compaction.patch, 1608-v2.txt, 1608-v3.txt, 1608-v4.txt,
> 1608-v5.txt, 2608-v6.txt
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more thinking on this
> subject that I wanted to lay out.
> I propose we redo the concept of how compaction works in Cassandra. At the moment, compaction
> is kicked off based on the write access pattern, not the read access pattern. In most cases, you
> want the opposite. You want to be able to track how well each SSTable is performing in the
> system. If we kept in-memory statistics for each SSTable and prioritized them by how often they
> are accessed and by their bloom filter hit/miss ratios, we could intelligently group the sstables
> that are read most often and schedule them for compaction. We could also schedule lower-priority
> maintenance on SSTables that are rarely accessed.
> I also propose we limit each SSTable to a fixed size, which gives us the ability to better
> utilize our bloom filters in a predictable manner. At the moment, past a certain size, the bloom
> filters become less reliable. This would also allow us to group the most-accessed data. Currently
> an SSTable can grow to a point where large portions of its data might not actually be accessed
> very often.
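
For what it's worth, the in-memory statistics the description above calls for could start out as
simple as the sketch below. Everything in it (SSTableReadStats, CompactionPrioritizer, the score
formula) is hypothetical and purely illustrative; nothing like it exists in the codebase today:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch only: per-sstable read statistics plus a simple priority score.
class SSTableReadStats
{
    final String sstable;                            // would be an SSTableReader in practice
    final AtomicLong reads = new AtomicLong();
    final AtomicLong bloomMisses = new AtomicLong();

    SSTableReadStats(String sstable) { this.sstable = sstable; }

    // call on every read that touches this sstable
    void recordRead(boolean bloomFilterMiss)
    {
        reads.incrementAndGet();
        if (bloomFilterMiss)
            bloomMisses.incrementAndGet();
    }

    // higher score == better compaction candidate: read often, or wasting reads on misses
    double score()
    {
        long r = reads.get();
        double missRatio = r == 0 ? 0.0 : (double) bloomMisses.get() / r;
        return r * (1.0 + missRatio);
    }
}

class CompactionPrioritizer
{
    // group the hottest sstables into the next compaction; the lowest-scoring
    // ones are left for lower-priority maintenance
    static List<SSTableReadStats> candidates(List<SSTableReadStats> all, int max)
    {
        List<SSTableReadStats> sorted = new ArrayList<>(all);
        sorted.sort(Comparator.comparingDouble((SSTableReadStats s) -> s.score()).reversed());
        return sorted.subList(0, Math.min(max, sorted.size()));
    }
}

The score just weights read-hot sstables higher and bumps the ones whose bloom filters are
missing a lot, which is one way to capture the "most accessed plus hit/miss ratio" prioritization
described above; capping each sstable at a fixed size would then bound the work any single
compaction does.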

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
