cassandra-commits mailing list archives

From "T Jake Luciani (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-7443) SSTable Pluggability v2
Date Wed, 30 Jul 2014 16:50:40 GMT


T Jake Luciani commented on CASSANDRA-7443:

bq. Streaming may need a bit more work/thought - it seems that the compression format of sstables
is coupled quite tightly with the compression stream writer, and it also assumes we can stream
a single file range. We might not want to impose that requirement (we probably don't for the 3.0
format - a set of ranges is more likely, but I would prefer to abstract the concept of a stream
Chunk to be format specific anyway, to remove the coupling)

I'm working on this now, and the simplest solution might be to write the stream to disk first
and then use a RandomAccessFile (RAF) to process it.  If we use row groups, you would still need
to send all the data in the partition; the issue is that you can't process it sequentially.
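The spill-to-disk approach above could be sketched roughly like this: copy the incoming stream to a temporary file, then reopen it with a RandomAccessFile so it can be processed out of order. This is only an illustrative sketch, not the actual streaming code; the class and method names are hypothetical.

```java
import java.io.*;

// Hypothetical sketch: spill an incoming stream to a temp file, then
// reopen it with a RandomAccessFile so it can be processed non-sequentially.
public class SpillToDisk {
    // Copies the whole stream to a temp file and returns a RAF over it.
    static RandomAccessFile spill(InputStream in) throws IOException {
        File tmp = File.createTempFile("stream", ".spill");
        tmp.deleteOnExit();
        try (OutputStream out = new FileOutputStream(tmp)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1)
                out.write(buf, 0, n);
        }
        return new RandomAccessFile(tmp, "r");
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "partition-data".getBytes("UTF-8");
        RandomAccessFile raf = spill(new ByteArrayInputStream(data));
        raf.seek(10);                 // random access: skip past the prefix
        byte[] tail = new byte[4];
        raf.readFully(tail);
        System.out.println(new String(tail, "UTF-8")); // prints "data"
        raf.close();
    }
}
```

The cost, of course, is an extra write and read of the whole stream, which is the trade-off being weighed here.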

> SSTable Pluggability v2
> -----------------------
>                 Key: CASSANDRA-7443
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: T Jake Luciani
>            Assignee: T Jake Luciani
>             Fix For: 3.0
> As part of a wider effort to improve the performance of our storage engine, we will need
> to support basic pluggability of the SSTable reader/writer. We primarily need this to support
> the current SSTable format and the new SSTable format in the same version.  This will also let
> us encapsulate the changes in a single layer rather than forcing the whole engine to change at once.
> We previously discussed how to accomplish this in CASSANDRA-3067
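One way to picture the pluggability the ticket describes is a small registry keyed by format name, with each format supplying its own reader/writer behind a common interface, so two formats can coexist in the same version. This is a loose sketch under assumed names, not Cassandra's actual API; the string-returning methods stand in for real reader/writer objects.

```java
import java.util.*;

// Hypothetical sketch of format pluggability: each SSTable format registers
// its own reader/writer behind a common interface and is selected by name,
// so the current and new formats can coexist. Names are illustrative only.
public class FormatRegistry {
    interface SSTableFormat {
        String openReader(String path);   // stand-ins for real reader/writer
        String openWriter(String path);
    }

    static final Map<String, SSTableFormat> FORMATS = new HashMap<>();

    static void register(String name, SSTableFormat f) { FORMATS.put(name, f); }

    public static void main(String[] args) {
        register("big", new SSTableFormat() {          // current format
            public String openReader(String p) { return "big-reader:" + p; }
            public String openWriter(String p) { return "big-writer:" + p; }
        });
        register("rowgroup", new SSTableFormat() {     // hypothetical new format
            public String openReader(String p) { return "rowgroup-reader:" + p; }
            public String openWriter(String p) { return "rowgroup-writer:" + p; }
        });
        // The engine picks a format by name; everything above this layer is unchanged.
        System.out.println(FORMATS.get("big").openReader("Data.db"));
        System.out.println(FORMATS.get("rowgroup").openWriter("Data.db"));
    }
}
```

Keeping the format selection behind one interface is what lets the change be encapsulated in a single layer, as the description argues.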

This message was sent by Atlassian JIRA
