cassandra-commits mailing list archives

From "Dikang Gu (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13553) Map C* table schema to RocksDB key value data model
Date Mon, 02 Oct 2017 21:46:00 GMT


Dikang Gu commented on CASSANDRA-13553:

[~doanduyhai], we do not need to do read-before-write: any mutation becomes a new RocksDB
row, and we merge the data on the read path or during compaction through RocksDB's merge
operator.
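The merge-on-read idea can be sketched as follows (the data structures are hypothetical, not the actual Rocksandra code): each mutation stores only the columns it wrote, tagged with a write timestamp, and the read path folds the mutations together, keeping the newest value per column, so no read is needed at write time.

```python
# Sketch of merge-on-read: each mutation carries only the columns it wrote,
# tagged with a timestamp; the reader keeps the newest value per column.
# Illustrative structures only, not the Rocksandra implementation.

def merge_mutations(mutations):
    """mutations: list of dicts {column: (timestamp, value)}."""
    merged = {}
    for mutation in mutations:
        for col, (ts, val) in mutation.items():
            if col not in merged or ts > merged[col][0]:
                merged[col] = (ts, val)  # last write wins per column
    return {col: val for col, (ts, val) in merged.items()}

m1 = {"name": (100, "alice"), "age": (100, 30)}
m2 = {"age": (200, 31)}                  # later partial update, no read needed
print(merge_mutations([m1, m2]))         # {'name': 'alice', 'age': 31}
```

The same fold can run lazily at read time or eagerly during compaction, which is what RocksDB's merge operator enables.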

> Map C* table schema to RocksDB key value data model
> ---------------------------------------------------
>                 Key: CASSANDRA-13553
>                 URL:
>             Project: Cassandra
>          Issue Type: Sub-task
>          Components: Core
>            Reporter: Dikang Gu
>            Assignee: Dikang Gu
> The goal of this ticket is to find a way to map Cassandra's table data model to RocksDB's
key-value data model.
> To support the most common C* queries on top of RocksDB, we plan to use the following
strategy for each Cassandra row:
> 1. Encode the Cassandra partition key + clustering keys into the RocksDB key.
> 2. Encode the rest of the Cassandra columns into the RocksDB value.
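A minimal sketch of that mapping (the layout and helper names are hypothetical; note that naive length-prefixing does not preserve sort order, which is exactly problem 1 below):

```python
# Sketch: map one C* row to a single RocksDB key/value pair.
# Layout is illustrative, not the actual Rocksandra format; a real encoding
# must also be order-preserving under byte-wise comparison (problem 1).
import struct

def encode_key(partition_key: bytes, clustering_keys: list) -> bytes:
    # Length-prefix each component so decoding is unambiguous.
    parts = [partition_key] + clustering_keys
    return b"".join(struct.pack(">H", len(p)) + p for p in parts)

def encode_value(columns: dict) -> bytes:
    out = b""
    for name, val in columns.items():
        out += struct.pack(">H", len(name)) + name.encode()
        out += struct.pack(">I", len(val)) + val
    return out

key = encode_key(b"user42", [b"2017-10-02"])
value = encode_value({"city": b"Menlo Park"})
```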
> With this approach, there are two major problems we need to solve:
> 1. After we encode C* keys into the RocksDB key, we need to preserve the same sort order
under RocksDB's byte-wise comparator as in the original data types.
> 2. Support timestamps, TTLs, and tombstones on the values.
> To solve problem 1, we need to carefully design the encoding algorithm for each data
type. Fortunately, there are some existing libraries we can build on, such as orderly,
which is used by HBase, or FlatBuffers.
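For fixed-width signed integers, a standard order-preserving trick (the kind of transform orderly-style encoders apply; this is a generic sketch, not orderly's actual API) is to flip the sign bit and emit the value big-endian, so byte-wise comparison matches signed numeric order:

```python
# Order-preserving encoding for signed 64-bit integers: flipping the sign
# bit and writing big-endian makes memcmp order equal numeric order.
import struct

def encode_int64(v: int) -> bytes:
    # Shift the signed range [-2^63, 2^63) onto the unsigned range [0, 2^64),
    # which is monotone, then pack big-endian.
    return struct.pack(">Q", (v + (1 << 63)) & ((1 << 64) - 1))

vals = [-5, -1, 0, 1, 42]
encoded = [encode_int64(v) for v in vals]
assert encoded == sorted(encoded)  # byte order == numeric order
```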
> To solve problem 2, our plan is to encode the C* timestamp, TTL, and tombstone together
with the value, and then use RocksDB's merge operator/compaction filter to merge different
versions of the data and to handle TTLs/tombstones.
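The resolution logic such a merge operator or compaction filter would apply could look roughly like this (a hedged sketch; the cell layout is hypothetical, not Rocksandra's on-disk format):

```python
# Sketch of resolving one column's versions with timestamps, TTLs, and
# tombstones: last write wins, then the winner is checked for deletion
# and expiry. Illustrative only, not Rocksandra's merge-operator code.
import time

TOMBSTONE = object()  # marker for a deletion

def resolve_cell(versions, now=None):
    """versions: list of (timestamp, ttl_seconds_or_None, value) for one column.
    Returns the live value, or None if the column is deleted or expired."""
    now = time.time() if now is None else now
    ts, ttl, val = max(versions, key=lambda v: v[0])  # last write wins
    if val is TOMBSTONE:
        return None                    # tombstone shadows older values
    if ttl is not None and ts + ttl <= now:
        return None                    # TTL expired
    return val

cells = [(100, None, "v1"), (200, 50, "v2")]
print(resolve_cell(cells, now=210))    # "v2" still live
print(resolve_cell(cells, now=260))    # None: TTL expired at t=250
```

Running this during compaction lets expired cells and shadowed versions be dropped permanently, which is what a compaction filter is for.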

This message was sent by Atlassian JIRA
