cassandra-commits mailing list archives

From "Roland Gude (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-4670) LeveledCompaction destroys secondary indexes
Date Fri, 21 Sep 2012 08:47:08 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Roland Gude updated CASSANDRA-4670:
-----------------------------------

    Attachment: compaction2.log

compaction2.log contains a bit more information. It is still limited to lines about the relevant
index, plus anything that appeared in the logs near the compaction activity.

                
> LeveledCompaction destroys secondary indexes
> --------------------------------------------
>
>                 Key: CASSANDRA-4670
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4670
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.4, 1.1.5
>            Reporter: Roland Gude
>         Attachments: compaction2.log, compaction.log
>
>
> When LeveledCompactionStrategy is active on a ColumnFamily that has an index on TTL
> (expiring) columns, the index does not work correctly, because compaction throws away
> index data far too aggressively.
> Steps to reproduce:
> create a cluster  with a columnfamily with an indexed column and leveled compaction:
> create column family CorruptIndex
>   with column_type = 'Standard'
>   and comparator = 'TimeUUIDType'
>   and default_validation_class = 'BytesType'
>   and key_validation_class = 'BytesType'
>   and read_repair_chance = 0.5
>   and dclocal_read_repair_chance = 0.0
>   and gc_grace = 864000
>   and min_compaction_threshold = 4
>   and max_compaction_threshold = 32
>   and replicate_on_write = true
>   and compaction_strategy = 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
>   and caching = 'NONE'
>   and column_metadata = [
>     {column_name : '00000003-0000-1000-0000-000000000000',
>     validation_class : BytesType,
>     index_name : 'idx_corrupt',
>     index_type : 0}];
> in that column family, insert expiring (TTL) data (for the sake of this test, the
> expiration date should be in the far future)
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> see results (should be correct for some time)
> wait for leveled compaction to compact the index
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> see results (are empty)
> trigger rebuild index via nodetool
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> should be correct again
> wait for leveled compaction to compact the index
> query the data by index:
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value')
> see results (are empty)
> repeat until bored
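> The insert and rebuild steps above can be sketched in cassandra-cli / nodetool terms.
> The keyspace name 'ks1', the row key 'row1', and the TTL value are illustrative
> assumptions, not values from this report; the rebuild_index index argument typically
> takes the <cf>.<index> form:
> use ks1;
> set CorruptIndex[utf8('row1')]['00000003-0000-1000-0000-000000000000'] = utf8('value') with ttl = 31536000;
> get CorruptIndex where 00000003-0000-1000-0000-000000000000=utf8('value');
> nodetool -h localhost rebuild_index ks1 CorruptIndex CorruptIndex.idx_corrupt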

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
