cassandra-commits mailing list archives

From "Randy Fradin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data
Date Tue, 10 Feb 2015 17:33:12 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314498#comment-14314498
] 

Randy Fradin commented on CASSANDRA-8719:
-----------------------------------------
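
For reference, the two cassandra.yaml settings in play here (2.1 option names; everything else left at default):

{quote}
# cassandra.yaml -- the combination that reproduces the corruption
rpc_server_type: hsha
memtable_allocation_type: offheap_objects
{quote}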

From cassandra-cli on a fresh 1-node cluster running 2.1.0 with offheap_objects and hsha enabled:

{quote}
create keyspace ks1 with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = {replication_factor:1};
use ks1;
create column family cf1
    with comparator = 'AsciiType'
    and key_validation_class = 'AsciiType'
    and default_validation_class = 'AsciiType'
    and column_type = 'Standard'
    and gc_grace = 864000
    and read_repair_chance = 1.0
    and compaction_strategy = 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
    and compaction_strategy_options = {sstable_size_in_mb:160}
    and compression_options = {sstable_compression:SnappyCompressor, chunk_length_kb:64};
set cf1['111']['1'] = 123;
set cf1['111']['2'] = 456;
set cf1['222']['1'] = 321;
set cf1['222']['2'] = 654;
{quote}

At this point I run nodetool flush. Then back in cassandra-cli:

{quote}
set cf1['333']['1'] = 789;
set cf1['333']['2'] = abc;
set cf1['444']['1'] = 987;
set cf1['444']['2'] = cba;
{quote}

Then nodetool flush again.
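
(Both flushes are just plain "nodetool flush"; scoping it to the keyspace and table works as well:)

{quote}
nodetool flush ks1 cf1
{quote}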

Here are the files in the ks1/cf1-63901340b13f11e48e36730c68dbc38c directory:

{quote}
ks1-cf1-ka-1-CompressionInfo.db
ks1-cf1-ka-1-Data.db
ks1-cf1-ka-1-Digest.sha1
ks1-cf1-ka-1-Filter.db
ks1-cf1-ka-1-Index.db
ks1-cf1-ka-1-Statistics.db
ks1-cf1-ka-1-Summary.db
ks1-cf1-ka-1-TOC.txt
ks1-cf1-ka-2-CompressionInfo.db
ks1-cf1-ka-2-Data.db
ks1-cf1-ka-2-Digest.sha1
ks1-cf1-ka-2-Filter.db
ks1-cf1-ka-2-Index.db
ks1-cf1-ka-2-Statistics.db
ks1-cf1-ka-2-Summary.db
ks1-cf1-ka-2-TOC.txt
{quote}

And I see this in the log from the second flush onward:
{quote}
 INFO [RMI TCP Connection(6)-45.10.96.124] 2015-02-10 11:12:37,596 ColumnFamilyStore.java
(line 856) Enqueuing flush of cf1: 584 (0%) on-heap, 166 (0%) off-heap
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,597 Memtable.java (line 326) Writing Memtable-cf1@114791717(72
serialized bytes, 4 ops, 0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,600 Memtable.java (line 360) Completed flushing
/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-2-Data.db (67 bytes)
for commitlog position ReplayPosition(segmentId=1423583589486, position=4428)
 INFO [RMI TCP Connection(6)-45.10.96.124] 2015-02-10 11:12:37,603 ColumnFamilyStore.java
(line 856) Enqueuing flush of compaction_history: 412 (0%) on-heap, 328 (0%) off-heap
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,605 ColumnFamilyStore.java (line 856) Enqueuing
flush of compactions_in_progress: 356 (0%) on-heap, 175 (0%) off-heap
 INFO [MemtableFlushWriter:8] 2015-02-10 11:12:37,605 Memtable.java (line 326) Writing Memtable-compaction_history@1602711153(253
serialized bytes, 9 ops, 0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,606 Memtable.java (line 326) Writing Memtable-compactions_in_progress@468406472(117
serialized bytes, 7 ops, 0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,609 Memtable.java (line 360) Completed flushing
/tmp/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-1-Data.db
(145 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, position=4685)
 INFO [MemtableFlushWriter:8] 2015-02-10 11:12:37,611 Memtable.java (line 360) Completed flushing
/tmp/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/system-compaction_history-ka-2-Data.db
(239 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, position=4428)
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,612 CompactionTask.java (line 138) Compacting
[SSTableReader(path='/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-1-Data.db'),
SSTableReader(path='/tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-ka-2-Data.db')]
 INFO [CompactionExecutor:10] 2015-02-10 11:12:37,647 ColumnFamilyStore.java (line 856) Enqueuing
flush of compactions_in_progress: 212 (0%) on-heap, 20 (0%) off-heap
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,648 Memtable.java (line 326) Writing Memtable-compactions_in_progress@359614352(0
serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:9] 2015-02-10 11:12:37,650 Memtable.java (line 360) Completed flushing
/tmp/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
(42 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, position=4756)
ERROR [CompactionExecutor:10] 2015-02-10 11:12:37,655 CassandraDaemon.java (line 166) Exception
in thread Thread[CompactionExecutor:10,1,RMI Runtime]
java.lang.RuntimeException: Last written key DecoratedKey(135414706852123109601202796181083437879,
800100) >= current key DecoratedKey(135414706852123109601202796181083437879, 800100) writing
into /tmp/cassandra/data/ks1/cf1-63901340b13f11e48e36730c68dbc38c/ks1-cf1-tmp-ka-3-Data.db
        at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
~[apache-cassandra-2.1.0.jar:2.1.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_40]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_40]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
~[na:1.7.0_40]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_40]
        at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
 INFO [ScheduledTasks:1] 2015-02-10 12:12:16,529 ColumnFamilyStore.java (line 856) Enqueuing
flush of sstable_activity: 900 (0%) on-heap, 4183 (0%) off-heap
 INFO [MemtableFlushWriter:10] 2015-02-10 12:12:16,530 Memtable.java (line 326) Writing Memtable-sstable_activity@2019899836(270
serialized bytes, 108 ops, 0%/0% of on/off-heap limit)
 INFO [MemtableFlushWriter:10] 2015-02-10 12:12:16,534 Memtable.java (line 360) Completed
flushing /tmp/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-2-Data.db
(194 bytes) for commitlog position ReplayPosition(segmentId=1423583589486, position=11832)
{quote}
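
Note that the two DecoratedKeys in that exception are byte-for-byte identical (same token, same key bytes 800100), so the >= check is tripping on a duplicate key rather than a merely out-of-order one. For reference, the invariant enforced at SSTableWriter.beforeAppend is that rows are appended in strictly increasing key order; a paraphrased sketch of that check (from the 2.1.0 source, not verbatim):

{quote}
// Paraphrase of SSTableWriter.beforeAppend (2.1.0): sstable rows must be
// appended in strictly increasing key order, so a repeat of the last
// written key is rejected the same way an out-of-order key is.
private long beforeAppend(DecoratedKey decoratedKey)
{
    assert decoratedKey != null : "Keys must not be null";
    if (lastWrittenKey != null && lastWrittenKey.compareTo(decoratedKey) >= 0)
        throw new RuntimeException("Last written key " + lastWrittenKey +
                " >= current key " + decoratedKey + " writing into " + getFilename());
    return (lastWrittenKey == null) ? 0 : dataFile.getFilePointer();
}
{quote}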

If I do a list back in cassandra-cli:

{quote}
list cf1;
Using default limit of 100
Using default cell limit of 100
Invalid byte for ascii: -128
{quote}
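
-128 is the signed-byte reading of 0x80, which is also the first byte of the corrupt key (800100) in the compaction exception above, and of the longer corrupt key quoted in the original report below (800100010000000c62617463685f6d7574617465000000). Decoded, that longer key is exactly the start of a Thrift binary-protocol CALL header for batch_mutate, which looks consistent with raw request-frame bytes ending up in the memtable. A quick standalone decode (my own sanity check, not part of Cassandra):

{quote}
// Decodes the corrupt key bytes quoted in the report below:
//   80 01          TBinaryProtocol strict VERSION_1 header
//   00 01          message type 1 = CALL
//   00 00 00 0c    method-name length = 12
//   62 .. 65       "batch_mutate"
//   00 00 00       start of the (truncated) i32 sequence id
public class DecodeCorruptKey
{
    public static void main(String[] args)
    {
        String hex = "800100010000000c62617463685f6d7574617465000000";
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++)
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        System.out.println("first byte, signed: " + bytes[0]);          // prints -128
        System.out.println("method name: " + new String(bytes, 8, 12)); // prints batch_mutate
    }
}
{quote}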


> Using thrift HSHA with offheap_objects appears to corrupt data
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-8719
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Randy Fradin
>            Assignee: Marcus Eriksson
>             Fix For: 2.1.3
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is long closed
and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java (line 138) Compacting
[SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java (line 856)
Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 326) Writing
Memtable-compactions_in_progress@1155018639(0 serialized bytes, 1 ops, 0%/0% of on/off-heap
limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 360) Completed
flushing /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
(42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java (line 166)
Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key DecoratedKey(131206587314004820534098544948237170809,
800100010000000c62617463685f6d7574617465000000) >= current key DecoratedKey(14775611966645399672119169777260659240,
726f776b65793030385f31343232323937313537353835) writing into /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
>         at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_40]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_40]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
~[na:1.7.0_40]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_40]
>         at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the first compaction
of the first two SSTables to get flushed from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc server and
*offheap_objects* for memtable allocation. If I switch the rpc server to sync, or memtable
allocation to offheap_buffers or heap_buffers, I cannot reproduce the problem. Under the same
circumstances I'm fairly sure I've also seen incorrect data returned for a client multiget_slice
request before any SSTables had been flushed, so I presume the corruption happens before any
flush/compaction takes place.
> nodetool scrub yielded these errors:
> {quote}
>  INFO [CompactionExecutor:9] 2015-01-26 13:48:01,512 OutputHandler.java (line 42) Scrubbing
SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db')
(168780 bytes)
>  INFO [CompactionExecutor:10] 2015-01-26 13:48:01,512 OutputHandler.java (line 42) Scrubbing
SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')
(135024 bytes)
>  WARN [CompactionExecutor:9] 2015-01-26 13:48:01,531 OutputHandler.java (line 52) Out
of order row detected (DecoratedKey(14775611966645399672119169777260659240, 726f776b65793030385f31343232323937313537353835)
found after DecoratedKey(131206587314004820534098544948237170809, 800100010000000c62617463685f6d7574617465000000))
>  WARN [CompactionExecutor:9] 2015-01-26 13:48:01,534 OutputHandler.java (line 57) Error
reading row (stacktrace follows):
> java.lang.RuntimeException: Last written key DecoratedKey(131206587314004820534098544948237170809,
800100010000000c62617463685f6d7574617465000000) >= current key DecoratedKey(131206587314004820534098544948237170809,
800100010000000c62617463685f6d7574617465000000) writing into /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-4-Data.db
>         at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.io.sstable.SSTableRewriter.tryAppend(SSTableRewriter.java:141)
~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:186) ~[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
[apache-cassandra-2.1.0.jar:2.1.0]
>         at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
[apache-cassandra-2.1.0.jar:2.1.0]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_40]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[na:1.7.0_40]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_40]
>         at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
>  WARN [CompactionExecutor:9] 2015-01-26 13:48:01,534 OutputHandler.java (line 52) Row
starting at position 25342 is unreadable; skipping to next
>  WARN [CompactionExecutor:10] 2015-01-26 13:48:01,534 OutputHandler.java (line 52) Out
of order row detected (DecoratedKey(29459452031265566667651334397450214244, 726f776b65793030355f31343232323936393033323837)
found after DecoratedKey(131206587314004820534098544948237170809, 800100010000000c62617463685f6d7574617465000000))
> etc...
> {quote}
> I've been able to reliably reproduce this by creating a new empty table, writing a few partitions
containing a few rows each, forcing a flush using nodetool, writing some more, flushing
again, and letting the first compaction happen (using LCS). If I read before flushing, sometimes
I get zero rows back (with quorum between writes and reads) or corrupt bytes, though
sometimes reading yields the correct results. I haven't yet been able to narrow down
the cases where the data comes back corrupt vs. not, though within the same test case (one set
of writes) the results of a read appear to be consistent (successive reads are either all
correct or all incorrect).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
