cassandra-commits mailing list archives

From "Jonathan Ellis (Updated) (JIRA)" <>
Subject [jira] [Updated] (CASSANDRA-3468) SStable data corruption in 1.0.x
Date Fri, 20 Jan 2012 04:01:39 GMT


Jonathan Ellis updated CASSANDRA-3468:

    Attachment: 3468-assert.txt

Andy confirms that supercolumns are not involved.

I do note that the stacktrace corresponds to a counter column in 1.0.0.  We've had several
counter-related bug fixes since then.  (If you're NOT using counters, then the corruption
must have happened mid-column, which would be interesting.)

Attached a patch adding more assertions to make sure we're not reading from or writing to
unallocated native memory.
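For illustration only, this is the general shape of such a guard on off-heap access; the class
and method names here are hypothetical, not the ones in 3468-assert.txt:

```java
// Hypothetical sketch of a bounds/liveness guard on off-heap memory,
// in the spirit of the attached patch (names are NOT Cassandra's).
public class NativeBuffer {
    private final long size; // bytes allocated
    private long peer;       // native address; 0 once freed

    public NativeBuffer(long size) {
        this.size = size;
        this.peer = 1; // stand-in for Unsafe.allocateMemory(size)
    }

    /** Fail loudly instead of touching unallocated native memory. */
    public byte peek(long offset) {
        if (peer == 0)
            throw new AssertionError("buffer already freed");
        if (offset < 0 || offset >= size)
            throw new AssertionError("offset " + offset + " outside [0, " + size + ")");
        return 0; // real code would dereference peer + offset here
    }

    public void free() {
        peer = 0; // real code would call Unsafe.freeMemory(peer) first
    }

    public static void main(String[] args) {
        NativeBuffer buf = new NativeBuffer(16);
        buf.peek(15); // in bounds: fine
        try {
            buf.peek(16); // one past the end: guard fires
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
            // prints: caught: offset 16 outside [0, 16)
        }
    }
}
```

The point is that a wild read fails immediately with a clear message instead of returning garbage
that only surfaces later as an EOF during compaction.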

Can you test JNA + SerializingCache with 1.0.7 + this patch?  I'd like to see if there are
any assertion or other error messages before compaction EOFs.
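For reference, in 1.0.x the serializing (off-heap) row cache is enabled per column family; a
cassandra-cli statement along these lines should do it (the CF name and cache size here are
placeholders, not from this ticket):

```
update column family Standard1 with
    row_cache_size = 10000 and
    row_cache_provider = 'SerializingCacheProvider';
```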

One other thing to try: turn on snapshot_before_compaction in cassandra.yaml.  Then, when
you have a compaction or scrub error out, check the logs to see where that corrupt sstable
came from.  If a freshly flushed sstable is corrupt, that's going to narrow down our search
vs corruption coming from a cached row of an existing sstable.  (As an optimization, compaction
will use a cached version of the row if one is present, instead of re-reading its sources
from disk.)
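For reference, that's a single flag in cassandra.yaml (it defaults to false, and snapshots
accumulate until cleared, so keep an eye on disk space):

```yaml
# Snapshot the input sstables before every compaction, so a corrupt
# input file is preserved for inspection after the error.
snapshot_before_compaction: true
```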
> SStable data corruption in 1.0.x
> --------------------------------
>                 Key: CASSANDRA-3468
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.0
>         Environment: RHEL 6 running Cassandra 1.0.x.
>            Reporter: Terry Cumaranatunge
>              Labels: patch
>         Attachments: 3468-assert.txt
> We have noticed several instances of sstable corruption in 1.0.x. It has occurred in
1.0.0-rcx, 1.0.0, and 1.0.1, on multiple nodes and multiple hosts with different disks,
which is why the software is suspected at this time. The file system used is XFS, but no
resets or other failure scenarios have been run that could have created the problem. We were
basically running under load, and every so often we see that an sstable gets corrupted and
compaction stops on that node.
> I will attach the relevant sstable files if it lets me do that when I create this ticket.
> ERROR [CompactionExecutor:23] 2011-10-27 11:14:09,309 (line 119)
Skipping row DecoratedKey(128013852116656632841539411062933532114, 37303730303138313533) in
>         at
>         at
>         at org.apache.cassandra.utils.BytesReadTracker.readFully(
>         at
>         at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(
>         at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(
>         at
>         at org.apache.cassandra.db.compaction.PrecompactedRow.merge(
>         at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(
>         at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(
>         at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(
>         at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(
>         at
>         at
>         at$7.computeNext(
>         at
>         at
>         at org.apache.cassandra.db.compaction.CompactionTask.execute(
>         at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(
>         at org.apache.cassandra.db.compaction.CompactionManager$
>         at org.apache.cassandra.db.compaction.CompactionManager$
>         at java.util.concurrent.FutureTask$Sync.innerRun(
>         at
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
>         at java.util.concurrent.ThreadPoolExecutor$
> This was Sylvain's analysis:
> I don't have much better news. Basically, it seems the last 2 MB of the file are complete
garbage (which also explains the mmap error, btw). And given where the corruption actually
starts, it suggests either a very low-level bug in our file writer code that starts writing
bad data at some point for some reason, or corruption not related to Cassandra. That said, a
Cassandra bug sounds fairly unlikely.
> You said that you saw that corruption more than once. Could you be more precise? In particular,
did you get it on different hosts? Also, what file system are you using?
> If you do happen to have another instance of a corrupted sstable (ideally from some other
host) that you can share, please don't hesitate. I could try to look if I find something common
between the two.
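To eyeball where valid serialized data turns into garbage in a suspect -Data.db file, a quick
tail dump along these lines can help (the file path and window size are placeholders, not part
of the ticket):

```java
import java.io.RandomAccessFile;

// Quick-and-dirty hex dump of the tail of a file, to see where valid
// serialized columns turn into garbage. Path and window are placeholders.
public class TailDump {
    public static String dump(String path, int window) throws Exception {
        StringBuilder out = new StringBuilder();
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            long start = Math.max(0, f.length() - window);
            f.seek(start);
            byte[] buf = new byte[(int) (f.length() - start)];
            f.readFully(buf);
            for (int i = 0; i < buf.length; i += 16) {
                out.append(String.format("%08x ", start + i)); // file offset
                for (int j = i; j < Math.min(i + 16, buf.length); j++)
                    out.append(String.format("%02x ", buf[j] & 0xff));
                out.append('\n');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder sstable name; dump the last 2 MB, where the
        // corruption reportedly lives.
        System.out.print(dump(args.length > 0 ? args[0] : "ks-cf-g-1-Data.db",
                              2 * 1024 * 1024));
    }
}
```

Comparing such dumps from two independently corrupted sstables would show whether the garbage
has a common pattern, as Sylvain suggests.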

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:

