cassandra-commits mailing list archives

From "Kjetil Valstadsve (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-1947) Cassandra dies on (presumed) bad row during compaction
Date Fri, 07 Jan 2011 14:23:45 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978792#action_12978792 ]

Kjetil Valstadsve commented on CASSANDRA-1947:
----------------------------------------------

According to tjake (chatting on #cassandra), bad rows are readily detectable, but there hasn't
been a good, general way to handle and report them. 

For this, I would suggest a new property on the keyspace and/or column family, to indicate
whether loss of data is acceptable. If set to true, Cassandra could detect the bad row,
log it, remove it, and continue.

Our use case is data that are generated on demand and occasionally refreshed. The occasional
bad row wouldn't hurt us. 

Statistics on the history of bad-row detection would, of course, still be nice.
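
The proposed switch could behave roughly like this minimal sketch. All names here (Row, BadRowException, compactSkippingBadRows, tolerateBadRows) are illustrative stand-ins, not actual Cassandra APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class SkipBadRows {
    // Stands in for a row that may fail to deserialize during compaction.
    interface Row {
        String key() throws BadRowException;
    }

    static class BadRowException extends Exception {
        BadRowException(String msg) { super(msg); }
    }

    // If tolerateBadRows (the proposed per-keyspace/CF property) is true,
    // log and drop the corrupt row; otherwise rethrow, matching today's
    // fail-fast behaviour that kills the compaction thread.
    static List<String> compactSkippingBadRows(List<Row> rows, boolean tolerateBadRows)
            throws BadRowException {
        List<String> compacted = new ArrayList<>();
        int skipped = 0;
        for (Row row : rows) {
            try {
                compacted.add(row.key());
            } catch (BadRowException e) {
                if (!tolerateBadRows) throw e;
                skipped++; // a real implementation would also log and export statistics
            }
        }
        System.out.println("skipped " + skipped + " bad row(s)");
        return compacted;
    }
}
```

With the property off, the first bad row still aborts the pass, so the default stays as safe as today.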

> Cassandra dies on (presumed) bad row during compaction
> ------------------------------------------------------
>
>                 Key: CASSANDRA-1947
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1947
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>    Affects Versions: 0.7.0 rc 3
>            Reporter: Kjetil Valstadsve
>
> My Cassandra has, unfortunately, ended up with a bad row somewhere. This consistently results in the following stacktrace and an abrupt death, shortly after (re)start:
> {code}
> ERROR [CompactionExecutor:1] 2011-01-06 12:47:56,057 AbstractCassandraDaemon.java Fatal exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.OutOfMemoryError: Java heap space
> 	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
> 	at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
> 	at org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:277)
> 	at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:94)
> 	at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:35)
> 	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:101)
> 	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:34)
> 	at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:284)
> 	at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
> 	at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
> 	at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:68)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
> 	at com.google.common.collect.Iterators$7.computeNext(Iterators.java:604)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
> 	at org.apache.cassandra.db.ColumnIndexer.serializeInternal(ColumnIndexer.java:76)
> 	at org.apache.cassandra.db.ColumnIndexer.serialize(ColumnIndexer.java:50)
> 	at org.apache.cassandra.io.LazilyCompactedRow.<init>(LazilyCompactedRow.java:88)
> 	at org.apache.cassandra.io.CompactionIterator.getCompactedRow(CompactionIterator.java:136)
> 	at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:107)
> 	at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:42)
> 	at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:73)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
> 	at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
> 	at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
> 	at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:323)
> 	at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:122)
> 	at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:92)
> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> {code}
> Looks to me like there should be a guard against bogus bytebuffer sizes, somewhere along this path.
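
The guard suggested above could look something like this sketch: validate a serialized length field before handing it to ByteBuffer.allocate, so a corrupt row raises a catchable IOException instead of an OutOfMemoryError. The method name readByteArrayGuarded is illustrative; the real FBUtilities.readByteArray signature may differ:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class GuardedRead {
    static ByteBuffer readByteArrayGuarded(DataInputStream in) throws IOException {
        int length = in.readInt();
        // A corrupt row can yield a huge or negative length; without this check,
        // ByteBuffer.allocate(length) throws OutOfMemoryError and the daemon dies.
        if (length < 0 || length > in.available()) {
            throw new IOException("Corrupt serialized length " + length
                    + ": only " + in.available() + " bytes available");
        }
        ByteBuffer buf = ByteBuffer.allocate(length);
        in.readFully(buf.array());
        return buf;
    }
}
```

Note that InputStream.available() is only a reliable upper bound for in-memory or fully buffered sources; a file-backed reader would instead compare the length against the bytes remaining in the data file.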

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

