cassandra-commits mailing list archives

From "Jason Harvey (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error
Date Thu, 10 Mar 2011 18:53:59 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005239#comment-13005239 ]

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

Here is the debug output. I'm going to run the same scrub on unpatched 0.7.3 to see if there is any difference.

{code}
DEBUG 11:50:52,510 Reading row at 504216964
DEBUG 11:50:52,510 row 636f6d6d656e74735f706172656e74735f3233383135363235 is 66 bytes
DEBUG 11:50:52,510 Index doublecheck: row 636f6d6d656e74735f706172656e74735f3233383135363235 is 66 bytes
 INFO 11:50:52,511 Last written key : DecoratedKey(125686934811414729670440675125192621396, 627975726c2833626333626339353363353762313133373331336461303233396438303534312c66692e676f73757065726d6f64656c2e636f6d2f70726f66696c65732f2f6170706c65747265713d3132373333393332313937363529)
 INFO 11:50:52,511 Current key : DecoratedKey(11047858886149374835950241979723972473, 636f6d6d656e74735f706172656e74735f3233383135363235)
 INFO 11:50:52,511 Writing into file /var/lib/cassandra/data/reddit/permacache-tmp-f-168492-Data.db
 WARN 11:50:52,511 Non-fatal error reading row (stacktrace follows)
java.io.IOException: Keys must be written in ascending order.
        at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:111)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:128)
        at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:598)
        at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
        at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
{code}
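For context, the two keys in the log above really are out of order: under RandomPartitioner, rows sort by their token, and the current key's token is numerically smaller than the last written one, which is exactly the condition SSTableWriter.beforeAppend rejects. A minimal sketch (not Cassandra's actual classes) using the tokens from the log:

```java
import java.math.BigInteger;

// Minimal sketch, not Cassandra's own code: under RandomPartitioner rows are
// ordered by their MD5 token, and the writer rejects any key whose token does
// not sort strictly after the previously written one.
public class AscendingKeyCheck {
    public static void main(String[] args) {
        // Tokens copied from the DecoratedKey lines in the log above.
        BigInteger lastWritten = new BigInteger("125686934811414729670440675125192621396");
        BigInteger current     = new BigInteger("11047858886149374835950241979723972473");

        // The current token sorts BEFORE the last written one, so appending it
        // would trigger "Keys must be written in ascending order."
        System.out.println(current.compareTo(lastWritten) < 0); // prints "true"
    }
}
```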



> Scrub resulting in "bloom filter claims to be longer than entire row size" error
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2296
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2296
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>    Affects Versions: 0.7.3
>            Reporter: Jason Harvey
>            Assignee: Jonathan Ellis
>             Fix For: 0.7.4
>
>         Attachments: 2296.txt, sstable_part1.tar.bz2, sstable_part2.tar.bz2
>
>
> I'm doing a scrub on a node that I upgraded from 0.7.1 (previously 0.6.8) to 0.7.3, and I'm getting this error multiple times:
> {code}
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,513 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,514 CompactionManager.java (line 599) Non-fatal error reading row (stacktrace follows)
> java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
>         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
>         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
>         ... 8 more
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,515 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  INFO [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 637) Scrub of SSTableReader(path='/cassandra/data/reddit/Hide-f-671-Data.db') complete: 254709 rows in new sstable
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 639) Unable to recover 1630 that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
> {code}
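The EOFException in the quoted trace comes from a length sanity check: the row's serialized bloom filter is stored as a length-prefixed blob, and if the recorded length exceeds the bytes remaining in the row, the data is corrupt. A minimal sketch of that check (an assumed shape for illustration, not Cassandra's actual IndexHelper code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

// Hypothetical sketch of the sanity check behind the error above; the real
// check lives in IndexHelper.defreezeBloomFilter.
public class BloomFilterLengthCheck {
    static byte[] readBloomFilter(DataInput in, long remainingRowBytes) throws IOException {
        int size = in.readInt(); // serialized bloom filter length prefix
        if (size > remainingRowBytes) {
            // Recorded length exceeds what is left of the row: corruption.
            throw new EOFException("bloom filter claims to be longer than entire row size");
        }
        byte[] buf = new byte[size];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a corrupt row: the prefix says 1000 bytes, but only 4 remain.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(1000);
        DataInput in = new DataInputStream(new ByteArrayInputStream(bos.toByteArray()));
        try {
            readBloomFilter(in, 4);
        } catch (EOFException e) {
            System.out.println(e.getMessage()); // prints the error from the log
        }
    }
}
```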

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
