cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-2296) Scrub resulting in "bloom filter claims to be longer than entire row size" error
Date Wed, 09 Mar 2011 03:21:59 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004350 ]

Jonathan Ellis commented on CASSANDRA-2296:
-------------------------------------------

Scrub writes a zero-length row when tombstones expire and there is nothing left, instead of writing no row at all. So, as the clock rolls forward and more tombstones expire, you will usually get a few more zero-length rows written, which will be cleaned out by the next scrub.
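
For illustration, here is a minimal sketch of the behavior described above. The names are hypothetical and this is not Cassandra's actual scrub code: the point is just that after expired tombstones are purged, the row is still written out even when nothing remains, which is how the zero-length rows end up in the new sstable.

{code}
import java.util.ArrayList;
import java.util.List;

class ScrubSketch
{
    static class Column
    {
        final boolean isTombstone;
        final long gcBeforeMillis; // hypothetical: when the tombstone may be purged
        Column(boolean isTombstone, long gcBeforeMillis)
        {
            this.isTombstone = isTombstone;
            this.gcBeforeMillis = gcBeforeMillis;
        }
    }

    // Hypothetical purge step: drop tombstones whose grace period has passed.
    static List<Column> purgeExpiredTombstones(List<Column> columns, long nowMillis)
    {
        List<Column> live = new ArrayList<Column>();
        for (Column c : columns)
            if (!(c.isTombstone && c.gcBeforeMillis <= nowMillis))
                live.add(c);
        return live;
    }

    // The behavior described in the comment: the row is written out even
    // when purging left nothing, yielding a zero-length row in the new
    // sstable instead of no row at all.
    static void scrubRow(List<Column> columns, long nowMillis)
    {
        List<Column> live = purgeExpiredTombstones(columns, nowMillis);
        writeRow(live); // writes a zero-length row when 'live' is empty
    }

    static void writeRow(List<Column> columns)
    {
        System.out.println("wrote row with " + columns.size() + " columns");
    }
}
{code}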

> Scrub resulting in "bloom filter claims to be longer than entire row size" error
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2296
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2296
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jason Harvey
>         Attachments: sstable_part1.tar.bz2, sstable_part2.tar.bz2
>
>
> Doing a scrub on a node which I upgraded from 0.7.1 (was previously 0.6.8) to 0.7.3. Getting this error multiple times:
> {code}
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,513 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,514 CompactionManager.java (line 599) Non-fatal error reading row (stacktrace follows)
> java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
>         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
>         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
>         ... 8 more
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,515 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  INFO [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 637) Scrub of SSTableReader(path='/cassandra/data/reddit/Hide-f-671-Data.db') complete: 254709 rows in new sstable
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 639) Unable to recover 1630 rows that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
> {code}
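
For context on the error itself: per the stack trace, it originates in IndexHelper.defreezeBloomFilter, which reads the serialized bloom filter's declared length from the row and rejects it when it exceeds the bytes actually remaining. The following is a rough sketch of that kind of guard, with illustrative names rather than the exact 0.7 code; a corrupt or misaligned row makes the declared length nonsensical, which is what scrub is reporting here.

{code}
import java.io.DataInput;
import java.io.EOFException;
import java.io.IOException;

class BloomFilterCheckSketch
{
    // Hypothetical helper: read a length-prefixed serialized bloom filter,
    // validating the declared length against the bytes left in the row.
    static byte[] readBloomFilter(DataInput in, long bytesRemainingInRow) throws IOException
    {
        int claimedSize = in.readInt();
        // Sanity check: a corrupt row can declare an impossible length.
        if (claimedSize <= 0 || claimedSize > bytesRemainingInRow)
            throw new EOFException("bloom filter claims to be longer than entire row size");
        byte[] serialized = new byte[claimedSize];
        in.readFully(serialized);
        return serialized;
    }
}
{code}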

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
