incubator-cassandra-user mailing list archives

From Sylvain Lebresne <sylv...@datastax.com>
Subject Re: 0.7.3 nodetool scrub exceptions
Date Tue, 08 Mar 2011 20:45:43 GMT
Did you run scrub as soon as you upgraded to 0.7.3?

And did you have problems/exceptions before running scrub?
If yes, did you have problems only with 0.7.3, or also with 0.7.2?

If the problems started with running scrub: since scrub takes a snapshot
before running, can you try restarting a test cluster from that snapshot
and see whether a simple compaction works, for instance?
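
That test could look roughly like the following sketch. The keyspace name
(`MyKS`), the snapshot tag, and the data directory are all hypothetical
placeholders, not taken from the thread; adjust them to the actual cluster.

```shell
# Hypothetical restore of the pre-scrub snapshot on a stopped test node.
# In 0.7, snapshots live under the keyspace's data directory.
cp /var/lib/cassandra/data/MyKS/snapshots/pre-scrub/*.db \
   /var/lib/cassandra/data/MyKS/

# Start Cassandra on the test node, then try a plain major compaction:
nodetool -h localhost compact MyKS
```

If compaction succeeds against the pre-scrub data, that points at scrub
itself rather than at pre-existing SSTable corruption.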

--
Sylvain


On Tue, Mar 8, 2011 at 5:31 PM, Karl Hiramoto <karl@hiramoto.org> wrote:

> On 08/03/2011 17:09, Jonathan Ellis wrote:
>
>> No.
>>
>> What is the history of your cluster?
>>
> It started out as 0.7.0-rc3, and I've upgraded through 0.7.0, 0.7.1, 0.7.2,
> and 0.7.3, within a few days after each was released.
>
> I have 6 nodes with about 10GB of data each, RF=2. There is only one CF, and
> every row/column has a TTL of 24 hours.
> I do a staggered repair/compact/cleanup across every node in a cron job.
>
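
[A staggered maintenance schedule like the one described might be sketched as
a per-node crontab. The hours, the nodetool path, and the keyspace name
`MyKS` are all assumptions for illustration; each node would use different
hours so that only one node runs maintenance at a time.]

```shell
# Hypothetical crontab for one of the 6 nodes; shift the hours on each node
# so the repair/compact/cleanup runs are staggered across the cluster.
0 1 * * * /opt/cassandra/bin/nodetool -h localhost repair MyKS
0 3 * * * /opt/cassandra/bin/nodetool -h localhost compact MyKS
0 5 * * * /opt/cassandra/bin/nodetool -h localhost cleanup MyKS
```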
>
> After upgrading to 0.7.3 I had a lot of nodes crashing due to OOM. I
> reduced the key cache from the default 200000 to 1000 and increased the heap
> size from 8GB to 12GB, and the OOM crashes went away.
>
>
> Any way to fix this without throwing away all the data?
>
> Since I only keep data for 24 hours, I could insert into two CFs for the
> next 24 hours, then, once the new CF holds all the valid data, remove the
> old CF.
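
[That rotation could be sketched roughly as follows with cassandra-cli; the
keyspace and CF names are hypothetical, and the exact create statement would
need the real comparator/validator options.]

```shell
# Hypothetical CF rotation via cassandra-cli (0.7); names are illustrative.
cassandra-cli -h localhost <<'EOF'
use MyKS;
create column family DataNew;
EOF

# ...the application writes to both DataOld and DataNew for one full
# 24-hour TTL window, so DataNew ends up holding every still-live row...

cassandra-cli -h localhost <<'EOF'
use MyKS;
drop column family DataOld;
EOF
```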
>
>
>
>
>  On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto <karl@hiramoto.org> wrote:
>>
>>> I have thousands of these in the log. Is this normal?
>>>
>>> java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
>>>        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
>>>        at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
>>>        at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>>>        at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>>>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>        at java.lang.Thread.run(Thread.java:636)
>>> Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
>>>        at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
>>>        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
>>>        ... 8 more
>>>  WARN [CompactionExecutor:1] 2011-03-08 11:32:35,615 CompactionManager.java (line 625) Row is unreadable; skipping to next
>>
>>
>
