cassandra-user mailing list archives

From: Héctor Izquierdo Seliva <izquie...@strands.com>
Subject: Re: Corrupted data
Date: Sat, 09 Jul 2011 05:37:25 GMT
Hi Aaron,

On Fri, 08-07-2011 at 14:47 -0700, aaron morton wrote:
> You may not lose data. 
> 
> - What version and whats the upgrade history?

All versions from 0.7.1 to 0.8.1. All CFs were in the 0.8.1 format, though.

> - What RF / node count / CL  ?

RF=3, node count = 6

> - Have you been running repair consistently ?

Nope, only when something breaks.
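(Point taken. A rough sketch of what I should probably be cronning on each node, with a placeholder keyspace name, so every node gets repaired at least once per gc_grace_seconds:

    nodetool -h <node-host> repair MyKeyspace

Host and keyspace above are just placeholders for my setup.)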

> - Is this on a single node or all nodes ?

A couple of nodes. Scrub reported a few thousand columns it could not
recover.
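For reference, the scrub runs were just the plain nodetool invocation per node, something like the following (keyspace and column family names are placeholders here):

    nodetool -h <node-host> scrub MyKeyspace MyColumnFamily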
> 
> Cheers
> 
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 8 Jul 2011, at 09:38, Héctor Izquierdo Seliva wrote:
> 
> > Hi everyone,
> > 
> > I'm having thousands of these errors:
> > 
> > WARN [CompactionExecutor:1] 2011-07-08 16:36:45,705 CompactionManager.java (line 737) Non-fatal error reading row (stacktrace follows)
> > java.io.IOError: java.io.IOException: Impossible row size 6292724931198053
> > 	at org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:719)
> > 	at org.apache.cassandra.db.compaction.CompactionManager.doScrub(CompactionManager.java:633)
> > 	at org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:65)
> > 	at org.apache.cassandra.db.compaction.CompactionManager$3.call(CompactionManager.java:250)
> > 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> > 	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> > 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > 	at java.lang.Thread.run(Thread.java:662)
> > Caused by: java.io.IOException: Impossible row size 6292724931198053
> > 	... 9 more
> > INFO [CompactionExecutor:1] 2011-07-08 16:36:45,705 CompactionManager.java (line 743) Retrying from row index; data is -8 bytes starting at 4735525245
> > WARN [CompactionExecutor:1] 2011-07-08 16:36:45,705 CompactionManager.java (line 767) Retry failed too.  Skipping to next row (retry's stacktrace follows)
> > java.io.IOError: java.io.EOFException: bloom filter claims to be 863794556 bytes, longer than entire row size -8
> > 
> > 
> > This is during scrub; I saw similar errors during normal operation too.
> > Is there anything I can do? It looks like I'm going to lose a ton of
> > data.
> > 
> 


