Subject: Re: 0.7.4 Bad sstables?
From: Terje Marthinussen <tmarthinussen@gmail.com>
To: user@cassandra.apache.org
Date: Tue, 26 Apr 2011 01:47:25 +0900

I have been hunting similar-looking corruption, especially in the hints
column family, but I believe it occurs somewhere while compacting.

I looked in greater detail at one sstable: the row length was longer than the
actual data in the row, and as far as I could see, either the length was wrong
or the row was missing data, since there was no extra data in the row after
the last column.

This was, however, on a somewhat aging dataset, so I suspected it could be
related to CASSANDRA-2376.

Playing around with 0.8 at the moment and have not seen it there yet...
(bet it will show up tomorrow now that I have written that.. :))

Terje

On Tue, Apr 26, 2011 at 12:44 AM, Sanjeev Kulkarni <sanjeev@locomatix.com> wrote:
> Hi Sylvain,
> I started it from 0.7.4 with the patch 2376. No upgrade.
> Thanks!
>
>
> On Mon, Apr 25, 2011 at 7:48 AM, Sylvain Lebresne <sylvain@datastax.com> wrote:
>> Hi Sanjeev,
>>
>> What's the story of the cluster? Did you start with 0.7.4, or was it
>> upgraded from some earlier version?
>>
>> On Mon, Apr 25, 2011 at 5:54 AM, Sanjeev Kulkarni <sanjeev@locomatix.com> wrote:
>> > Hey guys,
>> > Running a one-node cassandra server with version 0.7.4 patched
>> > with https://issues.apache.org/jira/browse/CASSANDRA-2376
>> > The system was running fine for a couple of days when we started noticing
>> > something strange with cassandra. I stopped all applications and restarted
>> > cassandra, and then did a scrub. During scrub, I noticed these in the logs:
>> >
>> >  WARN [CompactionExecutor:1] 2011-04-24 23:37:07,561 CompactionManager.java (line 607) Non-fatal error reading row (stacktrace follows)
>> > java.io.IOError: java.io.IOException: Impossible row size 1516029079813320210
>> >         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:589)
>> >         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>> >         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >         at java.lang.Thread.run(Thread.java:662)
>> > Caused by: java.io.IOException: Impossible row size 1516029079813320210
>> >         ... 8 more
>> >  INFO [CompactionExecutor:1] 2011-04-24 23:37:07,640 CompactionManager.java (line 613) Retrying from row index; data is -1768177699 bytes starting at 2626524914
>> >  WARN [CompactionExecutor:1] 2011-04-24 23:37:07,641 CompactionManager.java (line 633) Retry failed too. Skipping to next row (retry's stacktrace follows)
>> > java.io.IOError: java.io.EOFException: bloom filter claims to be 1868982636 bytes, longer than entire row size -1768177699
>> >         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
>> >         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:618)
>> >         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>> >         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >         at java.lang.Thread.run(Thread.java:662)
>> > Caused by: java.io.EOFException: bloom filter claims to be 1868982636 bytes, longer than entire row size -1768177699
>> >         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:116)
>> >         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
>> >         ... 8 more
>> >  WARN [CompactionExecutor:1] 2011-04-24 23:37:16,545 CompactionManager.java (line 607) Non-fatal error reading row (stacktrace follows)
>> > java.io.IOError: java.io.EOFException
>> >         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:144)
>> >         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:40)
>> >         at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:284)
>> >         at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
>> >         at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
>> >         at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:68)
>> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
>> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
>> >         at com.google.common.collect.Iterators$7.computeNext(Iterators.java:604)
>> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
>> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
>> >         at org.apache.cassandra.db.ColumnIndexer.serializeInternal(ColumnIndexer.java:76)
>> >         at org.apache.cassandra.db.ColumnIndexer.serialize(ColumnIndexer.java:50)
>> >         at org.apache.cassandra.io.LazilyCompactedRow.<init>(LazilyCompactedRow.java:90)
>> >         at org.apache.cassandra.db.CompactionManager.getCompactedRow(CompactionManager.java:778)
>> >         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:591)
>> >         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>> >         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>> >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >         at java.lang.Thread.run(Thread.java:662)
>> > Caused by: java.io.EOFException
>> >         at java.io.RandomAccessFile.readFully(RandomAccessFile.java:383)
>> >         at java.io.RandomAccessFile.readFully(RandomAccessFile.java:361)
>> >         at org.apache.cassandra.io.util.BufferedRandomAccessFile.readBytes(BufferedRandomAccessFile.java:270)
>> >         at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:317)
>> >         at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:273)
>> >         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:94)
>> >         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:35)
>> >         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.next(SSTableIdentityIterator.java:140)
>> >         ... 22 more
>> >  INFO [CompactionExecutor:1] 2011-04-24 23:37:16,561 CompactionManager.java (line 613) Retrying from row index; data is 78540539 bytes starting at 2229643127
>> >
>> > And then when I restarted the readers, I get the following crash:
>> > ERROR [ReadStage:24] 2011-04-24 23:43:05,658 AbstractCassandraDaemon.java (line 112) Fatal exception in thread Thread[ReadStage:24,5,main]
>> > java.lang.AssertionError: mmap segment underflow; remaining is 791462117 but 1970433058 requested
>> >         at org.apache.cassandra.io.util.MappedFileDataInput.readBytes(MappedFileDataInput.java:119)
>> >         at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:317)
>> >         at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:273)
>> >         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:94)
>> >         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:35)
>> >         at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:181)
>> >         at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
>> >         at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:49)
>> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
>> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
>> >         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:108)
>> >         at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:283)
>> >         at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
>> >         at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
>> >         at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:68)
>> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
>> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
>> >         at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:116)
>> >         at org.apache.cassandra.db.filter.QueryFilter.collectCollatedColumns(QueryFilter.java:130)
>> >         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1368)
>> >         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1245)
>> >         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1173)
>> >         at org.apache.cassandra.db.Table.getRow(Table.java:333)
>> >         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:63)
>> >         at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:453)
>> >         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >         at java.lang.Thread.run(Thread.java:662)
>> >
>> > Any ideas?
>> > Thanks!
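
A minimal sketch of the kind of row-length check described above, assuming the
0.7-era Data.db row layout of a 2-byte key length, the key bytes, and then an
8-byte row size. This is hypothetical standalone code, not Cassandra's own
scrub implementation; a row whose claimed size is negative or larger than the
bytes left in the file is the same condition scrub reports as "Impossible row
size".

import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical diagnostic, not part of Cassandra. Assumes each row in a
// 0.7-era Data.db file is laid out as:
//   [2-byte key length][key bytes][8-byte row size][row data]
public class SSTableRowSizeCheck
{
    public static void main(String[] args) throws IOException
    {
        // args[0] = path to a copy of the suspect file, e.g. MyCF-f-123-Data.db (hypothetical name)
        RandomAccessFile data = new RandomAccessFile(args[0], "r");
        long fileLength = data.length();
        int rows = 0;
        try
        {
            while (data.getFilePointer() < fileLength)
            {
                long rowStart = data.getFilePointer();
                int keyLength = data.readUnsignedShort();  // 2-byte key length
                data.skipBytes(keyLength);                 // the key itself
                long rowSize = data.readLong();            // claimed size of the rest of the row

                long remaining = fileLength - data.getFilePointer();
                if (rowSize <= 0 || rowSize > remaining)
                {
                    System.out.printf("row %d at offset %d: claims %d bytes but only %d remain in the file%n",
                                      rows, rowStart, rowSize, remaining);
                    break;                                 // the length can't be trusted, so stop scanning
                }
                data.seek(data.getFilePointer() + rowSize); // jump over the row data to the next row
                rows++;
            }
        }
        finally
        {
            data.close();
        }
        System.out.println("scanned " + rows + " rows");
    }
}

Run against a copy of the suspect Data.db (for example from a snapshot), it
stops at the first row whose declared length cannot be trusted, which is
roughly where the corruption begins.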
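
All three errors above share the same shape: a length prefix read from the
sstable is garbage (often negative or absurdly large), so the reader asks for
more bytes than the file or mmap segment actually has left. A minimal
illustration of that failure mode, assuming a simple [4-byte length][payload]
encoding; this is not Cassandra's ByteBufferUtil, just the general pattern
behind "remaining is X but Y requested".

import java.nio.ByteBuffer;

public class LengthPrefixedRead
{
    // Read a length-prefixed blob: [4-byte length][length bytes].
    // If the length field is corrupt, it is typically negative or wildly large,
    // which is what "mmap segment underflow; remaining is X but Y requested"
    // and "Impossible row size" are reporting.
    static ByteBuffer readWithLength(ByteBuffer in)
    {
        int length = in.getInt();
        if (length < 0 || length > in.remaining())
            throw new IllegalStateException(
                "corrupt length prefix: remaining is " + in.remaining() + " but " + length + " requested");
        ByteBuffer out = in.slice();
        out.limit(length);
        in.position(in.position() + length);
        return out;
    }

    public static void main(String[] args)
    {
        // Simulate a corrupt on-disk record: the length prefix claims 1970433058 bytes,
        // but only 12 bytes actually follow it.
        ByteBuffer corrupt = ByteBuffer.allocate(16);
        corrupt.putInt(1970433058).put(new byte[12]).flip();
        try
        {
            readWithLength(corrupt);
        }
        catch (IllegalStateException e)
        {
            System.out.println(e.getMessage()); // remaining is 12 but 1970433058 requested
        }
    }
}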