cassandra-commits mailing list archives

From "Jason Harvey (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
Date Thu, 13 Dec 2012 00:03:21 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530511#comment-13530511 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/13/12 12:01 AM:
--------------------------------------------------------------------

Confirmed that this also occurs in my environment when upgrading a brand-new DeflateCompressor CF.
                
      was (Author: alienth):
    Confirmed that this also occurs in my environment on a brand-new DeflateCompressor CF.
                  
> 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-5059
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5059
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.7
>         Environment: ubuntu
> sun-java6 6.24-1build0.10.10.1
>            Reporter: Jason Harvey
>         Attachments: LastModified.tar
>
>
> Upgraded a single node in my ring to 1.1.7. The upgrade process went normally with no errors. However, as soon as the node joined the ring, it started spewing this exception hundreds of times a second:
> {code}
>  WARN [ReadStage:22] 2012-12-12 02:00:56,181 FileUtils.java (line 116) Failed closing org.apache.cassandra.db.columniterator.SSTableSliceIterator@5959baa2
> java.io.IOException: Bad file descriptor
>         at sun.nio.ch.FileDispatcher.preClose0(Native Method)
>         at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
>         at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
>         at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
>         at java.io.FileInputStream.close(FileInputStream.java:258)
>         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
>         at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
>         at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
>         at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
>         at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
>         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.close(SSTableSliceIterator.java:132)
>         at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
>         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:300)
>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1347)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1144)
>         at org.apache.cassandra.db.Table.getRow(Table.java:378)
>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
>         at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> {code}
> The node was not responding to reads on any CFs, so I was forced to do an emergency roll-back and abandon the upgrade.
> The node has roughly 3800 sstables: both LCS and SizeTiered, as well as compressed and uncompressed CFs.
> After some digging on a test node, I've determined that the issue occurs when attempting to read/upgrade/scrub a compressed 1.0.11-generated sstable on 1.1.7.
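
The "Failed closing" warning at the top of the trace comes from FileUtils.closeQuietly, which logs close() failures instead of propagating them; that is why the node kept serving (and re-hitting) the bad descriptor rather than failing fast. As a minimal sketch of that pattern (the class and helper below are illustrative stand-ins, not the actual Cassandra source):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseQuietlyDemo {
    // Hypothetical helper mirroring the closeQuietly pattern seen in the
    // stack trace: close a resource and log any failure as a warning
    // rather than letting the exception propagate up the read path.
    static void closeQuietly(Closeable c) {
        if (c == null)
            return;
        try {
            c.close();
        } catch (IOException e) {
            System.err.println("Failed closing " + c + ": " + e);
        }
    }

    public static void main(String[] args) {
        // A Closeable whose close() fails, standing in for a reader whose
        // underlying file descriptor is already invalid.
        Closeable broken = () -> { throw new IOException("Bad file descriptor"); };
        closeQuietly(broken); // logs the warning, does not throw
        closeQuietly(null);   // null is tolerated
        System.out.println("read path continues despite the failed close");
    }
}
```

Under this pattern a bad descriptor surfaces only as repeated warnings, which matches the hundreds-per-second log spew described above.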

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
