cassandra-commits mailing list archives

From "Stefania (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading
Date Thu, 02 Jul 2015 09:51:04 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611557#comment-14611557 ]

Stefania edited comment on CASSANDRA-9686 at 7/2/15 9:50 AM:
-------------------------------------------------------------

Using Andreas's compactions_in_progress sstable files I can reproduce the exception in *2.1.7*, regardless of heap size, on 64-bit Linux:

{code}
ERROR 05:51:50 Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@4854d57
        at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:205) ~[main/:na]
        at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:127) ~[main/:na]
        at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) ~[main/:na]
        at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) ~[main/:na]
        at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) ~[main/:na]
        at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168) ~[main/:na]
        at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:721) ~[main/:na]
        at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:676) ~[main/:na]
        at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:482) ~[main/:na]
        at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:381) ~[main/:na]
        at org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:519) ~[main/:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_45]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_45]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_45]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@4854d57
        at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:183) ~[main/:na]
        ... 15 common frames omitted
{code}
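For context, the failing check can be sketched as follows. This is a simplified illustration inferred from the exception message, not Cassandra's actual implementation: the chunk count is deserialized as a 32-bit int from the CompressionInfo component, so an all-zero file yields a count of 0, which is rejected.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Simplified sketch of the validation that throws in
// CompressionMetadata.readChunkOffsets. Illustration only; the real
// method also reads the chunk offsets themselves.
public class ChunkCountCheck
{
    public static int readChunkCount(DataInputStream in) throws IOException
    {
        int chunkCount = in.readInt();
        if (chunkCount <= 0)
            throw new IOException("Compressed file with " + chunkCount + " chunks encountered: " + in);
        return chunkCount;
    }

    public static void main(String[] args)
    {
        // An all-zero header deserializes to chunkCount == 0 and is rejected.
        byte[] zeros = new byte[8];
        try
        {
            readChunkCount(new DataInputStream(new ByteArrayInputStream(zeros)));
        }
        catch (IOException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```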

Aside from the LEAK errors, for which we have a patch, this is very much the same issue as CASSANDRA-8192. The following files contain only zeros:

{code}
xxd -p system-compactions_in_progress-ka-6866-CompressionInfo.db
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000

xxd -p system-compactions_in_progress-ka-6866-Digest.sha1
00000000000000000000

xxd -p system-compactions_in_progress-ka-6866-TOC.txt
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000
000000000000000000
{code}

The other files contain some data. I have no idea how they ended up like this. [~Andie78], do you see any assertion failures or other exceptions in the log files before the upgrade? Do you perform any offline operations on the files at all? And how do you normally stop the process?
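A standalone helper (hypothetical, not part of Cassandra) can flag sstable component files that, like the ones above, contain nothing but zero bytes:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical diagnostic: report whether a file is entirely zero-filled,
// as the CompressionInfo.db, Digest.sha1 and TOC.txt components were here.
public class ZeroFileCheck
{
    public static boolean isAllZeros(Path file) throws IOException
    {
        byte[] bytes = Files.readAllBytes(file);
        if (bytes.length == 0)
            return false; // an empty file is not the same as a zero-filled one
        for (byte b : bytes)
            if (b != 0)
                return false;
        return true;
    }

    public static void main(String[] args) throws IOException
    {
        for (String arg : args)
            System.out.println(arg + " all zeros: " + isAllZeros(Paths.get(arg)));
    }
}
```

Run against a data directory, this would single out the corrupted components without inspecting each file with xxd.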




> FSReadError and LEAK DETECTED after upgrading
> ---------------------------------------------
>
>                 Key: CASSANDRA-9686
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
>            Reporter: Andreas Schnitzerling
>            Assignee: Stefania
>             Fix For: 2.2.x
>
>         Attachments: cassandra.bat, cassandra.yaml, compactions_in_progress.zip, sstable_activity.zip, system.log
>
>
> After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and LEAK DETECTED on start. After deleting the listed files, the failure goes away.
> {code:title=system.log}
> ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
> org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@1c42271
> 	at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:117) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.7.0_55]
> 	at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_55]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0_55]
> 	at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
> Caused by: java.io.IOException: Compressed file with 0 chunks encountered: java.io.DataInputStream@1c42271
> 	at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174) ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
> 	... 15 common frames omitted
> ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1926439:D:\Programme\Cassandra\data\data\system\compactions_in_progress\system-compactions_in_progress-ka-6866 was not released before the reference was garbage collected
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
