cassandra-commits mailing list archives

From "Sylvain Lebresne (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-2195) java.lang.RuntimeException: java.lang.NegativeArraySizeException
Date Tue, 22 Feb 2011 12:32:38 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-2195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12997763#comment-12997763
] 

Sylvain Lebresne commented on CASSANDRA-2195:
---------------------------------------------

HB, since it is a test node, would you mind trying the patch attached to CASSANDRA-2216, forcing
a compaction again, and checking whether you can reproduce the problem?

As for your json2sstable problem, it is just due to too many open files. I'm not sure it is
justified that it opens so many files (maybe json2sstable leaks file descriptors), but in any
case this is not related to a potential corruption problem, and if needed you can probably
make it work by raising the allowed number of open files with ulimit. But right now,
my money is on CASSANDRA-2216.
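For reference, raising the open-file limit for the shell session that launches json2sstable can be done like this (a minimal sketch; the exact limit needed depends on how many SSTable component files the tool opens, and the json2sstable invocation shown is illustrative):

```shell
# Inspect the current soft and hard limits on open file descriptors.
ulimit -Sn
ulimit -Hn

# Raise the soft limit up to the hard limit for this shell session,
# then run the tool from the same shell so it inherits the new limit.
ulimit -Sn "$(ulimit -Hn)"

# Illustrative invocation (0.7-era usage: keyspace, column family,
# input JSON, output SSTable data file):
# json2sstable -K MyKeyspace -c MyColumnFamily dump.json MyColumnFamily-f-1-Data.db
```

Note that `ulimit` only affects the current shell and its children; a permanent change would go in `/etc/security/limits.conf` on Debian.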

> java.lang.RuntimeException: java.lang.NegativeArraySizeException
> ----------------------------------------------------------------
>
>                 Key: CASSANDRA-2195
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2195
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7.2
>         Environment: Debian Lenny; Pelops-based servlet doing lots of
> List<Column> columns = selector.getColumnsFromRow(columnFamily, key, false, ConsistencyLevel.ONE);
> and mutator.writeColumns(columnFamily, key, mutator.newColumnList(...)); mutator.execute(ConsistencyLevel.ANY);
> operations.
>            Reporter: HB
>            Assignee: Stu Hood
>            Priority: Blocker
>             Fix For: 0.7.3
>
>
> When putting my 0.7.2 node under load, I get a large number of these: 
> ERROR 15:33:25,075 Fatal exception in thread Thread[MutationStage:290,5,main]
> java.lang.RuntimeException: java.lang.NegativeArraySizeException
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NegativeArraySizeException
>         at org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
>         at org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
>         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:108)
>         at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:106)
>         at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:71)
>         at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
>         at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1275)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1167)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1095)
>         at org.apache.cassandra.db.Table.readCurrentIndexedColumns(Table.java:510)
>         at org.apache.cassandra.db.Table.apply(Table.java:445)
>         at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:190)
>         at org.apache.cassandra.service.StorageProxy$2.runMayThrow(StorageProxy.java:283)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         ... 3 more
> On the recommendation of driftx I forced a compaction, which finished. After a restart, the
> -Compacted files were removed and the node seemed to start up; querying some random rows
> seemed to go all right, but after a few minutes I started getting the above messages again.
> I'm grabbing single rows, not slices.
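The stack trace points at bloom filter deserialization, where this exception typically means a corrupted (or mis-positioned) length field on disk. The following is a minimal sketch, not Cassandra's actual BloomFilterSerializer code, of how a deserializer that trusts an on-disk length can hit NegativeArraySizeException: a negative int is read where an array size was expected.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative only: shows the failure mode, not Cassandra internals.
public class CorruptLengthDemo {
    // Reads a length-prefixed bitset; allocation fails if the
    // length field decodes to a negative int.
    static long[] readBitset(DataInputStream in) throws IOException {
        int words = in.readInt();   // length field read from "disk"
        return new long[words];     // throws NegativeArraySizeException if words < 0
    }

    public static void main(String[] args) throws IOException {
        // 0xFFFFFFFF decodes as the int -1, i.e. a corrupted length.
        byte[] corrupt = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
        try {
            readBitset(new DataInputStream(new ByteArrayInputStream(corrupt)));
        } catch (NegativeArraySizeException e) {
            System.out.println("caught NegativeArraySizeException");
        }
    }
}
```

This is consistent with "my money is on CASSANDRA-2216" above: if compaction writes a bad filter (or a bad offset into the row index), every subsequent read of that row hits the same exception.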

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
