cassandra-commits mailing list archives

From "Bill Mitchell (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-6721) READ-STAGE: IllegalArgumentException when re-reading wide row immediately upon creation
Date Wed, 19 Feb 2014 02:37:22 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13905043#comment-13905043 ]

Bill Mitchell edited comment on CASSANDRA-6721 at 2/19/14 2:37 AM:
-------------------------------------------------------------------

Generally my test case will drop the table and recreate it, to accommodate changes in the
schema as I experiment with compression, key organization, etc.  I do allow a few seconds'
delay after the drop, to let it clean up before the CREATE TABLE, to avoid some of the past
known issues with re-creating a table with the same name.  One would expect that dropping
the table would also clear the key cache.

Several days ago, I did sometimes have to clear out the data directories, but those problems
seem to have gone away with the upgrade to 2.0.5 -- it cleans up better on restart.  


was (Author: wtmitchell3):
Generally my test case will drop the table and recreate it, to accommodate changes in the
schema as I experiment with compression, key organization, etc.  One would expect, though,
that dropping the table would also clear the key cache.  

Several days ago, I did sometimes have to clear out the data directories, but those problems
seem to have gone away with the upgrade to 2.0.5 -- it cleans up better on restart.  

> READ-STAGE: IllegalArgumentException when re-reading wide row immediately upon creation
> -----------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-6721
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6721
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Windows 7 x64 dual core, 8GB memory, single Cassandra node, Java 1.7.0_45
>            Reporter: Bill Mitchell
>         Attachments: 2014-02-15.txt, 2014-02-17-21-05.txt, 2014-02-17-22-05.txt, 2014-02-18-13-45.txt
>
>
> In my test case, I am writing a wide row to one table, ordering the columns in reverse chronological order, newest to oldest, by insertion time.  A simplified version of the schema:
> CREATE TABLE IF NOT EXISTS sr (s BIGINT, p INT, l BIGINT, ec TEXT, createDate TIMESTAMP, k BIGINT, properties TEXT, PRIMARY KEY ((s, p, l), createDate, ec)) WITH CLUSTERING ORDER BY (createDate DESC) AND compression = {'sstable_compression' : 'LZ4Compressor'}
> Intermittently, after inserting 1,000,000 or 10,000,000 or more rows, when my test immediately turns around and tries to read this partition in its entirety, the client times out on the read and the Cassandra log looks like the following:
> java.lang.RuntimeException: java.lang.IllegalArgumentException
> 	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> 	at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.IllegalArgumentException
> 	at java.nio.Buffer.limit(Unknown Source)
> 	at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:55)
> 	at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:64)
> 	at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:82)
> 	at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
> 	at org.apache.cassandra.db.marshal.AbstractType$3.compare(AbstractType.java:77)
> 	at org.apache.cassandra.db.marshal.AbstractType$3.compare(AbstractType.java:74)
> 	at org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:152)
> 	at org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:129)
> 	at java.util.PriorityQueue.siftUpComparable(Unknown Source)
> 	at java.util.PriorityQueue.siftUp(Unknown Source)
> 	at java.util.PriorityQueue.offer(Unknown Source)
> 	at java.util.PriorityQueue.add(Unknown Source)
> 	at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:90)
> 	at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> 	at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> 	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
> 	at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
> 	at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> 	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1396)
> 	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
> 	... 3 more
> I have seen the same failure whether I use the LZ4Compressor or the SnappyCompressor, so it is not dependent on the choice of compression.
> When compression is disabled, the log is similar, differing slightly in the details.  The exception is then:
> java.io.IOError: java.io.IOException: mmap segment underflow; remaining is 10778639 but 876635247 requested
> At least in this case of no compression, although the read test failed when run immediately after the data was written, running just the read tests again later succeeded.  This suggests the problem lies with a cached version of the data, as the underlying file itself is not corrupted.
> The attached 2014-02-15 and 2014-02-17-21-05 files show the initial failure with LZ4Compressor.  The 2014-02-17-22-05 file shows the log from the uncompressed test.
> In all of these, the log includes the message:
> CompactionController.java (line 192) Compacting large row testdb/sr:5:1:6 (1079784915 bytes) incrementally.
> This may be coincidental, however, as I may be seeing the same issue on a table with narrow rows and a large number of composite primary keys.  See the attached log 2014-02-18-13-45.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
