cassandra-user mailing list archives

From Ron Siemens <rsiem...@greatergood.com>
Subject Re: Cassandra 1.1.0 NullCompressor and DecoratedKey errors
Date Fri, 18 May 2012 22:59:23 GMT

I decided to wipe Cassandra clean and try again.  I haven't seen the error again yet, but will
report back if I do.  This may have been a symptom of having some previous data around, as my steps were:

1. shutdown and wipe data
2. run with NullCompressor
3. notice Cassandra complains that the compressor is not in package org.apache.cassandra.io
4. shutdown
5. move compressor to expected package
6. run with NullCompressor

I can't remember if I did another wipe after step 4, so there may have been some data left in a
bad state.  It seems the client side didn't care what package the compressor was in, but the
server side did.
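
If it helps anyone reading this later: that behavior would make sense if the server resolves an
unqualified compressor class name against a fixed default package, while the client just passes
the string along untouched.  The sketch below is only my guess at that kind of lookup, not the
actual Cassandra code, and the package name is simply the one from the complaint in step 3.

    import org.apache.cassandra.io.compress.ICompressor;

    // Hypothetical sketch of a server-side class lookup -- not the actual Cassandra source.
    public final class CompressorLookup
    {
        // Package name taken from the complaint in step 3 above (my assumption).
        private static final String DEFAULT_PACKAGE = "org.apache.cassandra.io";

        public static Class<? extends ICompressor> resolve( String className )
                throws ClassNotFoundException
        {
            // An unqualified name gets the default package prepended, so a class living
            // anywhere else fails to load on the server even though the client accepted
            // the same string without complaint.
            String fqcn = className.contains( "." ) ? className : DEFAULT_PACKAGE + "." + className;
            return Class.forName( fqcn ).asSubclass( ICompressor.class );
        }
    }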

Unless I see the error again, I'm guessing there was some data left over between trials.

Ron


On May 18, 2012, at 3:38 PM, Ron Siemens wrote:

> 
> We have some production Solaris boxes, so I can't use SnappyCompressor (no native library
> is included for Solaris), and I set compression to JavaDeflate instead.  I've also noticed
> higher load with 1.1.0 versus 1.0.6: could that be JavaDeflate, or was that the old default
> anyway?  In any case, I thought I would try no compression, since I found code like this in
> one of the issue discussions about SnappyCompressor.
> 
> import java.io.IOException;
> import java.util.Map;
> 
> import org.apache.cassandra.io.compress.ICompressor;
> 
> // Pass-through compressor: bytes are copied verbatim, so the "compressed"
> // output is exactly the input.
> public class NullCompressor implements ICompressor
> {
>     public static final NullCompressor instance = new NullCompressor();
> 
>     public static NullCompressor create( Map<String, String> compressionOptions ) {
>         return instance;
>     }
> 
>     public int initialCompressedBufferLength( int chunkLength ) {
>         return chunkLength;
>     }
> 
>     public int compress( byte[] input, int inputOffset, int inputLength, ICompressor.WrappedArray output, int outputOffset ) throws IOException {
>         System.arraycopy( input, inputOffset, output.buffer, outputOffset, inputLength );
>         return inputLength;
>     }
> 
>     public int uncompress( byte[] input, int inputOffset, int inputLength, byte[] output, int outputOffset ) throws IOException {
>         System.arraycopy( input, inputOffset, output, outputOffset, inputLength );
>         return inputLength;
>     }
> }
> 
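> As a quick sanity check of the pass-through behavior I use something like the following
> (the test class and data are mine, and it assumes WrappedArray can be constructed directly
> around a byte[], as the compress signature above suggests):
> 
>     import java.io.IOException;
>     import java.util.Arrays;
> 
>     import org.apache.cassandra.io.compress.ICompressor;
> 
>     // Throwaway round-trip check for the pass-through compressor above.
>     public class NullCompressorCheck
>     {
>         public static void main( String[] args ) throws IOException
>         {
>             byte[] original = "hello, sstable".getBytes();
>             ICompressor.WrappedArray compressed =
>                     new ICompressor.WrappedArray( new byte[original.length] );
> 
>             NullCompressor c = NullCompressor.instance;
>             int written = c.compress( original, 0, original.length, compressed, 0 );
> 
>             byte[] restored = new byte[written];
>             c.uncompress( compressed.buffer, 0, written, restored, 0 );
> 
>             // A pass-through compressor must hand the input back unchanged.
>             if ( !Arrays.equals( original, restored ) )
>                 throw new AssertionError( "round trip changed the bytes" );
>         }
>     }
> 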
> But now I get some curious errors in the Cassandra log that I haven't seen previously:
> 
> ERROR [ReadStage:294] 2012-05-18 15:33:40,039 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[ReadStage:294,5,main]
> java.lang.AssertionError: DecoratedKey(105946799083363489728328364782061531811, 57161d05b5000000040000b31300080000007e00000000000004c057161d05b6000000040000ae3800080000007f00000000000004c057161d05b70000000400008d6100080000008000000000000004c057161d05b8000000040000c10400080000008100000000000004c057161d05b90000000400008ac100080000008200000000000004c057161d05ba000000040000ae8b00080000008300000000000004c057161d05bb000000040000749500080000008400000000000004c057161d05bc000000040000bb2400080000008500000000000004c057161d05bd000000040000ba3200080000008600000000000004c057161d05be000000040000be9a00080000008700000000000004c057161d05bf000000040000b9fa00080000008800000000000004c057161d05c00000000400008e7f00080000008900000000000004c057161d05c1000000040000ba5900080000008a00000000000004c057161d05c2000000040000b64d00080000008b00000000000004c057161d05c3000000040000bae300080000008c00000000000004c057161d05c4000000040000bee500080000008d00000000000004c057161d05c5000000040000875900080000008e00000000000004c057161d05c6000000040000bad000080000008f00000000000004c057161d05c7000000040000badb00080000009000000000000004c057161d05c8000000040000bf1400080000009100000000000004c057161d05c9000000040000b7ec00080000009200000000000004c057161d05ca000000040000bace00080000009300000000000004c057161d05cb000000040000ba1700080000009400000000000004c057161d05cc00000004000084a100080000009500000000000004c057161d05cd000000040000956700080000009600000000000004c057161d05ce000000040000ab9000080000009700000000000004c057161d05cf000000040000b61100080000009800000000000004c057161d05d0000000040000af5500080000009900000000000004c057161d05d1000000040000abfc00080000009a00000000000004c057161d05d2000000040000bf3500080000009b00000000000004c057161d05d3000000040000bacd00080000009c00000000000004c057161d05d4000000040000bd0a00080000009d00000000000004c057161d05d5000000040000bac100080000009e00000000000004c057161d05d6000000040000af5300080000009f00000000000004c057161d05d7000000040000b97a0008000000a000000000000004c057161d05d8000000040000af130008000000a100000000000004c057161d05d9000000040000a25600085452535f3237313800000000000000430000001000000003000000010000000002000800000000004fb6cd4b0004c05716072b780000000100080000000100000001000004c05716072b79000000040000b87900074348535f3435360000000000000309000000180000000300000002be188961212f0cd18f5ddb69e0a336ed000000004fb6cd4f0004c057164b73f00000001b00080000000100000002000004c057164b73f100000004000079ba00080000000200000001000004c057164b73f2000000040000b43000080000000300000001000004c057164b73f3000000040000b462)
> != DecoratedKey(53124083656910387079795798648228312597, 5448535f323739) in /home/apollo/cassandra/data/ggstores3/ndx_items_category/ggstores3-ndx_items_category-hc-1-Data.db
> 	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:58)
> 	at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:66)
> 	at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:78)
> 	at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:233)
> 	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:61)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1273)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1155)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1090)
> 	at org.apache.cassandra.db.Table.getRow(Table.java:360)
> 	at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
> 	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:816)
> 	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1250)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> 	at java.lang.Thread.run(Thread.java:722)
> 
> It sure seems like my pass-through compressor triggered this.  Any thoughts?
> 
> Ron

