incubator-cassandra-user mailing list archives

From John Sanda <john.sa...@gmail.com>
Subject Re: sstable_compression for system tables
Date Fri, 03 May 2013 18:07:08 GMT
The machine where this error occurred had both OpenJDK and IBM's Java
installed. The only way I have been able to reproduce it is by installing
Cassandra with OpenJDK, shutting it down, then starting it back up with IBM
Java. Snappy compression is enabled under OpenJDK, so SSTables, including
those for system tables, are created with compression. When I then start
Cassandra back up with IBM Java, it cannot read those compressed files.
This situation only arose as a result of switching JREs, which is unlikely
in a production deployment. When I install and deploy Cassandra using IBM
Java from the get-go, tables are created with compression disabled, as
expected.
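
A rough sketch of why the mixed-JRE case fails while a fresh IBM Java install works (illustrative code, not Cassandra's; only the org.xerial.snappy.Snappy class name and the error message are taken from this thread): the stored compression parameters name Snappy, so there is no fallback path when the class fails to initialize.

```java
// Sketch of the failure mode described above (illustrative, not
// Cassandra's code): if SSTables were written with Snappy but the
// compressor class cannot initialize on the current JRE, opening them
// fails instead of silently falling back to uncompressed reads.
public class CompressedReadSketch {
    static byte[] openSSTable(String storedCompressor) {
        if ("SnappyCompressor".equals(storedCompressor)) {
            try {
                // Triggers snappy-java's static initializer, which loads
                // the native library; throws on an unsupported JRE/arch.
                Class.forName("org.xerial.snappy.Snappy");
            } catch (Throwable t) {
                // analogous to "Cannot create CompressionParameters for
                // stored parameters" in the stack trace quoted below
                throw new RuntimeException(
                    "Cannot create CompressionParameters for stored parameters", t);
            }
        }
        return new byte[0]; // actual decompression elided
    }

    public static void main(String[] args) {
        try {
            openSSTable("SnappyCompressor");
            System.out.println("opened");
        } catch (RuntimeException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```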

System tables, though, are in fact created with snappy compression when it
is available.
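
A quick way to see which default a given JRE ends up with is to probe whether snappy-java can initialize at all. This is only a sketch of the idea, not Cassandra's actual selection code; only the class names are real.

```java
// Sketch: probe whether snappy-java can initialize on the current JRE,
// and derive the default sstable compressor the way the thread describes:
// Snappy when the native library loads, null (no compression) otherwise.
public class DefaultCompressorProbe {
    static boolean snappyAvailable() {
        try {
            // Loading the class runs Snappy's static initializer, which
            // extracts and binds the native library; on an unsupported
            // JRE/arch this throws (NoClassDefFoundError, SnappyError, ...).
            Class.forName("org.xerial.snappy.Snappy");
            return true;
        } catch (Throwable t) {
            return false;
        }
    }

    static String defaultCompressor() {
        // analogous to CFMetaData.DEFAULT_COMPRESSOR being null
        return snappyAvailable() ? "SnappyCompressor" : null;
    }

    public static void main(String[] args) {
        System.out.println("snappy available: " + snappyAvailable());
        System.out.println("default compressor: " + defaultCompressor());
    }
}
```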


On Fri, May 3, 2013 at 12:35 PM, John Sanda <john.sanda@gmail.com> wrote:

> I am still trying to sort this out. When I run with Oracle's JRE, it does
> in fact look like compression is enabled for system tables.
>
> cqlsh> DESCRIBE TABLE system.schema_columnfamilies ;
>
> CREATE TABLE schema_columnfamilies (
>   keyspace_name text,
>   columnfamily_name text,
>   bloom_filter_fp_chance double,
>   caching text,
>   column_aliases text,
>   comment text,
>   compaction_strategy_class text,
>   compaction_strategy_options text,
>   comparator text,
>   compression_parameters text,
>   default_read_consistency text,
>   default_validator text,
>   default_write_consistency text,
>   gc_grace_seconds int,
>   id int,
>   key_alias text,
>   key_aliases text,
>   key_validator text,
>   local_read_repair_chance double,
>   max_compaction_threshold int,
>   min_compaction_threshold int,
>   populate_io_cache_on_flush boolean,
>   read_repair_chance double,
>   replicate_on_write boolean,
>   subcomparator text,
>   type text,
>   value_alias text,
>   PRIMARY KEY (keyspace_name, columnfamily_name)
> ) WITH
>   bloom_filter_fp_chance=0.010000 AND
>   caching='KEYS_ONLY' AND
>   comment='ColumnFamily definitions' AND
>   dclocal_read_repair_chance=0.000000 AND
>   gc_grace_seconds=8640 AND
>   read_repair_chance=0.000000 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
>
>
> Unfortunately, the Cassandra instance that was running with IBM Java was on
> a test virtual machine that has since been deleted. Even if system tables
> are getting created with compression enabled, I do not see how that would
> happen with IBM Java, where it fails to load the native snappy library.
> CFMetaData.DEFAULT_COMPRESSOR should be null when snappy is not available.
>
>
> On Fri, May 3, 2013 at 11:00 AM, Edward Capriolo <edlinuxguru@gmail.com>wrote:
>
>> I did not know the system tables were compressed. That seems like an odd
>> decision; you would think the system tables are small and would not
>> benefit much from compression. Is it a static object that requires
>> initialization even though it is not used?
>>
>>
>> On Fri, May 3, 2013 at 10:19 AM, John Sanda <john.sanda@gmail.com> wrote:
>>
>>> Is there a way to change the sstable_compression for system tables? I am
>>> trying to deploy Cassandra 1.2.2 on a platform with IBM Java and a 32-bit
>>> arch, where the snappy-java native library fails to load. The error I get
>>> looks like:
>>>
>>> ERROR [SSTableBatchOpen:1] 2013-05-02 14:42:42,485 CassandraDaemon.java (line 132) Exception in thread Thread[SSTableBatchOpen:1,5,main]
>>> java.lang.RuntimeException: Cannot create CompressionParameters for stored parameters
>>>         at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:99)
>>>         at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:63)
>>>         at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.complete(CompressedSegmentedFile.java:51)
>>>         at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:404)
>>>         at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:198)
>>>         at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:149)
>>>         at org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:238)
>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
>>>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:345)
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:177)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
>>>         at java.lang.Thread.run(Thread.java:780)
>>> Caused by: org.apache.cassandra.exceptions.ConfigurationException: SnappyCompressor.create() threw an error: java.lang.NoClassDefFoundError org.xerial.snappy.Snappy (initialization failure)
>>>         at org.apache.cassandra.io.compress.CompressionParameters.createCompressor(CompressionParameters.java:179)
>>>         at org.apache.cassandra.io.compress.CompressionParameters.<init>(CompressionParameters.java:71)
>>>         at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:95)
>>>         ... 12 more
>>> Caused by: java.lang.reflect.InvocationTargetException
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
>>>         at java.lang.reflect.Method.invoke(Method.java:613)
>>>         at org.apache.cassandra.io.compress.CompressionParameters.createCompressor(CompressionParameters.java:156)
>>>         ... 14 more
>>> Caused by: java.lang.NoClassDefFoundError: org.xerial.snappy.Snappy (initialization failure)
>>>         at java.lang.J9VMInternals.initialize(J9VMInternals.java:176)
>>>         at org.apache.cassandra.io.compress.SnappyCompressor.create(SnappyCompressor.java:45)
>>>         ... 19 more
>>> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
>>>         at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:229)
>>>         at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
>>>         at java.lang.J9VMInternals.initializeImpl(Native Method)
>>>         at java.lang.J9VMInternals.initialize(J9VMInternals.java:236)
>>>         at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:150)
>>>         at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:366)
>>>         at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:409)
>>>
>>>
>>> I am not able to change sstable_compression for system tables from
>>> either cassandra-cli or cqlsh. I should point out that the DataStax
>>> docs do state that system tables cannot be altered. I was wondering,
>>> though, whether there might be another way to do so.
>>>
>>> Simply not using IBM Java is not an option for me. There is already an
>>> issue[1] open with the snappy-java project that I think will address the
>>> problem; however, that would involve packaging a new version of snappy-java
>>> with Cassandra (when the fix is available). I would like to better
>>> understand the impact of switching to a patched and/or upgraded version of
>>> snappy-java before making that change.
>>>
>>> [1] https://github.com/xerial/snappy-java/issues/34
>>>
>>> Thanks
>>>
>>> - John
>>>
>>
>>
>
>
> --
>
> - John
>



-- 

- John
