incubator-cassandra-user mailing list archives

From jmodha <jmo...@gmail.com>
Subject Re: BulkLoading SSTables and compression
Date Mon, 02 Jul 2012 11:24:03 GMT
Thanks Sylvain.

I had a look at a node we streamed data to, and I do indeed see the
"..-CompressionInfo.db" files.

However, prior to running the "upgradesstables" command, the total size of
all the SSTables was 27 GB; afterwards it was 12 GB.
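(For reference, this is roughly how the on-disk numbers can be taken from the
shell; the data directory path below is illustrative and would need to match
your actual keyspace/column family layout:)

```shell
# Total size of the data components for the CF (path is illustrative):
du -sch /var/lib/cassandra/data/MyKeyspace/test/*-Data.db

# Each compressed SSTable should have a matching CompressionInfo component;
# counting them shows how many SSTables carry compression metadata:
ls /var/lib/cassandra/data/MyKeyspace/test/ | grep -c 'CompressionInfo.db'
```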

So even though the CompressionInfo files were there immediately after bulk
loading the data, the data wasn't actually compressed?

Can you think of anything else I can try to confirm this is indeed a bug?

Out of interest, we're not specifying an explicit chunk size in the schema
(in the hope that it would just use the default of 64 KB), so it reads
something like:

"create column family test
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'UTF8Type'
  and key_validation_class = 'BytesType'
  and compaction_strategy =
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
  and compression_options = {'sstable_compression' :
'org.apache.cassandra.io.compress.SnappyCompressor'};"
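One thing that might be worth trying, in case the default isn't being picked
up: spell the chunk size out explicitly. If I'm reading the 1.1-era options
correctly, the compression subproperty in cassandra-cli is "chunk_length_kb"
(the value below is just the default, stated explicitly):

```
update column family test
  with compression_options = {'sstable_compression' :
    'org.apache.cassandra.io.compress.SnappyCompressor',
    'chunk_length_kb' : '64'};
```

Note that, as I understand it, existing SSTables are only rewritten with the
new settings after "upgradesstables" or a compaction, so a schema change alone
wouldn't shrink the files already on disk.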

Would this cause any issues? 



--
View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/BulkLoading-SSTables-and-compression-tp7580849p7580933.html
Sent from the cassandra-user@incubator.apache.org mailing list archive at Nabble.com.
