That spreadsheet doesn't take compression into account, which
matters a lot in my case. Uncompressed, my data is going to require
a petabyte of storage according to the spreadsheet, and I'm pretty
sure I won't get that much storage to play with.
The spreadsheet also shows that Cassandra wastes an unbelievable
amount of space on compaction. My experiments with LevelDB, however,
show that it is possible for a write-optimized database to use
negligible compaction space. I am not sure how LevelDB does it; I
guess it splits the larger sstables into smaller chunks and merges
them one at a time.
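For what it's worth, here is a rough sketch of the intuition (not
LevelDB's actual code, and the chunk size and function names are my
own invention): if a sorted run is kept as fixed-size chunks, a
compaction only has to rewrite the chunks whose key ranges overlap
the incoming data, so the transient extra space is bounded by the
chunk size rather than by the size of the whole table.

```python
CHUNK = 2  # max keys per chunk; tiny value, purely for illustration

def split_into_chunks(sorted_keys, chunk=CHUNK):
    """Split one sorted run into fixed-size chunks."""
    return [sorted_keys[i:i + chunk] for i in range(0, len(sorted_keys), chunk)]

def merge_chunk(level_chunks, new_chunk):
    """Merge one small chunk into a level of non-overlapping chunks.

    Only the chunks whose key ranges overlap new_chunk are rewritten,
    so the peak temporary space is a few chunks, not a full copy of
    the level.
    """
    lo, hi = new_chunk[0], new_chunk[-1]
    overlapping = [c for c in level_chunks if c[0] <= hi and c[-1] >= lo]
    untouched = [c for c in level_chunks if c not in overlapping]
    merged_keys = sorted(set(new_chunk) | {k for c in overlapping for k in c})
    return sorted(untouched + split_into_chunks(merged_keys))
```

With a size-tiered scheme, merging would instead rewrite entire
large sstables at once, which is where the big temporary space
overhead comes from.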
Anyway, does anybody know how densely I can store the data with
Cassandra when compression is enabled? Would I have to implement
some smart adaptive grouping to fit lots of records in one row, or
is there a simpler solution?
On 4. 10. 2013 at 1:56, Andrey Ilinykh wrote: