cassandra-user mailing list archives

From Mick Semb Wever <...@apache.org>
Subject Re: get_range_slices OOM on CompressionMetadata.readChunkOffsets(..)
Date Mon, 31 Oct 2011 10:35:10 GMT
On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
> >> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)
> >
> >
> > I see now this was a bad choice.
> > The read pattern of these rows is always in bulk so the chunk_length
> > could have been much higher so to reduce memory usage (my largest
> > sstable is 61G).
> >
> > After changing the chunk_length, is there any way to rebuild just some
> > sstables rather than having to do a full nodetool scrub?
> 
> Provided you're using SizeTieredCompaction (i.e., the default), you can
> trigger a "user defined compaction" through JMX on each of the sstables
> you want to rebuild. Not necessarily a fun process, though. Also note that
> you can scrub just an individual column family, if that was the question.
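For the archives, here is a rough sketch of what that JMX invocation could look like from a standalone Java client. The keyspace name "MyKeyspace", the sstable file name, and the two-string signature of forceUserDefinedCompaction (keyspace, comma-separated data files, as in the 1.0-era CompactionManagerMBean) are assumptions to check against your Cassandra version; 7199 is Cassandra's default JMX port:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UserDefinedCompaction {
    public static void main(String[] args) throws Exception {
        // Default Cassandra JMX endpoint; adjust host/port for your node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        ObjectName compactionManager =
                new ObjectName("org.apache.cassandra.db:type=CompactionManager");

        // Only attempt the call when an sstable data file name is passed in,
        // e.g. "MyKeyspace-MyCF-h-42-Data.db" (hypothetical file name).
        if (args.length > 0) {
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                // Assumed signature: forceUserDefinedCompaction(keyspace, dataFiles)
                conn.invoke(compactionManager, "forceUserDefinedCompaction",
                        new Object[] { "MyKeyspace", args[0] },
                        new String[] { String.class.getName(),
                                       String.class.getName() });
            } finally {
                jmxc.close();
            }
        } else {
            System.out.println("would invoke forceUserDefinedCompaction on "
                    + compactionManager.getCanonicalName());
        }
    }
}
```

The same operation can of course be run interactively from jconsole against the CompactionManager MBean instead of a one-off client.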

Actually, this won't work, I think.

I presume that scrub, or any "user defined compaction", will still need to
call SSTableReader.openDataReader(..) and so will still OOM no matter what...
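Back-of-the-envelope arithmetic, assuming readChunkOffsets(..) holds one 8-byte offset per compressed chunk in memory per open reader (an assumption about the implementation, not something stated in this thread), shows why the 61G sstable with 16 KB chunks hurts:

```java
public class ChunkOffsetMemory {
    public static void main(String[] args) {
        long sstableBytes = 61L * 1024 * 1024 * 1024;  // the 61G sstable

        // Assumed cost: one 8-byte offset per compressed chunk, per reader.
        long chunks16k = sstableBytes / (16 * 1024);   // chunk_length_kb = 16
        long chunks64k = sstableBytes / (64 * 1024);   // chunk_length_kb = 64

        System.out.printf("16 KB chunks: %,d offsets, ~%d MB of offsets%n",
                chunks16k, chunks16k * 8 / (1024 * 1024));
        System.out.printf("64 KB chunks: %,d offsets, ~%d MB of offsets%n",
                chunks64k, chunks64k * 8 / (1024 * 1024));
    }
}
```

Under that assumption it works out to roughly 30 MB of offsets for the one sstable at 16 KB chunks versus under 8 MB at 64 KB, and a range slice touching many such sstables multiplies the cost.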

How the hell am I supposed to re-chunk_length an sstable? :-(

~mck

-- 
"We all may have come on different ships, but we’re in the same boat
now." Martin Luther King, Jr.

| http://semb.wever.org | http://sesat.no |
| http://tech.finn.no   | Java XSS Filter |

