Nope. I think at least once a week I hear someone suggest one way to solve their problem is to "write an sstablesplit tool".
Does the "sstablesplit" function exist somewhere?
2012/11/10 Jim Cistaro <firstname.lastname@example.org>:
For some of our clusters, we have taken the periodic major compaction approach.
There are a few things to consider:
1) Once you start major compacting, depending on data size, you may be
committed to doing it periodically, because you create one big file that
will take forever to naturally compact against 3 like-sized files.
2) If you rely heavily on the file cache (rather than large row caches), each
major compaction effectively invalidates the entire file cache, because
everything is written to one new large file.
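[Editorial aside: the behavior Jim describes in point 1 falls out of how size-tiered compaction only groups SSTables of similar size. A minimal Python sketch of that bucketing idea follows; this is not Cassandra's actual code, and the thresholds and file sizes are illustrative.]

```python
def bucket_by_size(sizes, bucket_low=0.5, bucket_high=1.5):
    """Group file sizes into buckets of 'similar' size.

    A size joins an existing bucket if it falls within
    [bucket_low, bucket_high] times that bucket's average;
    otherwise it starts a bucket of its own.
    """
    buckets = []  # list of lists of sizes
    for size in sorted(sizes):
        for bucket in buckets:
            avg = sum(bucket) / len(bucket)
            if bucket_low * avg <= size <= bucket_high * avg:
                bucket.append(size)
                break
        else:
            buckets.append([size])
    return buckets

# Three ~100 MB SSTables plus the one huge file left by a major compaction:
sizes_mb = [100, 110, 120, 10_000]
buckets = bucket_by_size(sizes_mb)
# The 10 GB file lands alone in its own bucket, so with a minimum
# compaction threshold of 4 files it is never selected again until
# three more files of comparable size accumulate -- which takes
# "forever", as Jim puts it.
```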
On 11/9/12 11:27 AM, "Rob Coli" <email@example.com> wrote:
>On Thu, Nov 8, 2012 at 10:12 AM, B. Todd Burruss <firstname.lastname@example.org> wrote:
>> my question is would leveled compaction help to get rid of the
>> data faster than size tiered, and therefore reduce the disk space usage?
>You could also...
>1) run a major compaction
>2) code up sstablesplit
>This method incurs a management penalty if not automated, but is
>otherwise the most efficient way to deal with tombstones and obsolete
>data.
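[Editorial aside: no sstablesplit tool existed at the time of this thread, but later Cassandra releases did ship one (it must be run against a stopped node). A sketch of the workflow Rob describes, assuming that later tool and using illustrative keyspace, column family, and path names:]

```shell
# 1) Major-compact the column family into one big SSTable
nodetool compact my_keyspace my_columnfamily

# 2) Stop the node, then split the resulting SSTable back into
#    smaller files (-s is the target size in MB; path is illustrative)
sstablesplit --no-snapshot -s 50 \
    /var/lib/cassandra/data/my_keyspace/my_columnfamily/*-Data.db

# 3) Restart the node, so size-tiered compaction sees several
#    like-sized files instead of one giant one
```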