See my comments inline
On Mon, Sep 24, 2012 at 10:02 AM, Vitalii Tymchyshyn <firstname.lastname@example.org> wrote:
> Why so?
> What are the pluses and minuses?
> As for me, I am looking at the number of files in a directory.
> 700GB/512MB*5 (files per SST) = 7000 files, that is OK from my view.
> 700GB/5MB*5 = 700000 files, that is too much for a single directory, too much
> memory used for SST data, too huge a compaction queue (that leads to strange
> pauses, I suppose because the compactor is thinking about what to compact next), ...

Not sure why a lot of files is a problem... modern filesystems deal
with that pretty well.
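Vitalii's file-count estimate above is easy to reproduce with a back-of-the-envelope calculation. The figures below are illustrative only, taken from his message (he rounds 700GB with decimal units, so the 5MB case comes out near his ~700000):

```python
# Rough sstable file counts at two sstable sizes, using the figures
# from the quoted message (illustrative assumptions, not real data).

data_per_node_mb = 700 * 1024   # ~700 GB of data per node, in MB
files_per_sstable = 5           # Data, Index, Filter, Statistics, ...

for sstable_size_mb in (512, 5):
    sstables = data_per_node_mb // sstable_size_mb
    files = sstables * files_per_sstable
    print(f"{sstable_size_mb} MB sstables -> ~{sstables} sstables, ~{files} files")
```

With 512MB sstables that is ~7000 files in the data directory; with the 5MB default it is on the order of 700000.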
Really large sstables mean that compactions now take a lot more
disk IO and time to complete. Remember, Leveled Compaction is more
disk-IO intensive, so using large sstables makes that even worse.
This is a big reason why the default is 5MB. Also, each level is 10x
the size of the previous level, and for leveled compaction you need
10x the sstable size worth of free space to do compactions. So now
you need 5GB of free disk, vs 50MB of free disk.
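The headroom figures above can be sketched as follows, assuming the ~10x-sstable-size rule of thumb stated in this message (a simplification of real compaction behavior):

```python
# Free-disk headroom needed for a leveled compaction, assuming the
# ~10x-sstable-size rule of thumb from the message above (illustrative).

def compaction_headroom_mb(sstable_size_mb, fanout=10):
    """Worst-case free space a single compaction may need, in MB."""
    return sstable_size_mb * fanout

print(compaction_headroom_mb(5))    # 50 MB with the 5 MB default
print(compaction_headroom_mb(512))  # 5120 MB (~5 GB) with 512 MB sstables
```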
Also, if you're doing deletes in those CFs, that old, deleted data is
going to stick around a LOT longer with 512MB files, because it can't
get deleted until you have 10x 512MB files to compact to level 2.
Heaven forbid it doesn't get deleted then, because each level is 10x
bigger, so you end up waiting a LOT longer to actually delete that data.
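To see why deleted data lingers, consider how much data each level must accumulate before it compacts upward. This is a simplified model of the 10x-per-level growth described above, not actual LCS internals:

```python
# Simplified model: with a fanout of 10, level N holds roughly
# 10**N sstables' worth of data, so a tombstone sitting in L1 may not
# merge away until ~10 sstables have accumulated there (illustrative).

def level_capacity_mb(sstable_size_mb, level, fanout=10):
    """Approximate data volume a level holds before spilling upward, in MB."""
    return sstable_size_mb * fanout ** level

for size in (5, 512):
    print(f"{size} MB sstables: L1 ~{level_capacity_mb(size, 1)} MB, "
          f"L2 ~{level_capacity_mb(size, 2)} MB")
```

With 5MB sstables, L1 fills at ~50MB; with 512MB sstables you wait for ~5GB to pile up before the same merge happens, and each higher level multiplies that wait by 10.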
Now, if you're using SSDs then larger sstables are probably doable,
but even then I'd guesstimate 50MB is far more reasonable than 512MB.
> 2012/9/23 Aaron Turner <email@example.com>
>> On Sun, Sep 23, 2012 at 8:18 PM, Vitalii Tymchyshyn <firstname.lastname@example.org> wrote:
>> > If you think about space, use Leveled compaction! This won't only allow
>> > you to fill more space, but will also shrink your data much faster in
>> > case of updates. Size-tiered compaction can give you 3x-4x more space
>> > used than there is live data. Consider the following (over-simplified)
>> > scenario:
>> > 1) The data is updated weekly
>> > 2) Each week a large SSTable is written (say, 300GB) after full update
>> > processing.
>> > 3) In 3 weeks you will have 1.2TB of data in 3 large SSTables.
>> > 4) Only after the 4th week will they all be compacted into one 300GB
>> > SSTable.
>> > Leveled compaction has tamed space for us. Note that you should set
>> > sstable_size_in_mb to a reasonably high value (it is 512 for us with
>> > ~700GB per node) to prevent creating a lot of small files.
>> 512MB per sstable? Wow, that's freaking huge. From my conversations
>> with various developers 5-10MB seems far more reasonable. I guess it
>> really depends on your usage patterns, but that seems excessive to me,
>> especially as sstables are promoted.
> Best regards,
> Vitalii Tymchyshyn
http://synfin.net/         Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
    -- Benjamin Franklin
"carpe diem quam minimum credula postero"