This is very useful. Thanks Aaron!
AFAIK, if the entire row can be read into memory, the compaction will be faster. The in_memory_compaction_limit_in_mb setting decides how big a row can be before compaction has to fall back to a slower two-pass process. Also, my understanding is that one of the main factors for compaction speed is the number of over-writes for rows / columns; e.g. if the data for a row is spread over a lot of SSTables (from new columns and/or updates and/or deletes), it will take longer to compact that row. Hope that helps. Aaron
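For reference, a sketch of the relevant knob in cassandra.yaml (the setting name matches the 0.7-era default config; the 64 MB value here is illustrative, so verify the name and default against your own version):

```yaml
# cassandra.yaml
# Rows smaller than this limit are compacted entirely in memory (fast path).
# Rows larger than it fall back to the slower two-pass compaction Aaron describes.
in_memory_compaction_limit_in_mb: 64
```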
On 04 Dec, 2010,at 09:23 AM, Narendra Sharma <firstname.lastname@example.org> wrote:
What is the impact (performance and I/O) of row size (in bytes) on compaction?
What is the impact (performance and I/O) of number of super columns and columns on compaction?
Does anyone have any details and data to share?