cassandra-user mailing list archives

From Paul Ingalls <>
Subject Re: too many open files
Date Mon, 15 Jul 2013 07:00:35 GMT
I have one table that is using leveled compaction. It was set to 10MB; I will try changing it to 256MB. Is there a good way to merge the existing sstables?
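For the record, the per-table change I'm planning looks like this in cqlsh (a sketch; the keyspace and table names `ks.mytable` are placeholders):

```
ALTER TABLE ks.mytable
  WITH compaction = { 'class': 'LeveledCompactionStrategy',
                      'sstable_size_in_mb': 256 };
```

My understanding is that existing sstables are only rewritten into the larger size as they get compacted again, so the small files would disappear gradually rather than all at once, but I may be wrong about that.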

On Jul 14, 2013, at 5:32 PM, Jonathan Haddad <> wrote:

> Are you using leveled compaction?  If so, what do you have the file size set at?  If
> you're using the defaults, you'll have a ton of really small files.  I believe Albert Tobey
> recommended using 256MB for the table sstable_size_in_mb to avoid this problem.
> On Sun, Jul 14, 2013 at 5:10 PM, Paul Ingalls <> wrote:
> I'm running into a problem where instances of my cluster are hitting over 450K open files.
> Is this normal for a 4 node 1.2.6 cluster with replication factor of 3 and about 50GB of
> data on each node?  I can push the file descriptor limit up, but I plan on having a much larger
> load so I'm wondering if I should be looking at something else….
> Let me know if you need more info…
> Paul
> -- 
> Jon Haddad
> skype: rustyrazorblade
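For anyone hitting the same thing: a quick way to check the limit and see how many descriptors the Cassandra JVM is actually holding (a hedged sketch; the process name `CassandraDaemon` and a Linux `/proc` filesystem are assumptions about the setup):

```shell
# Soft file-descriptor limit for the current shell
ulimit -n

# Count descriptors held by the Cassandra process, if one is running
# (pgrep/lsof availability and the "CassandraDaemon" name are assumptions)
pid=$(pgrep -f CassandraDaemon | head -n 1)
if [ -n "$pid" ]; then
  ls /proc/"$pid"/fd | wc -l
fi
```

Raising the limit permanently is usually done in /etc/security/limits.conf (or the init script for the service), not just with `ulimit` in an interactive shell.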
