cassandra-user mailing list archives

From Jonathan Haddad <>
Subject Re: too many open files
Date Mon, 15 Jul 2013 00:32:46 GMT
Are you using leveled compaction?  If so, what do you have the sstable size
set at?  If you're using the default, you'll end up with a ton of really
small files.  I believe Albert Tobey recommended setting sstable_size_in_mb
to 256 on the table to avoid this problem.
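
Roughly, something like this (a sketch -- keyspace and table names here
are placeholders, substitute your own):

    ALTER TABLE my_keyspace.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 256};

Larger sstables means far fewer files on disk for the same data volume,
which directly reduces the open file descriptor count.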

On Sun, Jul 14, 2013 at 5:10 PM, Paul Ingalls <> wrote:

> I'm running into a problem where instances of my cluster are hitting over
> 450K open files.  Is this normal for a 4 node 1.2.6 cluster with
> replication factor of 3 and about 50GB of data on each node?  I can push
> the file descriptor limit up, but I plan on having a much larger load so
> I'm wondering if I should be looking at something else….
> Let me know if you need more info…
> Paul

Jon Haddad
skype: rustyrazorblade
