cassandra-user mailing list archives

From 郝加来 <>
Subject Re: Re: Too many open files Cassandra
Date Sat, 07 Nov 2015 02:52:33 GMT
Too many connections, perhaps?
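If the suspicion is client connections, socket descriptors can be counted directly from the kernel's fd table. A rough sketch (uses this shell's own PID as a stand-in; substituting the Cassandra PID is left to the reader):

```shell
# Count socket fds held by a process; each open client connection
# costs one descriptor. $$ (this shell) is a stand-in -- substitute
# the Cassandra PID on a real node.
pid=$$
ls -l /proc/"$pid"/fd 2>/dev/null | grep -c socket || true
```

The `|| true` keeps the pipeline's exit status clean when the count is zero, since `grep -c` exits non-zero on no matches.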


From: Jason Lewis
Date: 2015-11-07 10:38
Subject: Re: Too many open files Cassandra
cat /proc/5980/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             2063522              2063522              processes
Max open files            100000               100000               files
Max locked memory         unlimited            unlimited            bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       2063522              2063522              signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

On Fri, Nov 6, 2015 at 4:01 PM, Sebastian Estevez <> wrote:

You probably need to configure ulimits correctly.

What does this give you?

cat /proc/<cassandra PID>/limits
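For reference, a sketch of checking the limit and one common way to raise it (the paths and the 1048576 value below are assumptions for illustration, not settings confirmed in this thread):

```shell
# Check the soft/hard open-files limit of a process; 'self' is used
# here so the line runs anywhere -- swap in the Cassandra PID on a node.
grep 'Max open files' /proc/self/limits

# To raise it persistently, one common approach (an assumption; adjust
# for your distro) is a drop-in under /etc/security/limits.d/, e.g.:
#   cassandra  -  nofile  1048576
# On systemd-managed nodes, LimitNOFILE= in a unit override is needed
# instead, since systemd services ignore pam_limits.
```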

All the best,

Sebastián Estévez
Solutions Architect | 954 905 8615 |


On Fri, Nov 6, 2015 at 1:56 PM, Branton Davis <> wrote:

We recently went down the rabbit hole of trying to understand the output of lsof.  lsof -n
has a lot of duplicates (files opened by multiple threads).  Use 'lsof -p $PID' or 'lsof -u
cassandra' instead.
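The duplicate problem can also be sidestepped entirely by counting the kernel's per-process fd table, which lists each descriptor exactly once regardless of thread count. A sketch ($$ stands in for the Cassandra PID):

```shell
# Count descriptors actually held by a process. Unlike task-level lsof
# output, /proc/<pid>/fd has no per-thread duplication, so this is the
# number that is actually compared against the nofile limit.
pid=$$   # substitute the Cassandra PID
ls /proc/"$pid"/fd | wc -l
```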

On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng <> wrote:

Is your compaction progressing as expected? If not, this may cause an excessive number of
tiny db files. Had a node refuse to start recently because of this, had to temporarily remove
limits on that process.
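One quick way to gauge a backlog of tiny SSTables is to count `*-Data.db` files per table directory (`nodetool compactionstats` shows pending compactions directly). The sketch below uses a throwaway directory so it is runnable as-is; on a real node the default data path, typically /var/lib/cassandra/data, would be the target:

```shell
# Count SSTable data files; a very large count under one table's
# directory suggests compaction is falling behind. A temp dir with
# fake files stands in for the real Cassandra data directory.
d=$(mktemp -d)
touch "$d"/ks-t1-ka-1-Data.db "$d"/ks-t1-ka-2-Data.db "$d"/ks-t1-ka-1-Index.db
find "$d" -name '*-Data.db' | wc -l    # prints 2 for this sample
rm -r "$d"
```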

On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis <> wrote:

I'm getting too many open files errors and I'm wondering what the
cause may be.

lsof -n | grep java  shows 1.4M files

~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700k are the data files.

What might be causing so many files to be open?
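A breakdown like the one above can be produced mechanically by tallying lsof rows on the TYPE column. A sketch against a small inline sample (the here-doc is fake data so the pipeline runs as-is; replace it with real `lsof -p <PID>` output):

```shell
# Tally lsof output by TYPE (field 5): REG = regular files, FIFO = pipes,
# and so on. NR > 1 skips the header row.
awk 'NR > 1 { count[$5]++ } END { for (t in count) print t, count[t] }' <<'EOF'
COMMAND  PID  USER  FD  TYPE  DEVICE  SIZE/OFF  NODE  NAME
java     5980 cass  10r REG   8,1     4096      12    /var/lib/cassandra/data/x-Data.db
java     5980 cass  11r REG   8,1     4096      13    /var/lib/cassandra/data/y-Data.db
java     5980 cass  12  FIFO  0,8     0t0       14    pipe
EOF
```

For this sample it prints `REG 2` and `FIFO 1` (awk's array iteration order is unspecified, so the two lines may appear in either order).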

