incubator-cassandra-user mailing list archives

From Aaron Morton <aa...@thelastpickle.com>
Subject Re: Getting into Too many open files issues
Date Tue, 12 Nov 2013 04:00:36 GMT
> For some reason, within less than an hour the Cassandra node opens 32768 files and Cassandra stops responding after that.
Are you using Levelled Compaction?
If so, what value did you set for min_sstable_size? The default has changed from 5 to 160.
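
If it is still at the old 5MB size, every SSTable stays tiny and the file count explodes. A minimal sketch of checking and raising it from cqlsh (assuming a hypothetical table ks.tbl; the LCS option is exposed in CQL as sstable_size_in_mb):

    cqlsh> DESCRIBE TABLE ks.tbl;   -- inspect the current compaction options
    cqlsh> ALTER TABLE ks.tbl WITH compaction =
             {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};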


Increasing the file handle limit is the right thing to do, but 32K open files is a lot.
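
A quick way to see where the descriptors are going (a sketch; the pgrep pattern assumes the stock CassandraDaemon process name):

    # count file descriptors held by the Cassandra process
    lsof -n -p $(pgrep -f CassandraDaemon) | wc -l
    # or, without lsof, via procfs:
    ls /proc/$(pgrep -f CassandraDaemon)/fd | wc -l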

Cheers

-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

On 8/11/2013, at 8:09 am, Arindam Barua <abarua@247-inc.com> wrote:

>  
> I see 100 000 recommended in the Datastax documentation for the nofile limit since Cassandra 1.2:
>  
> http://www.datastax.com/documentation/cassandra/2.0/webhelp/cassandra/install/installRecommendSettings.html
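>  
> The limits.conf entries recommended there are roughly the following (paraphrased from the page, not a verbatim copy; check it for your exact version):
> 
>     * - memlock unlimited
>     * - nofile 100000
>     * - nproc 32768
>     * - as unlimited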
>  
> -Arindam
>  
> From: Pieter Callewaert [mailto:pieter.callewaert@be-mobile.be] 
> Sent: Thursday, November 07, 2013 4:22 AM
> To: user@cassandra.apache.org
> Subject: RE: Getting into Too many open files issues
>  
> Hi Murthy,
>  
> 32768 is a bit low (I know the Datastax docs recommend this), but our production env is now running on 1 000 000, or you can even set it to unlimited.
>  
> Pieter
>  
> From: Murthy Chelankuri [mailto:kmurthy7@gmail.com] 
> Sent: Thursday, 7 November 2013 12:46
> To: user@cassandra.apache.org
> Subject: Re: Getting into Too many open files issues
>  
> Thanks, Pieter, for the quick reply.
> 
> I have downloaded the tarball and changed limits.conf as per the documentation, like below.
> 
> * soft nofile 32768
> * hard nofile 32768
> root soft nofile 32768
> root hard nofile 32768
> * soft memlock unlimited
> * hard memlock unlimited
> root soft memlock unlimited
> root hard memlock unlimited
> * soft as unlimited
> * hard as unlimited
> root soft as unlimited
> root hard as unlimited
> 
> root soft nproc 32000
> root hard nproc 32000
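> 
> A quick way to confirm the running process actually picked these up (a sketch; assumes the process matches CassandraDaemon):
> 
>     cat /proc/$(pgrep -f CassandraDaemon)/limits | grep 'open files'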
> 
> 
> For some reason, within less than an hour the Cassandra node opens 32768 files and Cassandra stops responding after that.
> 
> It is still not clear why Cassandra is opening that many files and not closing them properly (does the latest Cassandra 2.0.1 version have some bugs?).
> 
> What I have been experimenting with is 300 writes per sec and 500 reads per sec.
> 
> And I am using a 2-node cluster with 8-core CPUs and 32 GB RAM (virtual machines).
>  
> 
> Do we need to increase the nofile limits to more than 32768?
> 
> On Thu, Nov 7, 2013 at 4:55 PM, Pieter Callewaert <pieter.callewaert@be-mobile.be> wrote:
> Hi Murthy,
>  
> Did you do a package install (.deb?) or did you download the tar?
> If the latter, you have to adjust the limits.conf file (/etc/security/limits.conf) to raise the nofile limit (number of open files) for the cassandra user.
>  
> If you are using the .deb package, the limit is already raised to 100 000 files (this can be found in /etc/init.d/cassandra, FD_LIMIT).
> However, with 2.0.x I had to raise it to 1 000 000 because 100 000 was too low.
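> 
> The change itself is a one-liner in the init script (a sketch; variable name as mentioned above, value per what we run in production):
> 
>     # in /etc/init.d/cassandra
>     FD_LIMIT=1000000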
>  
> Kind regards,
> Pieter Callewaert
>  
> From: Murthy Chelankuri [mailto:kmurthy7@gmail.com] 
> Sent: Thursday, 7 November 2013 12:15
> To: user@cassandra.apache.org
> Subject: Getting into Too many open files issues
>  
> I have been experimenting with the latest Cassandra version for storing huge data in our application.
> 
> Writes are doing well, but when it comes to reads I have observed that Cassandra is running into 'too many open files' issues. When I check the logs, it is not able to open the Cassandra data files any more because of the file descriptor limit.
> 
> Can someone suggest what I am doing wrong, and what could be causing the read operations to run into the 'Too many open files' issue?

