From: Arindam Barua
To: user@cassandra.apache.org
Subject: RE: Getting into Too many open files issues
Date: Thu, 7 Nov 2013 19:09:28 +0000


I see 100 000 recommended in the DataStax documentation for the nofile limit since Cassandra 1.2:


http://www.datastax.com/documentation/cassandra/2.0/webhelp/cassandra/install/installRecommendSettings.html


-Arindam
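The number that matters when comparing against that recommendation is the limit the running JVM actually has, which can differ from what limits.conf says. A quick sketch for checking it on Linux, assuming a single Cassandra JVM; the pgrep pattern is illustrative:

    # Find the Cassandra JVM (the CassandraDaemon pattern is an assumption)
    pid=$(pgrep -f CassandraDaemon)
    # Effective limit of the running process, regardless of limits.conf
    grep 'Max open files' /proc/$pid/limits
    # Number of descriptors it is holding right now
    ls /proc/$pid/fd | wc -l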


From: Pieter Callewaert [mailto:pieter.callewaert@be-mobile.be]
Sent: Thursday, November 07, 2013 4:22 AM
To: user@cassandra.apache.org
Subject: RE: Getting into Too many open files issues


Hi Murthy,


32768 is a bit low (I know the DataStax docs recommend this). But our production env is now running on 1kk (1 000 000), or you can even put it on unlimited.


Pieter
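For completeness, the matching /etc/security/limits.conf entries would look something like the sketch below, assuming Cassandra runs as a dedicated cassandra user; the process has to be restarted from a fresh session before the new values take effect:

    # /etc/security/limits.conf -- raise the open-file limit to 1 000 000
    cassandra soft nofile 1000000
    cassandra hard nofile 1000000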


From: Murthy Chelankuri [mailto:kmurthy7@gmail.com]
Sent: Thursday, 7 November 2013 12:46
To: user@cassandra.apache.org
Subject: Re: Getting into Too many open files issues


Thanks Pieter for the quick reply.

I have downloaded the tarball, and have changed limits.conf as per the documentation, like below.

* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
* soft as unlimited
* hard as unlimited
root soft as unlimited
root hard as unlimited

root soft/hard nproc 32000
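One caveat with these settings: limits.conf is applied by PAM at the start of a login session, so a Cassandra process launched before the edit, or started through a path that bypasses PAM, silently keeps the old limit. A quick way to verify, assuming a tarball install run under your login user:

    # In a NEW shell session (pam_limits re-reads limits.conf at login):
    ulimit -Sn   # soft nofile limit
    ulimit -Hn   # hard nofile limit
    # If these still show the old values, this login path is not
    # applying pam_limits, and Cassandra inherits the old limit too.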

For some reason, within less than an hour the Cassandra node is opening 32768 files, and Cassandra is not responding after that.

It is still not clear why Cassandra is opening that many files and not closing them properly (does the latest Cassandra 2.0.1 version have some bugs?).

What I have been experimenting with is 300 writes per sec and 500 reads per sec.

And I am using a 2-node cluster with 8-core CPUs and 32 GB RAM (virtual machines).


Do we need to increase the nofile limits to more than 32768?
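Before raising the limit blindly, it can help to see what the descriptors are actually being spent on. A sketch, assuming lsof is installed and the pgrep pattern matches; the -Data.db pattern matches SSTable data components:

    pid=$(pgrep -f CassandraDaemon)
    # Total descriptors held (no lsof needed)
    ls /proc/$pid/fd | wc -l
    # How many of them are SSTable data files
    lsof -p $pid 2>/dev/null | grep -c -- '-Data.db'
    # Sockets count against the limit too
    lsof -p $pid 2>/dev/null | grep -c 'TCP'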

On Thu, Nov 7, 2013 at 4:55 PM, Pieter Callewaert <pieter.callewaert@be-mobile.be> wrote:

Hi Murthy,


Did you do a package install (.deb), or did you download the tar?

If the latter, you have to adjust the limits.conf file (/etc/security/limits.conf) to raise the nofile (number of open files) limit for the cassandra user.


If you are using the .deb package, the limit is already raised to 100 000 files (it can be found in /etc/init.d/cassandra, FD_LIMIT).

However, with 2.0.x I had to raise it to 1 000 000 because 100 000 was too low.


Kind regards,

Pieter Callewaert
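For reference, the mechanism in the .deb package is just a shell variable applied with ulimit before the daemon starts. An illustrative excerpt, with the exact lines varying by package version:

    # /etc/init.d/cassandra (.deb package), illustrative excerpt:
    FD_LIMIT=100000
    # ...applied later in the script, before the daemon is launched:
    ulimit -n "$FD_LIMIT"
    # Pieter's advice for 2.0.x amounts to raising the value:
    FD_LIMIT=1000000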


From: Murthy Chelankuri [mailto:kmurthy7@gmail.com]
Sent: Thursday, 7 November 2013 12:15
To: user@cassandra.apache.org
Subject: Getting into Too many open files issues


I have been experimenting with the latest Cassandra version for storing huge data in our application.

Writes are doing well, but when it comes to reads I have observed that Cassandra is running into "too many open files" issues. When I check the logs, it is not able to open the Cassandra data files any more because of the file descriptor limits.

Can someone suggest what I am doing wrong, and what issues could be causing the read operations to lead to the "Too many open files" issue?
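One read-path note that may explain the write/read asymmetry: writes mostly append to the commit log and memtables, while each read can touch several SSTables, and every open SSTable pins descriptors for its data and index files. A table with thousands of small SSTables, for example when compaction falls behind under write load, can therefore exhaust even a generous nofile limit. A sketch for spotting this, assuming nodetool is on the PATH; stat labels vary slightly across versions:

    # Per-table SSTable counts; very large counts mean reads open many files
    nodetool cfstats | grep -E 'Column Family:|Table:|SSTable count'
    # A growing backlog here points at compaction falling behind
    nodetool compactionstats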