incubator-cassandra-user mailing list archives

From "Hiller, Dean" <>
Subject Re: too many open files
Date Mon, 15 Jul 2013 13:23:25 GMT
I believe "too many open files" really means too many open file descriptors, so you may
want to check the number of open sockets as well to see if you are hitting the descriptor
limit. Sockets open a descriptor and count toward the limit too, I believe… I am quite
rusty on this, though…

To see the file descriptors open for sockets and files…
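On Linux, one way to count a process's open descriptors (sockets included) is through /proc; a sketch, using the current shell as a stand-in for the Cassandra PID (the `pgrep` pattern is an assumption, not something from this thread):

```shell
# Linux-only sketch: everything under /proc/<pid>/fd counts toward the
# "open files" limit, sockets included. For Cassandra you would use
# something like: pid=$(pgrep -f CassandraDaemon)   (pattern is a guess)
pid=$$   # demo: inspect this shell itself

# Total open descriptors for the process
ls /proc/"$pid"/fd | wc -l

# The per-process limit actually in effect (may differ from `ulimit -n`
# in your login shell if the daemon was started another way)
grep 'open files' /proc/"$pid"/limits
```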


From: Brian Tarbox <<>>
Reply-To: "<>" <<>>
Date: Monday, July 15, 2013 7:16 AM
To: "<>" <<>>
Subject: Re: too many open files

Odd that this discussion happens now, as I'm also getting this error.  I get a burst of error
messages and then the system continues... with no apparent ill effect.
I can't tell what the system was doing at the time; here is the stack.  BTW, OpsCenter says
I only have 4 or 5 SSTables in each of my 6 CFs.

ERROR [ReadStage:62384] 2013-07-14 18:04:26,062 (line 135) Exception
in thread Thread[ReadStage:62384,5,main] /tmp_vol/cassandra/data/dev_a/portfoliodao/dev_a-portfoliodao-hf-166-Data.db
(Too many open files)
        at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(
        at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(
        at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(
        at org.apache.cassandra.db.CollationController.collectTimeOrderedData(
        at org.apache.cassandra.db.CollationController.getTopLevelColumns(
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(
        at org.apache.cassandra.db.Table.getRow(
        at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(
        at org.apache.cassandra.db.ReadVerbHandler.doVerb(
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
        at java.util.concurrent.ThreadPoolExecutor$
Caused by: /tmp_vol/cassandra/data/dev_a/portfoliodao/dev_a-portfoliodao-hf-166-Data.db
(Too many open files)
        at Method)
        ... 16 more

On Mon, Jul 15, 2013 at 7:23 AM, Michał Michalski <<>> wrote:
It doesn't tell you anything if the file name ends with "ic-###", except pointing out the
SSTable version it uses ("ic" in this case).

Files related to secondary indexes contain something like this in the filename: <KS>-<CF>.<IDX-NAME>,
while filenames for "regular" CFs do not contain any dots except the one just before the file extension.
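That naming rule can be checked mechanically; a sketch with two made-up file names (the index name "status_idx" is hypothetical, and the keyspace/table names are just taken from the stack trace above for illustration):

```shell
# Two sample sstable names: the first carries a secondary-index component
# (<KS>-<CF>.<IDX-NAME>-...), the second belongs to a regular CF.
# Both names are invented for this example.
printf '%s\n' \
  'dev_a-portfoliodao.status_idx-hf-12-Data.db' \
  'dev_a-portfoliodao-hf-166-Data.db' |
grep -c '\..*\.'   # more than one dot => index sstable; counts 1 here
```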


On 15.07.2013 09:38, Paul Ingalls wrote:

Also, looking through the log, it appears a lot of the files end with ic-####, which I assume
is associated with a secondary index I have on the table.  Are secondary indexes really expensive
from a file descriptor standpoint?  That particular table uses the default compaction scheme...

On Jul 15, 2013, at 12:00 AM, Paul Ingalls <<>> wrote:

I have one table that is using leveled.  It was set to 10MB; I will try changing it to 256MB.
Is there a good way to merge the existing sstables?

On Jul 14, 2013, at 5:32 PM, Jonathan Haddad <<>> wrote:

Are you using leveled compaction?  If so, what do you have the file size set at?  If you're
using the defaults, you'll have a ton of really small files.  I believe Albert Tobey recommended
using 256MB for the table sstable_size_in_mb to avoid this problem.
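A sketch of what that change could look like, not verified against a live cluster; the keyspace/table names are placeholders borrowed from the stack trace above, and you should check whether your nodetool version supports the `-a` flag before relying on it:

```shell
# Raise the LCS target sstable size via the compaction sub-options:
cqlsh -e "ALTER TABLE dev_a.portfoliodao
          WITH compaction = {'class': 'LeveledCompactionStrategy',
                             'sstable_size_in_mb': 256};"

# Then rewrite the existing sstables so the new size takes effect
# (-a asks nodetool to rewrite even sstables already on the current version):
nodetool upgradesstables -a dev_a portfoliodao
```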

On Sun, Jul 14, 2013 at 5:10 PM, Paul Ingalls <<>> wrote:
I'm running into a problem where instances of my cluster are hitting over 450K open files.
Is this normal for a 4 node 1.2.6 cluster with a replication factor of 3 and about 50GB of
data on each node?  I can push the file descriptor limit up, but I plan on having a much larger
load, so I'm wondering if I should be looking at something else….

Let me know if you need more info…
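Since pushing the descriptor limit up came up: on Linux with PAM, the usual place is /etc/security/limits.conf. A sketch, assuming the daemon runs as a user named "cassandra" (restart the service afterwards for it to take effect):

```shell
# /etc/security/limits.conf — "cassandra" user name is an assumption
cassandra  soft  nofile  100000
cassandra  hard  nofile  100000
```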


Jon Haddad
skype: rustyrazorblade
