cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Updated: (CASSANDRA-2304) sstable2json dies with "Too many open files", regardless of ulimit
Date Thu, 10 Mar 2011 16:38:59 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Jonathan Ellis updated CASSANDRA-2304:
--------------------------------------

    Attachment: 2304.txt

Patch to close the column iterator and to make only one pass per row.
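The fix described above targets a classic descriptor-leak pattern: an iterator opens a file handle when constructed but is never closed, so each exported row leaks one descriptor until the process hits the OS limit. A minimal, self-contained sketch of the close-in-finally fix (the `ColumnIterator` class here is hypothetical, not Cassandra's actual API; a counter stands in for real file descriptors):

```java
import java.io.Closeable;
import java.util.Iterator;
import java.util.List;

public class CloseDemo {
    // Stand-in for the process's open file descriptor count.
    static int openHandles = 0;

    // Hypothetical column iterator that "opens a file" on construction
    // and must be closed to release it (mirrors the leak in the report).
    static class ColumnIterator implements Iterator<String>, Closeable {
        private final Iterator<String> inner;
        ColumnIterator(List<String> columns) {
            openHandles++;                 // simulate opening a descriptor
            this.inner = columns.iterator();
        }
        public boolean hasNext() { return inner.hasNext(); }
        public String next()     { return inner.next(); }
        public void close()      { openHandles--; }   // release it
    }

    static void serializeRow(List<String> columns) {
        ColumnIterator it = new ColumnIterator(columns);
        try {
            while (it.hasNext()) {
                it.next();                 // consume the row's columns
            }
        } finally {
            it.close();  // without this line, each row leaks one handle
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {   // one iterator per exported row
            serializeRow(List.of("col" + i));
        }
        System.out.println("open handles: " + openHandles);
    }
}
```

With the `finally` block in place the handle count returns to zero no matter how many rows are exported; drop the `close()` call and the count grows by one per row, which is exactly the "Too many open files" failure mode regardless of how high `ulimit -n` is set.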

> sstable2json dies with "Too many open files", regardless of ulimit
> ------------------------------------------------------------------
>
>                 Key: CASSANDRA-2304
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2304
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>    Affects Versions: 0.7.1
>            Reporter: Jason Harvey
>            Assignee: Jonathan Ellis
>            Priority: Minor
>             Fix For: 0.7.4
>
>         Attachments: 2304.txt, sstable.tar.bz2
>
>
> Running sstable2json on the attached sstable eventually results in the following:
> {code}
> Exception in thread "main" java.io.IOError: java.io.FileNotFoundException: /var/lib/cassandra/data/reddit/CommentSortsCache-f-9764-Data.db (Too many open files)
>         at org.apache.cassandra.io.util.BufferedSegmentedFile.getSegment(BufferedSegmentedFile.java:68)
>         at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:567)
>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:49)
>         at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:68)
>         at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>         at org.apache.cassandra.tools.SSTableExport.serializeRow(SSTableExport.java:187)
>         at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:355)
>         at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:377)
>         at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:390)
>         at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:448)
> Caused by: java.io.FileNotFoundException: /var/lib/cassandra/data/reddit/CommentSortsCache-f-9764-Data.db (Too many open files)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:111)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:106)
>         at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:91)
>         at org.apache.cassandra.io.util.BufferedSegmentedFile.getSegment(BufferedSegmentedFile.java:62)
> {code}
> I set my ulimit -n to 60000 and got the same result. Leaking file descriptors?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
