lucene-java-user mailing list archives

From Gully Burns <gullybu...@gmail.com>
Subject MergeException: java.io.IOException: File too large
Date Mon, 10 Sep 2012 01:03:53 GMT
Hey Everyone,

I'm building a Solr store on version 3.6.1, and I hit the following
error once the index reaches about 1,000,000 documents.


commit{dir=/lfs1/code/apache-solr-3.6.1/vpdmfSolr/digitalLibrary/articlecitation/data/index,segFN=segments_3jki,version=1345856472880,generation=165330,filenames=[.... LONG LIST OF INDEX FILES .... ]
Sep 8, 2012 9:30:11 PM org.apache.solr.core.SolrDeletionPolicy updateCommits
INFO: newest commit = 1345856472880
Sep 8, 2012 9:30:11 PM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: {add=[8343524, 8343525, 8343526, 8343527, 8343528, 8343529, 8343530, 8343531, ... (100 adds)]} 0 60
Sep 8, 2012 9:30:11 PM org.apache.solr.core.SolrCore execute
INFO: [ArticleCitation] webapp=/solr path=/update params={wt=xml&version=2.2} status=0 QTime=60
Sep 8, 2012 9:30:11 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=false,waitFlush=true,waitSearcher=true,expungeDeletes=false)

Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: File too large
        at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:509)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
Caused by: java.io.IOException: File too large
        at java.io.RandomAccessFile.writeBytes(Native Method)
        at java.io.RandomAccessFile.write(RandomAccessFile.java:499)
        at org.apache.lucene.store.FSDirectory$FSIndexOutput.flushBuffer(FSDirectory.java:448)
        at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:99)
        at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:88)
        at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:113)
        at org.apache.lucene.store.FSDirectory$FSIndexOutput.close(FSDirectory.java:458)
        at org.apache.lucene.util.IOUtils.close(IOUtils.java:141)
        at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:139)
        at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:232)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:107)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4263)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3908)
        at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
        at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
Sep 8, 2012 9:30:12 PM org.apache.solr.core.SolrDeletionPolicy onCommit
INFO: SolrDeletionPolicy.onCommit: commits:num=2

My solrconfig.xml file has the following indexing parameters:

  <indexConfig>
    <maxFieldLength>10000</maxFieldLength>
    <writeLockTimeout>1000</writeLockTimeout>
    <ramBufferSizeMB>2048</ramBufferSizeMB>
    <useCompoundFile>false</useCompoundFile>
    <mergeFactor>25</mergeFactor>
  </indexConfig>
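
My guess is that the merge is trying to write a single segment file bigger
than something on the box will allow ("File too large" is the EFBIG message,
i.e. an OS or filesystem file-size limit, possibly a ulimit on the Jetty
process). If that's right, would capping the merged segment size help? A
minimal sketch of what I mean, assuming TieredMergePolicy (the default in
3.6) and a 5 GB cap I made up; I have not tested this:

  <indexConfig>
    ...
    <!-- Hypothetical cap so normal merges never write a segment file
         much larger than ~5 GB; the 5120 figure is a placeholder, not a
         tested value. An explicit optimize would still ignore this cap. -->
    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <double name="maxMergedSegmentMB">5120.0</double>
    </mergePolicy>
  </indexConfig>

Is that the right knob, or am I wrong about the cause?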

There are 451 separate index files, the largest of which is 60GB.

The system is Red Hat Linux, simply running the Jetty version of the Solr
server.

Note that the system seems to work fine even with the error, although the
number of files in the index seems very large.
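If the sheer file count ever becomes the problem, I assume I could switch
back to the compound file format, which packs each segment's ten-or-so files
into a single .cfs file at some indexing-speed cost; a one-line sketch:

  <useCompoundFile>true</useCompoundFile>

As far as I can tell, though, that would only shrink the file count, not fix
the merge exception.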

Still, having these exceptions worries me.

Any help would be much appreciated.
