lucene-java-user mailing list archives

From Ivan Vasilev <>
Subject Re: Out of memory exception for big indexes
Date Mon, 23 Apr 2007 19:09:03 GMT
Hi All,
I posted this problem to the forum, but unfortunately I had no chance to work on it last 
week...
So now I have tested Artem's patch, but the results show:
1) speed is very slow compared with usage without the patch;
2) there is no big difference in memory usage (so far I have tested only 
with relatively small indexes - less than 1 GB and less than 1 million 
docs - because when using 20-40 GB indexes I had to wait more 
than 5 minutes, which is practically useless).

So I have doubts about whether I am using the patch correctly. I do just what is 
described in Artem's letter:

AV> You can include StoredFieldSortFactory class source file into your sources and
AV> then use StoredFieldSortFactory.create(sortFieldName, sortDescending) to get
AV> Sort object for sorting query.
AV> StoredFieldSortFactory source file can be extracted from LUCENE-769 patch or
AV> from sharehound sources:*checkout*/sharehound/jNetCrawler/src/java/org/apache/lucene/search/
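In code, the usage Artem describes would look something like the sketch below. This is only an illustration: the index path, field name, and query are made-up placeholders, and StoredFieldSortFactory comes from the LUCENE-769 patch, not from stock Lucene.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TermQuery;

// Sketch: sort results by a stored field using the factory from the patch.
IndexSearcher searcher = new IndexSearcher("/path/to/index");  // made-up path
Query query = new TermQuery(new Term("contents", "lucene"));   // made-up query
// "date" is an example field name; true = sort descending
Sort sort = StoredFieldSortFactory.create("date", true);
Hits hits = searcher.search(query, sort);
```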

What I am wondering about is that in the patch comments 
( I see it written that the patch solves the problem by using a 
WeakHashMap, but the downloaded file does not actually use a 
WeakHashMap. Another thing: the comments in the LUCENE-769 issue 
mention classes like WeakDocumentsCache and 
DocCachingIndexReader, but I did not find them in the Lucene source code, 
nor as classes in So my questions are:
1. Is it enough to include the file in the 
source code, or are there also other classes that I have to download and include?
2. Do I have to use this DocCachingIndexReader instead of the Reader I 
currently use in cases where I expect an OOMException and will use this patch?
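For context, the WeakHashMap idea mentioned in the LUCENE-769 comments can be sketched in plain JDK code. This is my own illustrative sketch, not code from the patch: the class and method names (WeakDocumentsCacheSketch, loadFromIndex) are hypothetical stand-ins for whatever the patch's WeakDocumentsCache actually does.

```java
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch of a weak-keyed document cache: documents are cached
// by key, but the GC may reclaim entries when memory is tight, instead of
// every cached document being held for the lifetime of the reader.
public class WeakDocumentsCacheSketch {
    // WeakHashMap holds its keys weakly: once a key is no longer strongly
    // referenced elsewhere, its entry becomes eligible for removal.
    private final Map<Integer, String> cache = new WeakHashMap<Integer, String>();

    public String getDocument(Integer docId) {
        String doc = cache.get(docId);
        if (doc == null) {
            // Expensive load stands in for a real stored-field lookup
            // such as IndexReader.document(n).
            doc = loadFromIndex(docId);
            cache.put(docId, doc);
        }
        return doc;
    }

    // Placeholder for the real index access.
    private String loadFromIndex(Integer docId) {
        return "doc-" + docId;
    }

    public static void main(String[] args) {
        WeakDocumentsCacheSketch cache = new WeakDocumentsCacheSketch();
        System.out.println(cache.getDocument(42));  // prints doc-42
    }
}
```

The trade-off is exactly what the test results above hint at: weak caching bounds memory, but repeated loads after eviction cost speed.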

Thanks to all once again :),
