lucene-general mailing list archives

From luc...@digiatlas.org
Subject Re: problems with large Lucene index
Date Mon, 09 Mar 2009 14:25:53 GMT
Thanks Michael,

There is no sorting on the result (adding a sort causes OOM well
before the point at which it runs out with the default relevance sort).

There are no deleted docs - the index was created from a set of docs  
and no adds or deletes have taken place.

Memory isn't being consumed elsewhere in the system; it all comes down
to the Lucene call made via Hibernate Search. We decided to split our huge
index into a set of several smaller indexes. Like the original single
index, each smaller index has one tokenized field, with NO_NORMS set on
the other fields.

The following, explicitly specifying just one index, works fine:

org.hibernate.search.FullTextQuery fullTextQuery =
    fullTextSession.createFullTextQuery( outerLuceneQuery, MarcText2.class );

But as soon as we start adding further indexes:

org.hibernate.search.FullTextQuery fullTextQuery =
    fullTextSession.createFullTextQuery( outerLuceneQuery, MarcText2.class, MarcText8.class );

we run into OOM.

In our case the MarcText2 index has a total disk size of 5 GB
(57,589,069 documents / 75,491,779 terms) and MarcText8 has a total size of
6.46 GB (79,339,982 documents / 104,943,977 terms).

Adding all 8 indexes (the same as our original single index), either
by explicitly naming them or just with:

org.hibernate.search.FullTextQuery fullTextQuery =
    fullTextSession.createFullTextQuery( outerLuceneQuery );

results in it becoming completely unusable.


One thing I am not sure about: for an index (neither of the indexes
mentioned above) that was created with NO_NORMS set on all the fields,
Luke tells me:

"Index functionality: lock-less, single norms, shared doc store,  
checksum, del count, omitTf"

Is this correct?  I am not sure what it means by "single norms" - I  
would have expected it to say "no norms".
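
For anyone who wants to double-check this outside Luke, something along
these lines (a rough sketch against the Lucene 2.x API; the class name and
index path are just placeholders) should report whether each field actually
carries norms:

import java.util.Collection;
import java.util.Iterator;
import org.apache.lucene.index.IndexReader;

// Rough sketch: list every field in the index and whether it has norms.
// Replace the path with the real index directory.
public class CheckNorms {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open("/path/to/MarcText2-index");
        try {
            Collection fields = reader.getFieldNames(IndexReader.FieldOption.ALL);
            for (Iterator it = fields.iterator(); it.hasNext();) {
                String field = (String) it.next();
                System.out.println(field + ": hasNorms=" + reader.hasNorms(field));
            }
        } finally {
            reader.close();
        }
    }
}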


Any further ideas on where to go from here? Your estimate of what is
loaded into memory suggests that we shouldn't really be anywhere near
running out of memory with indexes of this size!
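
Working through your estimate with our per-index numbers (assuming one
byte of norms per document for the one field that still has norms,
i.e. "value"):

MarcText2:      57,589,069 docs x 1 byte = ~55 MB of norms
MarcText8:      79,339,982 docs x 1 byte = ~76 MB of norms
Both together: 136,929,051 bytes         = ~131 MB
All 8 indexes: ~286,000,000 bytes        = ~273 MB

Even the full set should be well under our 1200 MB heap.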

As I said in my OP, Luke also gets a heap error when searching our
original single large index, which makes me wonder whether the problem
lies in the construction of the index.
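
If it is a construction problem, running Lucene's CheckIndex tool over
the index should flag it - something along the lines of (jar version and
paths are placeholders for whatever is installed here):

java -cp lucene-core-2.4.0.jar org.apache.lucene.index.CheckIndex /path/to/original-index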



Quoting Michael McCandless <lucene@mikemccandless.com>:

>
> Lucene is trying to allocate the contiguous norms array for your index,
> which should be ~273 MB (286 million docs x 1 byte/doc, i.e.
> 286,000,000/1024/1024 MB), when it hits the OOM.
>
> Is your search sorting by field value?  (Which'd also consume memory.)
> Or it's just the default (by relevance) sort?
>
> The only other biggish consumer of memory should be the deleted docs,
> but that's a BitVector so it should need ~34 MB RAM.
>
> Can you run a memory profiler to see what else is consuming RAM?
>
> Mike
>
> lucene@digiatlas.org wrote:
>
>> Hello,
>>
>> I am using Lucene via Hibernate Search but the following problem is
>> also seen using Luke. I'd appreciate any suggestions for solving
>> this problem.
>>
>> I have a Lucene index (27 GB in size) that indexes a database table
>> of 286 million rows. While Lucene was able to perform this indexing
>> just fine (albeit very slowly), using the index has proved to be
>> impossible. Any searches conducted on it, either from my Hibernate
>> Search query or by placing the query into Luke, give:
>>
>> java.lang.OutOfMemoryError: Java heap space
>> at org.apache.lucene.index.MultiReader.norms(MultiReader.java:271)
>> at org.apache.lucene.search.TermQuery$TermWeight.scorer(TermQuery.java:69)
>> at org.apache.lucene.search.BooleanQuery$BooleanWeight.scorer(BooleanQuery.java:230)
>> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:131)
>> ...
>>
>>
>> The type of queries are simple, of the form:
>>
>> (+value:church +marcField:245 +subField:a)
>>
>> which in this example should only return a few thousand results.
>>
>>
>> The interpreter is already running with the maximum heap space
>> allowed for the Java executable running on Windows XP
>> (java -Xms1200m -Xmx1200m).
>>
>>
>> The Lucene index was created using the following Hibernate Search   
>> annotations:
>>
>> @Column
>> @Analyzer(impl=org.apache.lucene.analysis.SimpleAnalyzer.class)
>> @Field(index=org.hibernate.search.annotations.Index.NO_NORMS, store=Store.NO)
>> private Integer marcField;
>>
>> @Column(length = 2)
>> @Analyzer(impl=org.apache.lucene.analysis.SimpleAnalyzer.class)
>> @Field(index=org.hibernate.search.annotations.Index.NO_NORMS, store=Store.NO)
>> private String subField;
>>
>> @Column(length = 2)
>> @Analyzer(impl=org.apache.lucene.analysis.SimpleAnalyzer.class)
>> @Field(index=org.hibernate.search.annotations.Index.NO_NORMS, store=Store.NO)
>> private String indicator1;
>>
>> @Column(length = 2)
>> @Analyzer(impl=org.apache.lucene.analysis.SimpleAnalyzer.class)
>> @Field(index=org.hibernate.search.annotations.Index.NO_NORMS, store=Store.NO)
>> private String indicator2;
>>
>> @Column(length = 10000)
>> @Field(index=org.hibernate.search.annotations.Index.TOKENIZED, store=Store.NO)
>> private String value;
>>
>> @Column
>> @Analyzer(impl=org.apache.lucene.analysis.SimpleAnalyzer.class)
>> @Field(index=org.hibernate.search.annotations.Index.NO_NORMS, store=Store.NO)
>> private Integer recordId;
>>
>>
>> So all of the fields have NO_NORMS except for "value", which
>> contains description text that needs to be tokenised.
>>
>> Is there any way around this? Does Lucene really have such a low
>> limit for how much data it can search (and I consider 286 million
>> documents to be pretty small beer - we were hoping to index a table
>> of over a billion rows)? Or is there something I'm missing?
>>
>> Thanks.
>>
>>
>>



