lucene-java-user mailing list archives

From Chris Hostetter <hossman_luc...@fucit.org>
Subject Re: Out of memory exception for big indexes
Date Fri, 06 Apr 2007 18:00:53 GMT

: Would it be fair to say that you can expect OutOfMemory errors if you
: run complex queries? ie sorts, boosts, weights...

not intrinsically ... the amount of memory used has more to do with the size
of the index and the sorting done than it does with the number of clauses
in your query (of course, more complex clauses - like FunctionQueries that
use FieldCaches - can use more memory).

: +(pathNodeId_2976569:1^5.0 pathNodeId_2976969:1 pathNodeId_2976255:1 pathNodeId_2976571:1)
: +(pathClassId:1 pathClassId:346 pathClassId:314) -id:369

...the big thing that jumps out at me here is that you seem to be using
very dynamic field names for storing boolean values ... have you set the
OMIT_NORMS option on all of those fields? ... having norms makes your
index bigger, and contributes to the memory usage at query time (and
doesn't add any benefit to a query like this)
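for reference, a minimal sketch of what turning norms off looks like at
index time with the Lucene 2.x Field API (the field name and value are
placeholders mirroring the dynamic pathNodeId_* fields in your query, and
I'm assuming Field.setOmitNorms(boolean) is available in your version):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class NoNormsFieldDemo {
      // builds a document with an indexed-only boolean flag field
      public static Document buildDoc() {
        Document doc = new Document();
        Field flag = new Field("pathNodeId_2976569", "1",
                               Field.Store.NO, Field.Index.UN_TOKENIZED);
        // with norms on, every indexed field costs one byte per document
        // once its norms are loaded at search time, which adds up fast
        // when you have lots of dynamic field names
        flag.setOmitNorms(true);
        doc.add(flag);
        return doc;
      }
    }

note that omitting norms only affects documents indexed after the change,
so existing documents would need to be reindexed.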

: java.lang.OutOfMemoryError: Java heap space
: Dumping heap to java_pid4512.hprof ...
: Heap dump file created [71421503 bytes in 2.640 secs]
: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
:         at org.apache.lucene.index.MultiReader.norms(MultiReader.java:173)

...that could indicate that norms are your problem, or you could already
have norms turned off on all but one field, and it's just the straw that
breaks the camel's back ... looking at a visualization of your heap is the
only thing that's really going to tell you what is taking up all your RAM.


-Hoss



