lucene-solr-user mailing list archives

From Otis Gospodnetic <otis_gospodne...@yahoo.com>
Subject Re: Getting a document by primary key
Date Mon, 03 Nov 2008 19:49:25 GMT
Is this your code or something from Solr?
That indexSearcher = new IndexSearcher(path_index) ; is very suspicious looking.
Are you creating a new IndexSearcher for every search request?  If so, that's the cause of
your memory problem.
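
A minimal sketch of that pattern, for comparison (this is not code from the thread: the class
and variable names are made up, StandardAnalyzer stands in for whatever analyzer is actually
used, and "id_field" plus the index path are taken from the quoted code below). The searcher
is opened once, reused for every lookup, and closed only at shutdown:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;

public class SharedSearcherLookup {

    // Created once (e.g. at application startup) and shared by all requests.
    private final IndexSearcher indexSearcher;

    public SharedSearcherLookup(String pathIndex) throws Exception {
        indexSearcher = new IndexSearcher(pathIndex);
    }

    public Document findById(String idValue) throws Exception {
        QueryParser queryParser = new QueryParser("id_field", new StandardAnalyzer());
        Query query = queryParser.parse(idValue);
        TopDocs topDocs = indexSearcher.search(query, null, 1);  // one hit is enough
        return topDocs.scoreDocs.length > 0
                ? indexSearcher.doc(topDocs.scoreDocs[0].doc)
                : null;
    }

    // Close only when the application shuts down, not after every request.
    public void close() throws Exception {
        indexSearcher.close();
    }
}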

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



----- Original Message ----
> From: Marc Sturlese <marc.sturlese@gmail.com>
> To: solr-user@lucene.apache.org
> Sent: Monday, November 3, 2008 2:40:00 PM
> Subject: Re: Getting a document by primary key
> 
> 
> Hey there,
> I never run out of memory, but I think the app always runs close to the limit... The
> problem seems to be here (searching by term):
> try {
>             indexSearcher = new IndexSearcher(path_index) ;
>             
>             QueryParser queryParser = new QueryParser("id_field",
> getAnalyzer(stopWordsFile)) ;
>             Query query = queryParser.parse(query_string) ;
>             
>             Hits hits = indexSearcher.search(query) ;
>             
>             if(hits.length() > 0) {
>                 doc = hits.doc(0) ;
>             }
>             
>         } catch (Exception ex) {
>             
>         } finally {
>             if(indexSearcher != null) {
>                 try {
>                     indexSearcher.close() ;
>                 } catch(Exception e){} ;
>                 indexSearcher = null ;
>             }
>         }
> 
> As Hits is deprecated I tried to use TermDocs and TopDocs... but the memory
> problem never disappeared...
> If I call the garbage collector every time I run the code above, the memory
> doesn't increase indefinitely, but... the app runs really slowly.
> Any suggestions?
> Thanks for replying!
> 
> 
> Yonik Seeley wrote:
> > 
> > On Sun, Nov 2, 2008 at 8:09 PM, Marc Sturlese wrote:
> >> I am doing the same and I am experiencing some trouble. I get the document
> >> data by searching by term. The problem is that when I do it several times
> >> (inside a huge for loop) the app's memory use keeps increasing until almost
> >> all the memory is used...
> > 
> > That just sounds like the way Java's garbage collection tends to
> > work... do you ever run out of memory (and get an exception)?
> > 
> > -Yonik
> > 
> > 
> 
> -- 
> View this message in context: 
> http://www.nabble.com/Getting-a-document-by-primary-key-tp20072108p20309245.html
> Sent from the Solr - User mailing list archive at Nabble.com.
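
For the TermDocs approach mentioned in the quoted message, a rough sketch along the same
lines (again only illustrative: the class name is made up, "id_field" is taken from the
quoted code, and the reader is assumed to be opened once and shared rather than per call).
A unique-key term matches at most one live document, so the lookup can read it directly
without running a query:

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

public class TermDocsLookup {

    // Opened once and shared by all lookups.
    private final IndexReader reader;

    public TermDocsLookup(String pathIndex) throws Exception {
        reader = IndexReader.open(pathIndex);
    }

    public Document getByPrimaryKey(String idValue) throws Exception {
        TermDocs termDocs = reader.termDocs(new Term("id_field", idValue));
        try {
            // The unique-key term matches at most one non-deleted document.
            return termDocs.next() ? reader.document(termDocs.doc()) : null;
        } finally {
            termDocs.close();
        }
    }

    // Close only when the application shuts down.
    public void close() throws Exception {
        reader.close();
    }
}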

