lucene-java-user mailing list archives

From Anna Björk Nikulásdóttir <anna.b....@gmx.de>
Subject Re: Avoid automaton Memory Usage
Date Wed, 07 Aug 2013 17:18:14 GMT
Ah, I see. I will look into AnalyzingInfixSuggester. I suppose it could be useful as an
alternative to AnalyzingSuggester rather than to FuzzySuggester?

What would help in my case, since I use the same FST for both suggesters, is if the same
FST object could be shared between them. What I currently do is call AnalyzingSuggester.store()
and then use the stored file for both AnalyzingSuggester.load() and FuzzySuggester.load().
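
To illustrate, the loading side looks roughly like this; the file name and analyzer are just
placeholders for the sketch, and in reality the data is built and stored beforehand with
AnalyzingSuggester.store():

    import java.io.FileInputStream;
    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester;
    import org.apache.lucene.search.suggest.analyzing.FuzzySuggester;
    import org.apache.lucene.util.Version;

    public class LoadSharedSuggestData {
        public static void main(String[] args) throws IOException {
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);

            // "suggest.bin" was written earlier via AnalyzingSuggester.store()
            // after building the suggester from the ~500,000 terms.
            AnalyzingSuggester analyzing = new AnalyzingSuggester(analyzer);
            analyzing.load(new FileInputStream("suggest.bin"));

            // FuzzySuggester extends AnalyzingSuggester, so it can load the same
            // stored data, but load() still deserializes a second FST onto the heap.
            FuzzySuggester fuzzy = new FuzzySuggester(analyzer);
            fuzzy.load(new FileInputStream("suggest.bin"));
        }
    }

So the file on disk is shared, but each suggester still ends up with its own in-memory FST.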

Unfortunately there is no immutable FST class, but since I do not use it in a multithreaded
environment, that is probably not a problem, is it? A quick fix could be to copy the suggester
classes, change them accordingly, and reuse the FST object. Does this make sense functionally,
or should I expect problems?
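
What I have in mind is roughly this; to be clear, these accessors do not exist in Lucene 4.3,
the names are purely hypothetical and only sketch what "reusing the FST object" would mean in
the copied classes:

    // Hypothetical sketch only: getFst()/setFst() do not exist in Lucene 4.3.
    // The copied/patched suggester classes would expose their internal FST so the
    // second suggester can point at the instance the first one already loaded,
    // instead of deserializing its own copy:
    //
    //   AnalyzingSuggester analyzing = ...;              // loaded once from disk
    //   FuzzySuggester fuzzy = new FuzzySuggester(analyzer);
    //   fuzzy.setFst(analyzing.getFst());                // hypothetical: share one FST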

Would a patch adding such behaviour to the existing suggester classes make sense, or is this
use case too specific?

regards,

Anna.


On 7 Aug 2013, at 14:01, Michael McCandless <lucene@mikemccandless.com> wrote:

> Unfortunately, the FST based suggesters currently must be HEAP
> resident.  In theory this is fixable, e.g. if we could map the FST and
> then access it via DirectByteBuffer ... maybe open a Jira issue to
> explore this possibility?
> 
> You could also try AnalyzingInfixSuggester; it uses a "normal" Lucene
> index (though, it does load things up into in-memory DocValues fields
> by default).  And of course it differs from the other suggesters in
> that it's not "pure prefix" matching.  You can see it running at
> http://jirasearch.mikemccandless.com ... try typing fst, for example.
> 
> 
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> 
> On Wed, Aug 7, 2013 at 9:32 AM, Anna Björk Nikulásdóttir
> <anna.b.nik@gmx.de> wrote:
>> Hi,
>> 
>> I am using Lucene 4.3 on Android for term auto-suggestions (>500,000 terms). I am using
>> both FuzzySuggester and AnalyzingSuggester, each for their specific strengths. Everything
>> works great, but my app consumes 69MB of RAM, with most of that dedicated to the suggester
>> classes. This is too much for many older devices, and Android imposes RAM limits on those.
>> As I understand it, these suggester classes consume RAM because they use in-memory automatons.
>> Is it possible - similar to Lucene indexes - to keep these automatons on "disk" rather than
>> in memory, or is there an alternative approach with similarly good results that works with
>> most data on disk/flash?
>> 
>> regards,
>> 
>> Anna.
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org

