lucene-dev mailing list archives

From "Bala Kolla (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (LUCENE-6842) No way to limit the fields cached in memory and leads to OOM when there are thousand of fields (thousands)
Date Mon, 19 Oct 2015 12:49:05 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963251#comment-14963251
] 

Bala Kolla edited comment on LUCENE-6842 at 10/19/15 12:48 PM:
---------------------------------------------------------------

Yes, we literally have thousands of fields (maybe even a million); I will come back to you
with the exact number of fields. BTW, I am not actually seeing an OOM error; rather, our
application stops accepting requests once the heap is exhausted.
[~dawid.weiss], I don't think the core issue is the Java version. I think it's due to the
large number of unique fields, as suggested by [~mikemccand].
Also, this memory exhaustion happens simply by opening the index and starting a new search.
I am guessing that Lucene is trying to load all the fields into memory, and I wanted to know
if there is a way to limit the number of fields loaded. I would rather take a performance
hit by limiting the fields loaded into memory than have the entire application stop.
If there is no way to limit the fields loaded into memory, then I will go back to my team
and question the indexing rules that we have.

Thanks, and I appreciate your help.
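As a first step toward reporting the exact field count, here is a minimal sketch (assuming the Lucene 4.x API and the lucene-core jar on the classpath; the index path is a placeholder) that merges the per-segment FieldInfos and prints the number of unique indexed fields:

```java
// Sketch: count unique indexed fields with the Lucene 4.x API.
// "/path/to/index" is a placeholder for the real index directory.
import java.io.File;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.FieldInfos;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.store.FSDirectory;

public class FieldCounter {
    public static void main(String[] args) throws Exception {
        IndexReader reader = DirectoryReader.open(
                FSDirectory.open(new File("/path/to/index")));
        try {
            // Merge the per-segment FieldInfos to get the total
            // number of unique fields across the whole index.
            FieldInfos infos = MultiFields.getMergedFieldInfos(reader);
            System.out.println("unique indexed fields: " + infos.size());
        } finally {
            reader.close();
        }
    }
}
```

Running this against the production index would confirm whether the unique-field count is really in the thousands-to-millions range that [~mikemccand] suspects is the problem.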



> No way to limit the fields cached in memory and leads to OOM when there are thousand
of fields (thousands)
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-6842
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6842
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: core/search
>    Affects Versions: 4.6.1
>         Environment: Linux, openjdk 1.6.x
>            Reporter: Bala Kolla
>         Attachments: HistogramOfHeapUsage.png
>
>
> I am opening this defect to get some guidance on how to handle a case where the server runs
out of memory; it seems to be related to how we index. But I want to know if there is any way
to reduce the memory impact before we look into reducing the number of fields.
> Basically, we have many thousands of fields being indexed, and this causes a large amount
of memory to be used (25GB), eventually causing the application to hang and forcing us to
restart every few minutes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

