lucene-solr-user mailing list archives

From Rohit Kanchan <rohitkan2...@gmail.com>
Subject Re: Solr Delete By Id Out of memory issue
Date Sat, 25 Mar 2017 20:21:42 GMT
I think we figured out the issue. When we were converting delete-by-query in
a Solr handler we were not making a deep copy of the BytesRef. We were
keeping a reference to the same object, which was causing the oldDeletes map
(a LinkedHashMap) to grow past 1K entries.
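The aliasing described above can be shown in plain Java. This is a minimal stand-in sketch, not Lucene's actual BytesRef (the real fix is `BytesRef.deepCopyOf`): storing a reused mutable buffer in a LinkedHashMap without a deep copy leaves every entry viewing the last value written into the buffer.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AliasingDemo {
    // Stand-in for Lucene's BytesRef: a mutable view over a shared byte[].
    // (Hypothetical class for illustration; real code uses BytesRef.deepCopyOf.)
    static class ByteView {
        byte[] bytes;
        ByteView(byte[] bytes) { this.bytes = bytes; }
        static ByteView deepCopyOf(ByteView other) {
            return new ByteView(other.bytes.clone());
        }
        String asString() { return new String(bytes); }
    }

    // Simulates reusing one buffer for many delete ids. Without a deep copy,
    // every map entry ends up aliasing whatever the buffer held last.
    static Map<String, ByteView> collect(boolean deepCopy) {
        Map<String, ByteView> oldDeletes = new LinkedHashMap<>();
        byte[] shared = new byte[3];
        ByteView view = new ByteView(shared);
        for (String id : new String[] {"id1", "id2", "id3"}) {
            System.arraycopy(id.getBytes(), 0, shared, 0, 3);
            oldDeletes.put(id, deepCopy ? ByteView.deepCopyOf(view) : view);
        }
        return oldDeletes;
    }

    public static void main(String[] args) {
        System.out.println(collect(false).get("id1").asString()); // id3 (aliased)
        System.out.println(collect(true).get("id1").asString());  // id1 (copied)
    }
}
```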

But I think it is still not clearing those 1K entries. Eventually it will
throw an OOM, because UpdateLog is not a singleton: when there are many
deletes by id and the server is not restarted for a very long time, it will
eventually run out of memory. I think we should clear this map when we are
committing. I am not a committer, so it would be great to get a reply from a
committer. What do you guys think?
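For reference, the 1K bound mentioned in this thread comes from UpdateLog's oldDeletes being a LinkedHashMap with an eviction override. A minimal sketch of that pattern (capacity of 3 here instead of Solr's 1000, and the class name is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedDeletes {
    // A LinkedHashMap that evicts its eldest entry once it exceeds `cap`,
    // the same idiom UpdateLog uses to keep oldDeletes at 1000 entries.
    static <K, V> Map<K, V> boundedMap(int cap) {
        return new LinkedHashMap<K, V>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > cap;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Long> oldDeletes = boundedMap(3);
        for (int i = 1; i <= 5; i++) {
            oldDeletes.put("id" + i, (long) i);
        }
        // Only the 3 most recent ids remain; id1 and id2 were evicted.
        System.out.println(oldDeletes.keySet()); // [id3, id4, id5]
    }
}
```

If entries alias a mutated key or value, the map still caps its entry count, but what those entries point at can be wrong, which is consistent with the aliasing bug described above.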

Thanks
Rohit


On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan <rohitkan2000@gmail.com>
wrote:

> For commits we are relying on auto commits. We have defined the following
> in our configs:
>
>        <autoCommit>
>            <maxDocs>10000</maxDocs>
>            <maxTime>30000</maxTime>
>            <openSearcher>false</openSearcher>
>        </autoCommit>
>
>        <autoSoftCommit>
>            <maxTime>15000</maxTime>
>        </autoSoftCommit>
>
> One thing I would like to mention is that we are not calling deleteById
> directly from the client. We have created an update chain and added a
> processor to it. In this processor we query first, collect the
> BytesRefHash, get each BytesRef out of it, and set it as the indexedId.
> After collecting the indexedIds we use those ids to call delete by id. We
> are doing this because we do not want to query Solr before deleting on the
> client side. It is possible that there is a bug in this code, but I am not
> sure, because when I run tests locally they do not show any issues. I am
> trying to remote debug now.
>
> Thanks
> Rohit
>
>
> On Wed, Mar 22, 2017 at 9:57 AM, Chris Hostetter <hossman_lucene@fucit.org
> > wrote:
>
>>
>> : OK, The whole DBQ thing baffles the heck out of me so this may be
>> : totally off base. But would committing help here? Or at least be worth
>> : a test?
>>
>> this isn't DBQ -- the OP specifically said deleteById, and that the
>> oldDeletes map (only used for DBI) was the problem according to the heap
>> dumps they looked at.
>>
>> I suspect you are correct about the root cause of the OOMs ... perhaps the
>> OP isn't using hard/soft commits effectively enough and the uncommitted
>> data is what's causing the OOM ... hard to say w/o more details, or
>> confirmation of exactly what the OP was looking at in their claim below
>> about the heap dump....
>>
>>
>> : > : Thanks for replying. We are using Solr 6.1 version. Even I saw that
>> : > : it is bounded by 1K count, but after looking at heap dump I was
>> : > : amazed how can it keep more than 1K entries. But yes, I see around
>> : > : 7M entries according to heap dump and around 17G of memory occupied
>> : > : by BytesRef there.
>> : >
>> : > what exactly are you looking at when you say you see "7M entries"?
>> : >
>> : > are you sure you aren't confusing the keys in oldDeletes with other
>> : > instances of BytesRef in the JVM?
>>
>>
>> -Hoss
>> http://www.lucidworks.com/
>>
>
>
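The processor flow described in the quoted message above (query first, collect the matching ids, then delete each one by id) can be sketched in plain Java. Everything here is a stand-in: a Map plays the index, a Predicate plays the delete query, and the class and method names are mine; the real processor would work with BytesRef ids and deep-copy each one (`BytesRef.deepCopyOf`) before the underlying buffer is reused.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class DbqToDbiSketch {
    // Converts a delete-by-query into per-id deletes: query first, collect
    // the matching ids, then remove each document individually.
    static List<String> deleteByQueryAsIds(Map<String, String> index,
                                           Predicate<String> query) {
        // Step 1: run the query and collect matching ids. In the real
        // processor each id is a BytesRef that must be deep-copied here.
        List<String> ids = new ArrayList<>();
        for (Map.Entry<String, String> doc : index.entrySet()) {
            if (query.test(doc.getValue())) {
                ids.add(doc.getKey());
            }
        }
        // Step 2: one delete per collected id (stands in for deleteById).
        for (String id : ids) {
            index.remove(id);
        }
        return ids;
    }

    public static void main(String[] args) {
        Map<String, String> index = new LinkedHashMap<>();
        index.put("doc1", "stale");
        index.put("doc2", "fresh");
        index.put("doc3", "stale");
        System.out.println(deleteByQueryAsIds(index, v -> v.equals("stale"))); // [doc1, doc3]
        System.out.println(index.keySet()); // [doc2]
    }
}
```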
