lucene-solr-user mailing list archives

From "hawk.wan@139.com" <hawk....@139.com>
Subject Re: solr hangs
Date Mon, 07 Aug 2017 04:29:12 GMT
We found that the problem is caused by the delete command. Each request deleted a single
document by id.

 url --> http://10.91.1.120:8900/solr/taoke/update?&commit=true&wt=json
    body --> {"delete":["20ec36ade0ca4da3bcd78269e2300f6f"]}

After we send more than 3000 of these requests, Solr starts throwing OOM (OutOfMemoryError) exceptions.

We have now changed the logic to put all ids into one array (a single batched delete request), and Solr works without any exception.

Not sure whether Solr optimizes deletes internally.
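For reference, the batched form we switched to can be sketched like this. It is a minimal sketch: the first id is the real one from the request above, the second is a made-up placeholder, and the body is still POSTed to the same /update URL.

```python
import json

def build_delete_body(ids):
    """Build one Solr delete-by-id request body covering a whole batch of ids."""
    # One {"delete": [...]} body replaces N single-id requests.
    return json.dumps({"delete": ids})

# Second id is a placeholder for illustration only.
body = build_delete_body(["20ec36ade0ca4da3bcd78269e2300f6f",
                          "ffffffffffffffffffffffffffffffff"])
```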

Thanks
Hawk

> On 7 Aug 2017, at 9:20 AM, Erick Erickson <erickerickson@gmail.com> wrote:
> 
> You have several possibilities here:
> 1> you're hitting a massive GC pause that's timing out. You can turn
> on GC logging and analyze if that's the case.
> 2> your updates are getting backed up. At some point it's possible
> that the index writer blocks until merges are done IIUC.
> 
> Does this ever happen if you throttle your updates? Does it go away if
> you batch your documents in batches of, say, 1,000?
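As an illustration of the batching suggested here: split the ids (or documents) into groups of at most 1,000 and send one request per group. `chunks` is a hypothetical helper sketched for this thread, not a Solr or SolrJ API.

```python
def chunks(items, size=1000):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 2500 ids -> three requests instead of 2500.
batches = list(chunks(list(range(2500)), size=1000))
```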
> 
> Best,
> Erick
> 
> On Sun, Aug 6, 2017 at 5:19 PM, hawk.wan@139.com <hawk.wan@139.com> wrote:
>> Hi Eric,
>> 
>> I am using the RESTful API directly. In our application, the system issues the
>> HTTP requests directly to Solr.
>> 
>> <autoCommit>
>>       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>>       <maxDocs>10000</maxDocs>
>>       <openSearcher>true</openSearcher>
>> </autoCommit>
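(As an aside on the config quoted above: a commonly recommended Solr pattern is to keep hard commits from opening a searcher and rely on soft commits for visibility. The sketch below shows that general pattern with placeholder intervals; it is not the config from this thread.)

```xml
<autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:60000}</maxTime>
</autoSoftCommit>
```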
>> 
>> 
>> Thanks
>> Hawk
>> 
>> 
>> 
>>> On 6 Aug 2017, at 11:10 AM, Erick Erickson <erickerickson@gmail.com> wrote:
>>> 
>>> How are you updating 50K docs? SolrJ? If so are you using
>>> CloudSolrClient? What are your commit settings? Details matter.
>>> 
>>> Best,
>>> Erick
>>> 
>>> On Sat, Aug 5, 2017 at 6:19 PM, hawk <hawk@welikev.com> wrote:
>>>> Hi All,
>>>> 
>>>> I have encountered a problem with Solr. In our environment, we set up 2 Solr
>>>> nodes. Every hour we send update requests to Solr to refresh the documents,
>>>> which total around 50k. From time to time Solr hangs and the client encounters
>>>> timeout issues.
>>>> 
>>>> Below is the exception from the Solr log.
>>>> 
>>>> 2017-08-06 07:28:03.682 ERROR (qtp401424608-31250) [c:taoke s:shard2 r:core_node2
>>>>       x:taoke_shard2_replica2] o.a.s.s.HttpSolrCall null:java.io.IOException:
>>>>       java.util.concurrent.TimeoutException: Idle timeout expired: 50000/50000 ms
>>>>       at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:219)
>>>>       at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:220)
>>>>       at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:583)
>>>>       at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:55)
>>>> 
>>>> 
>>>> 
>>>> Thanks
>>> 
>> 
> 


