lucene-dev mailing list archives

From "Hoss Man (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-5081) Highly parallel document insertion hangs SolrCloud
Date Tue, 06 Aug 2013 02:16:48 GMT

    [ https://issues.apache.org/jira/browse/SOLR-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13730269#comment-13730269 ]

Hoss Man commented on SOLR-5081:
--------------------------------

bq. I actually did this exact test when I was in this state originally, and the insert worked,
which totally confused the situation for me. 

ok ... hold up ... basically what you're saying is "the first time I saw this problem (SolrCloud
"hangs" and is "deadlocked" under heavy document insertion load) I tried to insert a single
document and it worked."

...which makes no sense to me, because if that's the case, then what exactly do you mean by
"hangs" and "deadlocked"?

So let's back up: 

* what do you observe about your system that leads you to believe there is a problem? 
* what aspect of your observations doesn't match what you expect?
* what do you expect to observe? 
* how are you making these observations? 

Wild shot in the dark: what does your indexing code look like?  Is it possible that your indexing
code is encountering some "deadlock" of its own, independent of anything happening in Solr?
If you are using SolrJ, can you get thread dumps from your indexing client apps when you
observe this "deadlock" situation?  (Again: this info is useless unless we have a better understanding
of what exactly you are observing that you think indicates a problem.)
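
For reference, a minimal sketch of what grabbing that information from inside the client JVM
might look like (the class and method names here are purely illustrative, nothing Solr- or
SolrJ-specific; running "jstack <pid>" against the client process from the outside gets you
the same stacks):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

/**
 * Call from a watchdog thread inside the indexing client when the
 * apparent hang is observed.  Prints every live thread's stack, plus
 * any lock cycle the JVM itself detects.
 */
public final class ClientDiagnostics {

  public static void dumpThreads() {
    ThreadMXBean mx = ManagementFactory.getThreadMXBean();

    // Full stack of every live thread, including which monitors and
    // java.util.concurrent locks each thread holds or is waiting on.
    for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
      System.err.print(info);
    }

    // Ask the JVM directly for threads deadlocked on monitors or
    // ownable synchronizers (ReentrantLock and friends).
    long[] ids = mx.findDeadlockedThreads();
    if (ids == null) {
      System.err.println("No JVM-level deadlock detected in this client.");
    } else {
      for (ThreadInfo info : mx.getThreadInfo(ids, true, true)) {
        System.err.println("DEADLOCKED: " + info);
      }
    }
  }

  private ClientDiagnostics() {}
}
{code}

If every indexing thread in that dump is parked waiting on an HTTP response from Solr, that
points one way; if they're blocked on each other (or on a full queue in your own code), that
points another.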

> Highly parallel document insertion hangs SolrCloud
> --------------------------------------------------
>
>                 Key: SOLR-5081
>                 URL: https://issues.apache.org/jira/browse/SOLR-5081
>             Project: Solr
>          Issue Type: Bug
>          Components: SolrCloud
>    Affects Versions: 4.3.1
>            Reporter: Mike Schrag
>         Attachments: threads.txt
>
>
> If I do a highly parallel document load using a Hadoop cluster into an 18-node SolrCloud
> cluster, I can deadlock Solr every time.
> The ulimits on the nodes are:
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 1031181
> max locked memory       (kbytes, -l) unlimited
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 32768
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 515590
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> The open file count is only around 4000 when this happens.
> If I bounce all the servers, things start working again, which makes me think this is
> Solr and not ZK.
> I'll attach the stack trace from one of the servers.
