lucene-dev mailing list archives

From "Markus Jelsma (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-3685) Solr Cloud sometimes skipped peersync attempt and replicated instead due to tlog flags not being cleared when no updates were buffered during a previous replication.
Date Mon, 20 Aug 2012 11:20:38 GMT

    [ https://issues.apache.org/jira/browse/SOLR-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13437790#comment-13437790 ]

Markus Jelsma commented on SOLR-3685:
-------------------------------------

To my surprise, the RES for all nodes except the NIOFS node increased slowly over the past
three days and was still increasing today. The mmapped nodes sometimes used up to three times
the Xmx and, for some reason, about half the Xmx in shared memory. We just restarted all nodes,
nine using mmap and one using NIO; right after the restart the mmapped nodes immediately
started using a lot more RES than the NIO node. The NIO node also uses much less shared memory.

Perhaps what I've seen before, with NIO also crashing, was due to some other issue.

So what we're seeing here is that the mmapped nodes use more RES and SHR than the NIO node;
VIRT is as expected. I'll change another node to NIO, keep them all running for the next few
days, and keep sending documents and firing queries.

All nodes are running the August 20th trunk from now on.
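
For reference, the mmap/NIO switch described above is just the directoryFactory setting in
each core's solrconfig.xml. A minimal sketch, assuming the stock Solr 4.x factory classes
(adjust the element placement to your own config):

    <!-- NIO node: read index files with plain file I/O instead of mmap -->
    <directoryFactory name="DirectoryFactory" class="solr.NIOFSDirectoryFactory"/>

    <!-- the remaining nodes keep memory-mapped I/O -->
    <!-- <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/> -->

A node restart is needed for the change to take effect.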


                
> Solr Cloud sometimes skipped peersync attempt and replicated instead due to tlog flags
> not being cleared when no updates were buffered during a previous replication.
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-3685
>                 URL: https://issues.apache.org/jira/browse/SOLR-3685
>             Project: Solr
>          Issue Type: Bug
>          Components: replication (java), SolrCloud
>    Affects Versions: 4.0-ALPHA
>         Environment: Debian GNU/Linux Squeeze 64bit
> Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43
>            Reporter: Markus Jelsma
>            Assignee: Yonik Seeley
>            Priority: Critical
>             Fix For: 4.0, 5.0
>
>         Attachments: info.log, oom-killer.log, pmap.log
>
>
> There's a serious problem with restarting nodes: old or unused index directories are not
> cleaned up, sudden replication kicks in, and Java gets killed by the OS due to excessive
> memory allocation. Since SOLR-1781 was fixed, index directories get cleaned up when a node
> is restarted cleanly; however, old or unused index directories still pile up if Solr crashes
> or is killed by the OS, which is what is happening here.
> We have a six-node 64-bit Linux test cluster with each node holding two shards. There's
> 512MB RAM available and no swap. Each index is roughly 27MB, so about 50MB per node; this
> fits easily and works fine. However, if a node is restarted, Solr will consistently crash
> because it immediately eats up all RAM. If swap is enabled, Solr will eat an additional few
> hundred MB right after start-up.
> This cannot be solved by restarting Solr; it will just crash again and leave index directories
> in place until the disk is full. The only way I can restart a node safely is to delete the
> index directories and have it replicate from another node. If I then restart the node it will
> crash almost consistently.
> I'll attach a log of one of the nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

