lucene-solr-user mailing list archives

From Modassar Ather <modather1...@gmail.com>
Subject Re: Zookeeper state and its effect on Solr cluster.
Date Tue, 28 Jul 2015 04:21:06 GMT
Erick, I am using the ZK upload process only; it is just wrapped in a
script.
The exception occurs when I do a RELOAD of the collection after the ZK
restart, once the fresh schema/solrconfig has been uploaded.
And once this exception occurs I have to restart the Solr nodes to get them
working again.
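For reference, the scripted sequence described above (upload a fresh
configset to ZooKeeper, then reload the collection) can be sketched roughly
as below. This is a hypothetical illustration, not the actual script: the
ZooKeeper hosts, collection name, configset name, and paths are all
placeholder assumptions, and the live calls are left commented out.

```shell
#!/bin/sh
# Hypothetical sketch of the upload-then-reload sequence.
# All hosts, names, and paths below are placeholders.

ZK_HOST="zk1:2181,zk2:2181,zk3:2181"
COLLECTION="mycollection"
CONFNAME="myconf"

# 1. Upload the updated schema/solrconfig to ZooKeeper
#    (Solr 5.x ships a zkcli script for this):
# server/scripts/cloud-scripts/zkcli.sh -zkhost "$ZK_HOST" \
#   -cmd upconfig -confdir /path/to/conf -confname "$CONFNAME"

# 2. Reload the collection via the Collections API:
RELOAD_URL="http://localhost:8983/solr/admin/collections?action=RELOAD&name=${COLLECTION}"
echo "$RELOAD_URL"
# curl "$RELOAD_URL"
```

The RELOAD step is the call that is timing out after 180s in the error
below.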

Thanks,
Modassar

On Tue, Jul 28, 2015 at 1:05 AM, Erick Erickson <erickerickson@gmail.com>
wrote:

> Why are you doing this? It seems like you're making it
> _much_ more difficult than necessary. Sure, automate
> all the non-solr stuff, but why not make your scripts use
> the ZK upload/download process that's well established
> and tested for maintaining the Solr specific data?
>
> Best,
> Erick
>
> On Mon, Jul 27, 2015 at 9:48 AM, Modassar Ather <modather1981@gmail.com>
> wrote:
> > Thanks for your response, Erick and Shawn.
> >
> > We have automated future Solr/ZooKeeper upgrades using scripts, so for
> > any new version of Solr/ZooKeeper we use those scripts.
> > While upgrading ZooKeeper we stop it, install it as a service, apply the
> > new distribution (currently 3.4.6), and restart. The contents of
> > zoo_data are not deleted.
> > After that the Solr configs are uploaded. During this ZooKeeper upgrade
> > the Solr nodes are not restarted.
> > After the upgrade process I have seen all the nodes active. There are
> > connection-related exceptions in the Solr log covering the period the
> > ZooKeeper was stopped.
> >
> > Our indexer uploads the configs again to accommodate any possible
> > changes in schema or solrconfig, which succeeds every time, but then
> > during the reload of the collection we intermittently get the following
> > exception:
> >
> > {"responseHeader":{"status":500,"QTime":180028},
> >  "error":{"msg":"reload the collection time out:180s",
> >   "trace":"org.apache.solr.common.SolrException: reload the collection time out:180s
> >     at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:237)
> >     at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:168)
> >     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> >     at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)
> >     at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:431)
> >     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> >     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> >     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> >     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> >     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> >     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> >     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> >     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> >     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> >     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> >     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> >     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> >     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> >     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> >     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> >     at org.eclipse.jetty.server.Server.handle(Server.java:497)
> >     at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> >     at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> >     at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> >     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> >     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> >     at java.lang.Thread.run(Thread.java:745)",
> >   "code":500}}
> >
> > Regards,
> > Modassar
> >
> >
> >
> > On Mon, Jul 27, 2015 at 8:45 PM, Shawn Heisey <apache@elyograg.org>
> wrote:
> >
> >> On 7/27/2015 6:17 AM, Modassar Ather wrote:
> >> > Kindly help me understand the following with respect to Solr version
> >> > 5.2.1.
> >> >
> >> > 1. What happens to the Solr cluster if the standalone external
> >> > ZooKeeper is stopped/restarted with some changes made in zoo_data
> >> > during the restart?
> >> >     E.g. after restarting ZooKeeper the Solr configs are reloaded with
> >> > changes. Please note that the Solr cluster is not restarted.
> >> > 2. Under what conditions of a ZooKeeper restart do the Solr nodes need
> >> > to be restarted?
> >>
> >> If zookeeper loses quorum, SolrCloud goes read-only.  Updates won't be
> >> possible until zookeeper has quorum again.  If zookeeper goes away
> >> completely, I think the result might be the same, but I do not know.
> >>
> >> For changes in zookeeper related to core configuration, simply reloading
> >> affected collections with the Collections API is enough.  For more
> >> extensive changes, especially to things like the clusterstate,
> >> restarting all Solr nodes might be required.  If you give us specifics
> >> about what you want to change, we can figure out exactly what actions
> >> are needed.
> >>
> >> Thanks,
> >> Shawn
> >>
> >>
>
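Following Shawn's point about quorum: before reloading a collection, it can
help to confirm that every ZooKeeper node answers and that the ensemble has
elected a leader. A hedged sketch using ZooKeeper's standard four-letter
commands is below; the hostnames and port are placeholder assumptions, and
the network calls are left commented out.

```shell
#!/bin/sh
# Hedged sketch: check each ZooKeeper node before a collection RELOAD.
# zk1/zk2/zk3 and port 2181 are placeholders for illustration.

checked=""
for host in zk1 zk2 zk3; do
  # "ruok" answers "imok" if the server is running;
  # "srvr" reports "Mode: leader" / "Mode: follower",
  # which confirms the ensemble has quorum.
  # echo ruok | nc "$host" 2181
  # echo srvr | nc "$host" 2181 | grep Mode
  echo "would check $host:2181"
  checked="$checked $host"
done
```

If any node fails "ruok", or no node reports "Mode: leader", a reload is
likely to hang or time out as described earlier in the thread.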
