lucene-solr-user mailing list archives

From Shawn Heisey <>
Subject Re: stateless solr ?
Date Tue, 05 Jul 2016 13:54:41 GMT
On 7/4/2016 7:46 AM, Lorenzo Fundaró wrote:
> I am trying to run Solr on my infrastructure using docker containers
> and Mesos. My problem is that I don't have a shared filesystem. I have
> a cluster of 3 shards and 3 replicas (9 nodes in total) so if I
> distribute well my nodes I always have 2 fallbacks of my data for
> every shard. Every solr node will store the index in its internal
> docker filesystem. My problem is that if I want to relocate a certain
> node (maybe an automatic relocation because of a hardware failure), I
> need to create the core manually in the new node because it's
> expecting to find the file in the data folder and of
> course it won't because the storage is ephemeral. Is there a way to
> make a new node join the cluster with no manual intervention ? 

The things you're asking about sound like SolrCloud features.  The rest
of this message assumes that you're running in cloud mode.  If you're
not, then we may need to start over.

When you start a new node, it automatically joins the cluster described
by the Zookeeper database that you point it to.
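As a minimal sketch, joining the cluster is just a matter of starting Solr in cloud mode with the ZooKeeper connection string (the hostnames and the /solr chroot below are placeholders for your own ensemble):

```shell
# Start a Solr node in cloud mode, pointing it at an existing ZooKeeper
# ensemble.  The node registers itself in the cluster automatically.
# zk1/zk2/zk3 and the /solr chroot are hypothetical -- substitute yours.
bin/solr start -cloud -z zk1:2181,zk2:2181,zk3:2181/solr
```

Because the cluster state lives in ZooKeeper, the new node needs no local configuration beyond that connection string.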

SolrCloud will **NOT** automatically create replicas when a new node
joins the cluster.  There's no way for SolrCloud to know what you
actually want to use that new node for, so anything that it did
automatically might be completely the wrong thing.

Once you add a new node, you can replicate existing data to it with the
ADDREPLICA action on the Collections API.
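A sketch of what that call can look like, assuming a hypothetical collection name and node address (adjust collection, shard, and node for your cluster):

```shell
# Ask SolrCloud to build a new replica of shard1 on the new node.
# "mycollection" and "newhost:8983_solr" are placeholders.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=newhost:8983_solr"
```

The new replica will sync its index from the shard leader, so nothing needs to be copied by hand.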

If the original problem was a down node, you might also want to use the
DELETEREPLICA action to remove any replicas that lived on the lost node
and are now marked down.
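A corresponding sketch for the cleanup call, again with placeholder names (look up the real replica name, e.g. via the CLUSTERSTATUS action or the admin UI):

```shell
# Remove a dead replica's entry from the cluster state.
# "mycollection", "shard1", and "core_node3" are placeholders.
curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node3"
```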

Creating cores manually in your situation is not advisable.  The
CoreAdmin API should not be used when you're running in cloud mode.
