lucene-solr-user mailing list archives

From Erick Erickson <erickerick...@gmail.com>
Subject Re: Cluster without sharding
Date Wed, 12 Jul 2017 22:22:20 GMT
<1> I would not do this. First, there are the lock issues you mentioned.
But let's say replica1 is your indexer and replicas 2 and 3 point to
the same index. When replica1 commits, how do replicas 2 and 3 know to
open a new searcher?

<2> and <3> just seem like variants of coupling Solr instances to
collections, which I'd advise against.

I'd just have a single collection with 1 shard and that shard has as
many replicas as you need, spread across as many Solr instances as you
want. CloudSolrClient takes care of load balancing with an internal
software load balancer and is aware of ZooKeeper so it can "do the
right thing". Updates get sent to all replicas and indexed locally. Do
not try to share indexes.
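The single-collection setup described above can be created with the Collections API. A minimal sketch, assuming a SolrCloud node is reachable at localhost:8983, a config set named _default has been uploaded to ZooKeeper, and "mycoll" is a placeholder collection name:

```shell
# Create one collection with a single shard and three replicas.
# SolrCloud decides which live nodes host the replicas; each replica
# maintains its own index copy, so no index files are shared.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=1&replicationFactor=3&collection.configName=_default"
```

CloudSolrClient then reads the replica locations from ZooKeeper and load-balances requests across them, so the client never needs to know which node holds which replica.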

You get all the HA/DR of SolrCloud.

If that doesn't work, _then_ worry about more complex schemes.

Best,
Erick

On Wed, Jul 12, 2017 at 12:58 PM, Mikhail Ibraheem
<mikhail.ibraheem@yahoo.com.invalid> wrote:
> Hi,
> We are using some features like collapse and joins that force us not to use
> sharding for now. Still, I am checking for possibilities for load balancing
> and high availability.
> 1- I am thinking about running many Solr instances against the same shard
> file system. This way all instances would work with the same data. I know
> there may be issues with synchronization and open searchers, but my main
> concern is: will we have lock issues, like deadlocks between instances?
> 2- Having some collections owned by each instance. For example, if I have 9
> collections and 3 Solr instances, I would divide the collections so that 3
> collections are owned by each instance.
> 3- Can I influence the order used by the SolrCloud client? I mean, if I have
> 3 instances ins1, ins2 and ins3, am I able to ask the CloudSolrClient to try
> ins1 first, then ins2 and finally ins3?
> Any suggestions are more than appreciated.
> Thanks,
> Mikhail
