lucene-solr-user mailing list archives

From Bernd Fehling <>
Subject Re: question about updates to shard leaders only
Date Tue, 15 May 2018 06:12:07 GMT
OK, I now have CloudSolrClient with SolrJ running, but it seems
a bit slower than ConcurrentUpdateSolrClient.
This was not expected.
The logs show that CloudSolrClient sends the docs only to the leaders.

So the only advantage of CloudSolrClient is that it is "Cloud aware"?

With ConcurrentUpdateSolrClient I get about 1600 docs/sec for loading.
With CloudSolrClient I get only about 1200 docs/sec.

The system monitoring shows that with CloudSolrClient all nodes and cores
are under heavy load. I thought that only the leaders would be under load
until a commit, and would then replicate to the other replicas,
so that the non-leader replicas would have capacity to answer search requests.

I think I still don't see the advantage of CloudSolrClient.
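
For reference, the loading code now looks roughly like this (a minimal sketch against the SolrJ 7.x Builder API; the Zookeeper hosts, collection name, field, and batch size are placeholders, not my real setup):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CloudLoader {
    public static void main(String[] args) throws Exception {
        // Placeholder Zookeeper ensemble; no chroot
        List<String> zkHosts = Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181");
        try (CloudSolrClient client =
                 new CloudSolrClient.Builder(zkHosts, Optional.empty())
                     .sendUpdatesOnlyToShardLeaders()
                     .build()) {
            client.setDefaultCollection("mycollection");

            List<SolrInputDocument> batch = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", String.valueOf(i));
                batch.add(doc);
                if (batch.size() == 1000) {
                    client.add(batch); // split per shard, sent to each leader
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                client.add(batch);
            }
            client.commit();
        }
    }
}
```
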


On 09.05.2018 at 19:15, Erick Erickson wrote:
> You may not need to deal with any of this.
> The default CloudSolrClient call creates a new LBHttpSolrClient for
> you. So unless you're doing something custom with any LBHttpSolrClient
> you create, you don't need to create one yourself.
> Second, the default for CloudSolrClient.add() is to split the list of
> documents you provide into sub-lists consisting of the docs destined
> for a particular shard and send each sub-list to that shard's leader.
> Does the default not work for you?
> Best,
> Erick
> On Wed, May 9, 2018 at 2:54 AM, Bernd Fehling
> <> wrote:
>> Hi list,
>> while going from single core master/slave to cloud multi core/node
>> with leader/replica I want to change my SolrJ loading, because
>> ConcurrentUpdateSolrClient isn't cloud aware and has performance
>> impacts.
>> I want to use CloudSolrClient with LBHttpSolrClient and updates
>> should only go to shard leaders.
>> Question, what is the difference between sendUpdatesOnlyToShardLeaders
>> and sendDirectUpdatesToShardLeadersOnly?
>> Regards,
>> Bernd
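
On the question of the two Builder options: as I read the SolrJ javadoc, sendUpdatesOnlyToShardLeaders makes the client route each update to the current leader of its shard, while sendDirectUpdatesToShardLeadersOnly additionally makes an update fail when a shard's leader is unreachable instead of falling back to another replica. Treat this reading as an assumption; the sketch below (placeholder Zookeeper hosts) just shows where the two options sit:

```java
// Hypothetical builder setup showing both options side by side
CloudSolrClient client = new CloudSolrClient.Builder(
        Arrays.asList("zk1:2181", "zk2:2181"), Optional.empty())
    .sendUpdatesOnlyToShardLeaders()        // route each update to its shard leader
    .sendDirectUpdatesToShardLeadersOnly()  // fail if a leader is down; no fallback
    .build();
```
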
