lucene-solr-user mailing list archives

From wwang525 <wwang...@gmail.com>
Subject Re: Planning Solr migration to production: clean and autoSoftCommit
Date Mon, 13 Jul 2015 17:48:39 GMT
Hi Erick,

That status request shows whether the Solr instance is "busy" or "idle", so it
looks like a workable way to check whether the indexing process has completed
(idle) or not (busy).
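As a minimal sketch of that check: the DataImportHandler status command
(`/dataimport?command=status&wt=json`) returns a JSON body whose top-level
"status" field is "idle" or "busy". The host/core names and the abbreviated
sample response below are illustrative, not taken from this thread.

```python
import json

def is_indexing_idle(status_body: str) -> bool:
    """Parse a DataImportHandler status response and report whether
    the import has finished (status == "idle")."""
    doc = json.loads(status_body)
    return doc.get("status") == "idle"

# Abbreviated example of a response body from e.g.
#   http://master:8983/solr/mycore/dataimport?command=status&wt=json
sample_idle = '{"responseHeader":{"status":0},"status":"idle"}'
sample_busy = '{"responseHeader":{"status":0},"status":"busy"}'

print(is_indexing_idle(sample_idle))  # True
print(is_indexing_idle(sample_busy))  # False
```

A deployment script could poll this in a loop and only trigger replication once
the status flips back to idle.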

Now, I have some concerns about the solution of not using the default polling
mechanism from the slave instances to the master instance.

The load test showed that the initial batches of requests had much longer
response times than later batches after the Solr server started up. The
performance gradually improved, presumably because the caches were being
warmed up.

I understand that the indexing process commits the changes and also autowarms
queries from the existing cache. In that case, the indexing (master) Solr
instance will be in good shape to serve requests after the indexing process
completes.
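For reference, that autowarming behavior is driven by cache configuration in
solrconfig.xml; a sketch with illustrative sizes (the numbers below are
assumptions, not values from this thread):

```xml
<!-- solrconfig.xml: on commit, the new searcher replays the top
     autowarmCount entries from the old queryResultCache -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="128"/>

<!-- Optional explicit warming queries run against every new searcher -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="rows">10</str></lst>
  </arr>
</listener>
```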

The question:

When the slave instances poll the indexing instance (master), do they also
autowarm queries from their existing caches? If they do, then the polling
mechanism also keeps the slave instances ready to serve requests (more
performant) at any time.
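For context, the polling being discussed is configured on the slave side of
the ReplicationHandler; a sketch (the master URL and interval are illustrative
assumptions):

```xml
<!-- solrconfig.xml on a slave: poll the master for new index versions -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8983/solr/mycore/replication</str>
    <!-- HH:mm:ss -->
    <str name="pollInterval">00:01:00</str>
  </lst>
</requestHandler>
```

After a successful poll-and-fetch, the slave opens a new searcher, so whether
it warms up depends on the same cache autowarming settings as on the master.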

When we talk about the "forced replication" solution, are we
pushing/overwriting all the old index files with the new index files? Do we
need to restart the Solr instance? And will the slave instances be warmed up
in any way?

If there are too many issues with "forced replication", I might as well work
out the "incremental indexing" option.

Thanks



--
