lucene-solr-user mailing list archives

From "Vadim Ivanov" <>
Subject RE: What's the deal with dataimporthandler overwriting indexes?
Date Tue, 12 Feb 2019 16:42:47 GMT
If clean=true, the index will be replaced completely by the new import. That is how it is
supposed to work.
If you don't want to preemptively delete your index, set &clean=false, and set &commit=true
instead of &optimize=true.
Are you sure about optimize? Do you really need it? It is usually very costly.
So, I'd try:
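Something along these lines (a sketch only: the host, port, and collection name `mycollection` are placeholders, not from your setup; the parameters are the ones discussed above):

```
http://localhost:8983/solr/mycollection/dataimport?command=full-import&clean=false&commit=true
```

With clean=false the existing documents stay in place and the import only adds or updates documents by uniqueKey, so the old index is never emptied first.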

If nevertheless nothing is imported, please check the logs.

> -----Original Message-----
> From: Joakim Hansson []
> Sent: Tuesday, February 12, 2019 12:47 PM
> To:
> Subject: What's the deal with dataimporthandler overwriting indexes?
> Hi!
> We are currently upgrading from solr 6.2 master slave setup to solr 7.6
> running solrcloud.
> I don't know if I've missed something really trivial, but every time I start
> a full import (dataimport?command=full-import&clean=true&optimize=true)
> the old index gets overwritten by the new import.
> In 6.2 this wasn't really a problem, since I could disable replication on
> the master via the API and re-enable it once the import was completed.
> With 7.6 and solrcloud we use NRT-shards and replicas since those are the
> only ones that support rule-based replica placement and whenever I start a
> new import the old index is overwritten all over the solrcloud cluster.
> I have tried changing to clean=false, but that makes the import finish
> without adding any docs.
> Doesn't matter if I use soft or hard commits.
> I don't get the logic in this. Why would you ever want to delete an
> existing index before there is a new one in place? What is it I'm missing
> here?
> Please enlighten me.
