lucene-solr-user mailing list archives

From Angie Rabelero <maria.rabel...@oracle.com>
Subject Re: Solr on HDFS
Date Thu, 01 Aug 2019 22:15:33 GMT
I don’t think you’re using Cloudera or Ambari, but Ambari has an option to delete the locks.
This seems more of a configuration/architecture issue than a reliability issue. You may want
to spin up an alias while you bring the affected collection down, clear the locks and directories,
and recreate and reindex it, while you work through your other issues.
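A rough sketch of that alias workaround, assuming hypothetical collection and alias names (`search`, `mycol`, `mycol_standby`); the CREATEALIAS action is part of Solr's standard Collections API:

```shell
# Point queries at a standby collection via an alias while the affected
# collection is rebuilt. Names and host are illustrative examples.
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=search&collections=mycol_standby"

# ...bring the affected nodes down, clear the locks and index directories,
# then recreate and reindex the original collection...

# Swap the alias back once the rebuilt collection is healthy.
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=search&collections=mycol"
```

Clients querying the alias never see the rebuild; only the alias target changes.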

On Aug 1, 2019, at 16:40, Joe Obernberger <joseph.obernberger@gmail.com> wrote:

Been using Solr on HDFS for a while now, and I'm seeing an issue with redundancy/reliability.
If a server goes down, when it comes back up it will never recover because of the lock files
left behind in HDFS. That Solr node needs to be brought down manually, the lock files deleted,
and the node brought back up. At that point, it appears to copy all the data for its replicas.
If the index is large and new data is being indexed, in some cases it never recovers; the
replication just retries over and over.
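The manual cleanup described here typically amounts to something like the following; the HDFS paths are assumptions and depend on your solr.hdfs.home setting and collection layout:

```shell
# Stop the affected Solr node first, then look for stale write locks
# left behind in the index directories (path is illustrative).
hdfs dfs -ls -R /solr | grep write.lock

# Remove the stale lock for the affected core (example path), then
# restart the Solr node so it can re-acquire the lock cleanly.
hdfs dfs -rm "/solr/mycollection/core_node1/data/index/write.lock"
```
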

How can we make a reliable Solr Cloud cluster when using HDFS that can handle servers coming
and going?

Thank you!

-Joe


