lucene-solr-user mailing list archives

From Otis Gospodnetic <>
Subject Re: Slave/Master swap
Date Wed, 20 Jun 2007 21:54:38 GMT
Right, that SAN with 2 Masters sounds good.  Lucky you with your lonely Master!  Where I work,
hw failures are pretty common.


----- Original Message ----
From: Chris Hostetter <>
Sent: Wednesday, June 20, 2007 11:43:02 PM
Subject: Re: Slave/Master swap

: The more expensive solution might be to have Solr instances run on top
: of a SAN and then one could really have multiple Master instances, one
: in stand-by mode and ready to be started as the new Master if the

i *believe* that if you have two solr instances pointed at the same
physical data directory (SAN or otherwise) but you only send update/commit
commands to one, they won't interfere with each other.  so conceivably you
can have both masters up and running, and your "failover" approach if the
primary goes down is just to start sending updates to the secondary.
you'll lose any unflushed changes that the primary had in memory, but
those are lost anyway.

don't trust me on that though, test it out yourself.
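[Editor's note: the failover approach described above can be sketched roughly as follows. This is a minimal illustration of the idea, not Solr code; the `senders` callables and any host names are hypothetical stand-ins for HTTP POSTs to each master's update handler.]

```python
def send_with_failover(update, senders):
    """Try each sender in order (e.g. an HTTP POST to the primary
    master, then to the secondary) until one succeeds.

    Each sender is a callable taking the update payload; a sender
    signals "this master is down" by raising OSError (connection
    refused, timeout, etc.).  Raises RuntimeError if every master
    is unreachable.
    """
    last_error = None
    for send in senders:
        try:
            return send(update)
        except OSError as err:
            # This master looks dead; fall through to the next one.
            last_error = err
    raise RuntimeError("all masters unreachable") from last_error


# Hypothetical wiring (not executed here) -- one sender per master,
# each posting to that instance's /solr/update endpoint:
#   senders = [post_to("http://master1:8983/solr/update"),
#              post_to("http://master2:8983/solr/update")]
```

Per the caveat above, treat this only as a sketch of the routing logic; whether two instances can really share one data directory safely is the part to test yourself.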

: curiosity, how does CNet handle Master redundancy?

I don't know how much i'm allowed to talk about our processes and systems
for redundancy, disaster recovery, failover, etc... but i don't think
i'll upset anyone if i tell you: as far as i know, we've never needed to
take advantage of them with a solr master.  ie: we've never had a solr
master crash so hard we had to bring up another one in its place ...
knock on wood.  (that probably has more to do with having good hardware
than anything else though.)

(and no, i honestly don't know what hardware we use ... i don't bother
paying attention, i let the hardware guys worry about that)

