lucene-solr-user mailing list archives

From "James liu" <liuping.ja...@gmail.com>
Subject Re: Slave/Master swap
Date Thu, 21 Jun 2007 01:25:00 GMT
If just one master or one slave server fails, i think you can probably fall back
to using the master index server directly.

Driving the shell scripts from a program is easy for me; i use php and shell_exec.
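
For example, something like this is all it takes (the host name, path and the
failover script below are made up, just to show the idea):

<?php
// ping the current master; if it is down, run a local shell script that
// repoints the slaves at the standby master (promote_standby.sh is a
// hypothetical script name, not part of the Solr distribution).
$ping = 'http://master1.example.com:8983/solr/admin/ping';

// file_get_contents() returns false when the master is unreachable
$response = @file_get_contents($ping);

if ($response === false) {
    // the script would rewrite scripts.conf on each slave and then kick off
    // snappuller/snapinstaller against the standby master
    $output = shell_exec('/opt/solr/bin/promote_standby.sh 2>&1');
    echo "master down, ran failover script:\n$output";
} else {
    echo "master is up, nothing to do\n";
}
?>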


2007/6/21, Otis Gospodnetic <otis_gospodnetic@yahoo.com>:
>
> Right, that SAN with 2 Masters sounds good.  Lucky you with your lonely
> Master!  Where I work hw failures are pretty common.
>
> Otis
> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
> Simpy -- http://www.simpy.com/  -  Tag  -  Search  -  Share
>
> ----- Original Message ----
> From: Chris Hostetter <hossman_lucene@fucit.org>
> To: solr-user@lucene.apache.org
> Sent: Wednesday, June 20, 2007 11:43:02 PM
> Subject: Re: Slave/Master swap
>
>
>
> : The more expensive solution might be to have Solr instances run on top
> : of a SAN and then one could really have multiple Master instances, one
> : in stand-by mode and ready to be started as the new Master if the
>
> i *believe* that if you have two solr instances pointed at the same
> physical data directory (SAN or otherwise) but you only send update/commit
> commands to one, they won't interfere with each other.  so conceivably you
> can have both masters up and running, and your "failover" approach if the
> primary goes down is just to start sending updates to the secondary.
> you'll lose any unflushed changes that the primary had in memory, but
> those are lost anyway.
>
> don't trust me on that though, test it out yourself.
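
(just to illustrate that, the client-side failover could be as simple as the
sketch below -- the hostnames, port and doc fields are made up, and it assumes
both masters share the same SAN data directory and only one of them ever
receives updates at a time:)

<?php
// post an <add> to the primary master; if it does not answer, fall back to
// the standby master that shares the same data directory on the SAN
$masters = array(
    'http://master1.example.com:8983/solr/update',   // primary (made-up host)
    'http://master2.example.com:8983/solr/update',   // standby (made-up host)
);

$doc = '<add><doc><field name="id">42</field>'
     . '<field name="text">failover test</field></doc></add>';

foreach ($masters as $url) {
    $ctx = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: text/xml\r\n",
        'content' => $doc,
        'timeout' => 5,
    )));
    // file_get_contents() returns false if the master is down or rejects the post
    if (@file_get_contents($url, false, $ctx) !== false) {
        echo "update accepted by $url\n";
        break;
    }
    echo "$url did not respond, trying the standby\n";
}
?>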
>
> : curiosity, how does CNet handle Master redundancy?
>
> I don't know how much i'm allowed to talk about our processes and systems
> for redundancy, disaster recovery, failover, etc... but i don't think
> i'll upset anyone if i tell you: as far as i know, we've never needed to
> take advantage of them with a solr master.  ie: we've never had a solr
> master crash so hard we had to bring up another one in its place ...
> knock on wood.  (that probably has more to do with having good hardware
> than anything else though).
>
> (and no, i honestly don't know what hardware we use ... i don't bother
> paying attention, i let the hardware guys worry about that)
>
>
> -Hoss
>


-- 
regards
jl
