zookeeper-user mailing list archives

From Steph van Schalkwyk <svanschalk...@gmail.com>
Subject Re: Configuring SolrCloud with Redundancy on Two Physical Frames
Date Tue, 01 May 2018 17:00:54 GMT
Adam,
More information here:
https://stackoverflow.com/questions/24694296/using-zookeeper-with-solr-but-only-have-2-servers
Unless ZK can be instantly "reconfigured" to consider the remaining 3 as a
full ensemble, I don't see an option.
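For reference, ZK 3.5+ does support dynamic reconfiguration, so in principle
the dead servers can be dropped from the ensemble. A rough sketch, assuming
server ids 4-6 sat on the failed frame and reconfigEnabled=true is set in
zoo.cfg (3.5.3+):

    # from zkCli.sh, connected to a surviving server
    reconfig -remove 4,5,6

The catch is that the reconfig command is itself committed through the
current quorum, so once half the ensemble is gone there is no quorum left to
commit the removal. It can shrink a healthy ensemble ahead of time, but it
can't rescue one after the frame has already failed.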
Steph


+1.314.452.2896 (Tel/SMS)

On Tue, May 1, 2018 at 11:53 AM, Adam Blank <adam.blank@gmail.com> wrote:

> Thanks for your replies, Steph.  Adding back the rest of the mailing list.
> If anyone can shed some light on my predicament that would be much
> appreciated.
>
> Adam
>
> ---------- Forwarded message ----------
> From: Steph van Schalkwyk <svanschalkwyk@gmail.com>
> Date: Tue, May 1, 2018 at 12:44 PM
> Subject: Re: Configuring SolrCloud with Redundancy on Two Physical Frames
> To: Adam Blank <adam.blank@gmail.com>
>
>
> Maybe one of the ZK gurus could chime in? I could test it but I don't have
> the time right now.
>
>
> +1.314.452.2896 (Tel/SMS)
>
> On Tue, May 1, 2018 at 11:42 AM, Adam Blank <adam.blank@gmail.com> wrote:
>
> > I think I would still run into the same issue, since if one frame goes
> > down I will only have 50% of ZK instances still up, and my understanding
> > is that a ZK cluster requires a majority to be up in order to operate.
> > So if I have 6 total, I'd need 4 to be up.
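> > To spell out the arithmetic: an ensemble of n servers needs
> > floor(n/2) + 1 of them up to have quorum, so:
> >
> >     n = 6 (3 per frame): quorum = 4; losing a frame leaves 3 -> no quorum
> >     n = 5 (3 + 2 split): quorum = 3; losing the 3-server frame leaves 2 -> no quorum
> >
> > With only two frames, one frame always holds at least half the ensemble,
> > so that frame's failure always drops the survivors below quorum.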
> >
> >
> > On Tue, May 1, 2018, 12:34 PM Steph van Schalkwyk <svanschalkwyk@gmail.com> wrote:
> >
> >> Only thing I can think of is to run three ZK instances on each hardware
> >> instance. That way if one fails you still have three running on the
> >> other hardware instance.
> >> Also, when you set up the SOLR instances, make sure you're sharding
> >> across hardware instances, for example two shards per collection on
> >> instance 0 and two on the other.
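> >> A rough zoo.cfg sketch of that layout (frame1/frame2, the ports, and
> >> the paths are placeholders; each of the six instances needs its own
> >> dataDir, its own myid file in that dataDir, and a distinct clientPort):
> >>
> >>     # server list, identical in all six configs
> >>     server.1=frame1:2888:3888
> >>     server.2=frame1:2889:3889
> >>     server.3=frame1:2890:3890
> >>     server.4=frame2:2888:3888
> >>     server.5=frame2:2889:3889
> >>     server.6=frame2:2890:3890
> >>
> >>     # per-instance settings, e.g. for instance 1 on frame1
> >>     dataDir=/var/zookeeper/1
> >>     clientPort=2181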
> >> S
> >>
> >> +1.314.452.2896 (Tel/SMS)
> >>
> >> On Tue, May 1, 2018 at 10:17 AM, Adam Blank <adam.blank@gmail.com> wrote:
> >>
> >>> Hi Steph,
> >>>
> >>> I should have provided some more info.  I am running on AIX.  If I'm
> >>> understanding your comment correctly, the issue I'm having isn't with
> >>> being able to run multiple ZK instances on a single server.  The issue
> >>> is with setting up ZK and SOLR in a way that they can survive the
> >>> failure of either frame.
> >>> I wonder if setting up a virtual IP for the ZK instances and having
> >>> SOLR connect to the VIP would work, if all ZK instances share the same
> >>> data directory on a shared drive?  I was hoping someone had encountered
> >>> this situation before, but if not, I can see if that idea would work.
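> >>> For what it's worth, the Solr side of that would be simple: point
> >>> ZK_HOST in bin/solr.in.sh at the VIP (zk-vip and the /solr chroot here
> >>> are placeholders):
> >>>
> >>>     ZK_HOST="zk-vip:2181/solr"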
> >>>
> >>> Thanks,
> >>> Adam
> >>>
> >>> On Tue, May 1, 2018 at 10:59 AM, Steph van Schalkwyk <svanschalkwyk@gmail.com> wrote:
> >>>
> >>>> Adam, is it possible to virtualize in any way?
> >>>> As for single physical instances, I have been running three instances
> >>>> of ZK on one VM quite comfortably. This is only for dev/testing, though.
> >>>> Regards
> >>>> Steph
> >>>>
> >>>>
> >>>> +1.314.452.2896 (Tel/SMS)
> >>>>
> >>>> On Tue, May 1, 2018 at 9:55 AM, Adam Blank <adam.blank@gmail.com> wrote:
> >>>>
> >>>> > Hello,
> >>>> >
> >>>> > I would like to have a high-availability/redundant installation of
> >>>> > Zookeeper running in my production environment. The problem is that
> >>>> > I only have 2 physical frames available, so that rules out
> >>>> > configuring a Zookeeper cluster/ensemble, since I'd only have
> >>>> > redundancy if the frame with the minority of servers goes down.
> >>>> > What is the best practice in this situation? Is it possible to have
> >>>> > a separate standalone install running on each frame connected to
> >>>> > the same set of SOLR nodes, or to use one server as primary and one
> >>>> > as backup?
> >>>> >
> >>>> > Thank you,
> >>>> > Adam
> >>>> >
> >>>>
> >>>
> >>>
> >>
>
