cloudstack-dev mailing list archives

From Marcus <shadow...@gmail.com>
Subject Re: HA issues
Date Fri, 16 Feb 2018 17:27:07 GMT
From your other emails it sounds as though you do not have IPMI configured
or host HA enabled, correct? In that case, the correct thing to do is
nothing. If CloudStack cannot guarantee the VM state (as is the case with
an unreachable hypervisor), it should do nothing, for fear of causing
split brain (the VM running on two hosts) and corrupting the VM's disk.

Clustering and fencing are a tricky proposition. When CloudStack (or any
other cluster manager) is not configured to, or cannot, guarantee state,
things will simply lock up; in this case, the HA VM on your broken
hypervisor will not run elsewhere. This has long been the case with
CloudStack: HA would only start a VM after the original hypervisor
agent came back and reported that no VM was running.

The new feature, from what I gather, simply adds the possibility of
CloudStack being able to reach out and shut down the hypervisor to
guarantee state. At that point it can start the VM elsewhere. If something
fails in that process (IPMI unreachable, for example, or bad credentials),
you're still going to be stuck with a VM not coming back.
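
The shutdown step described here is essentially an IPMI chassis power-off.
As a rough sketch (not CloudStack's actual code; the BMC address and
credentials are placeholders, and the function only prints the command,
since running it needs real BMC hardware):

```shell
# Sketch of the IPMI fencing call an HA manager would issue. We echo the
# ipmitool invocation instead of executing it, since it requires a real BMC.
fence_host() {
  local bmc_ip=$1
  echo ipmitool -I lanplus -H "$bmc_ip" -U ADMIN -P secret chassis power off
}
fence_host 192.0.2.10
```

Only once that power-off is confirmed can the VM safely be started on
another host.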

It's the nature of the thing. I'd be wary of any HA solution that does not
reach out and guarantee state via host or storage fencing before starting a
VM elsewhere, as it would be making assumptions. It's entirely possible for
a VM to be unreachable or unable to access its storage for a short while,
for a new instance of the VM to be started elsewhere, and then for the
original VM to come back.
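
The safe-but-locked-up behaviour described above boils down to a small
decision table; a sketch of that logic (my own illustration, not
CloudStack code):

```shell
# Decide what an HA manager can safely do, based on whether the hypervisor
# agent is reachable and whether fencing (e.g. IPMI) is available.
decide_ha_action() {
  local agent_reachable=$1 fencing_available=$2
  if [ "$agent_reachable" = yes ]; then
    echo "nothing-to-do"         # host is fine, VM state is known
  elif [ "$fencing_available" = yes ]; then
    echo "fence-then-restart"    # power off the host, then start VM elsewhere
  else
    echo "wait-and-alert"        # cannot guarantee state: do nothing
  fi
}
decide_ha_action no no   # the situation in this thread: prints wait-and-alert
```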

On Wed, Jan 17, 2018 at 9:02 AM Nux! <nux@li.nux.ro> wrote:

> Hi Rohit,
>
> I've reinstalled and tested. Still no go with VM HA.
>
> What I did was kernel panic that particular HV ("echo c >
> /proc/sysrq-trigger" <- this is a proper way to simulate a crash).
> What happened next was that the HV got marked as "Alert", the VM on it
> stayed marked as "Running" the whole time, and it was not migrated to
> another HV.
> Once the panicked HV had booted back, the VM rebooted and became
> available.
>
> I'm running CentOS 7 for mgmt + HVs, with NFS primary and secondary
> storage.
> The VM has an HA-enabled service offering.
> Host HA and OOBM configuration were not touched.
>
> Full log http://tmp.nux.ro/W3s-management-server.log
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> ----- Original Message -----
> > From: "Rohit Yadav" <rohit.yadav@shapeblue.com>
> > To: "dev" <dev@cloudstack.apache.org>
> > Sent: Wednesday, 17 January, 2018 12:13:33
> > Subject: Re: HA issues
>
> > I performed VM HA sanity checks and was not able to reproduce any
> > regression against two KVM CentOS 7 hosts in a cluster.
> >
> >
> > Without the "Host HA" feature, I deployed a few HA-enabled VMs on KVM
> > host2 and killed it (powered it off). After a few minutes of CloudStack
> > attempting to find out why the host (KVM agent) timed out, CloudStack
> > kicked off investigators, which eventually led the KVM fencers to act;
> > the VM HA job then kicked in to start those few VMs on host1, and KVM
> > host2 was put into the "Down" state.
> >
> >
> > - Rohit
> >
> > <https://cloudstack.apache.org>
> >
> >
> >
> > ________________________________
> >
> > rohit.yadav@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> > From: Rohit Yadav
> > Sent: Wednesday, January 17, 2018 2:39:19 PM
> > To: dev
> > Subject: Re: HA issues
> >
> >
> > Hi Lucian,
> >
> >
> > The "Host HA" feature is entirely different from VM HA; however, they
> > may work in tandem, so please don't use the terms interchangeably, as
> > that may cause the community to believe a regression has been
> > introduced.
> >
> >
> > The "Host HA" feature currently ships with only a "Host HA" provider
> > for KVM that is strictly tied to out-of-band management (IPMI for
> > fencing, i.e. power off, and recovery, i.e. reboot) and NFS (as primary
> > storage). (We also have a provider for the simulator, but that's for
> > coverage/testing purposes.)
> >
> >
> > Therefore, "Host HA" for KVM (+NFS) currently works only when OOBM is
> > enabled. The framework allows interested parties to write their own HA
> > providers for a hypervisor that can use a different strategy/mechanism
> > for fencing/recovery of hosts (including writing a non-IPMI based OOBM
> > plugin) and a host/disk activity checker that is non-NFS based.
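> >
> > For reference, enabling this looks roughly like the following from
> > cloudmonkey (a sketch of the OOBM/Host HA API calls; the host id, BMC
> > address and credentials are placeholders):
> >
> > ```shell
> > configureOutOfBandManagement hostid=<host-uuid> driver=ipmitool \
> >   address=192.0.2.10 port=623 username=ADMIN password=secret
> > enableOutOfBandManagementForHost hostid=<host-uuid>
> > configureHAForHost hostid=<host-uuid> provider=kvmhaprovider
> > enableHAForHost hostid=<host-uuid>
> > ```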
> >
> >
> > The "Host HA" feature ships disabled by default and does not interfere
> > with VM HA. However, when it is enabled and configured correctly, it is
> > a known limitation that when it is unable to successfully perform
> > recovery or fencing tasks it may not trigger VM HA. We can discuss how
> > to handle such cases (thoughts?). "Host HA" will try a couple of times
> > to recover the host and, failing that, will eventually trigger a host
> > fencing task. If it is unable to fence the host, it will attempt to do
> > so indefinitely (the host will be stuck in the fencing state in the
> > cloud.ha_config table, for example) and alerts will be sent to the
> > admin, who can intervene manually in such situations (if you have
> > email/SMTP alerts enabled, you should see alert emails).
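> >
> > (A quick hedged way to spot hosts stuck that way - the table name
> > cloud.ha_config is from above, but the column names here are my
> > assumption:
> >
> > ```shell
> > # Hypothetical check against the management server database; adjust
> > # credentials and column names to your deployment.
> > mysql -u cloud -p -e "SELECT resource_id, ha_state FROM cloud.ha_config"
> > ```
> > )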
> >
> >
> > We can discuss improvements and a workaround for the case you've hit;
> > thanks for sharing.
> >
> >
> > - Rohit
> >
> > ________________________________
> > From: Nux! <nux@li.nux.ro>
> > Sent: Tuesday, January 16, 2018 10:42:35 PM
> > To: dev
> > Subject: Re: HA issues
> >
> > Ok, reinstalled and re-tested.
> >
> > What I've learned:
> >
> > - HA only works now if OOBM is configured; the old way of doing HA no
> > longer applies. This can be good and bad, as not everyone has IPMI.
> >
> > - HA only works if IPMI is reachable. I pulled the cord on a HV and HA
> > failed to do its thing, leaving me with a HV down along with all the
> > VMs running there. That's bad.
> > I've opened this ticket for it:
> > https://issues.apache.org/jira/browse/CLOUDSTACK-10234
> >
> > Let me know if you need any extra info or stuff to test.
> >
> > Regards,
> > Lucian
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > ----- Original Message -----
> >> From: "Nux!" <nux@li.nux.ro>
> >> To: "dev" <dev@cloudstack.apache.org>
> >> Sent: Tuesday, 16 January, 2018 11:35:58
> >> Subject: Re: HA issues
> >
> >> I'll reinstall my setup and try again, just to be sure I'm working on
> >> a clean slate.
> >>
> >> --
> >> Sent from the Delta quadrant using Borg technology!
> >>
> >> Nux!
> >> www.nux.ro
> >>
> >> ----- Original Message -----
> >>> From: "Rohit Yadav" <rohit.yadav@shapeblue.com>
> >>> To: "dev" <dev@cloudstack.apache.org>
> >>> Sent: Tuesday, 16 January, 2018 11:29:51
> >>> Subject: Re: HA issues
> >>
> >>> Hi Lucian,
> >>>
> >>>
> >>> If you're talking about the new Host HA feature (with KVM+NFS+IPMI),
> >>> please refer to the following docs:
> >>>
> >>> http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/hosts.html#out-of-band-management
> >>>
> >>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Host+HA
> >>>
> >>>
> >>> We'll need you to look at the logs and perhaps create a JIRA ticket
> >>> with the logs and details. If you saw an IPMI-based reboot, then Host
> >>> HA did indeed try to recover (i.e. reboot) the host; once Host HA has
> >>> done its work, it schedules VM HA as soon as the recovery operation
> >>> succeeds (we have simulator- and KVM-based Marvin tests for such
> >>> scenarios).
> >>>
> >>>
> >>> Can you see it making an attempt to schedule VM HA in the logs, or
> >>> any failure?
> >>>
> >>>
> >>> - Rohit
> >>>
> >>>
> >>>
> >>>
> >>> ________________________________
> >>> From: Nux! <nux@li.nux.ro>
> >>> Sent: Tuesday, January 16, 2018 12:47:56 AM
> >>> To: dev
> >>> Subject: [4.11] HA issues
> >>>
> >>> Hi,
> >>>
> >>> I see there's a new HA engine for KVM with IPMI support, which is
> >>> really nice; however, it seems hit and miss.
> >>> I created an instance with an HA offering and kernel panicked one of
> >>> the hypervisors - after a while the server was rebooted, probably via
> >>> IPMI, but the instance never moved to a running hypervisor, and even
> >>> after the original hypervisor came back it was still left in the
> >>> Stopped state.
> >>> Are there any extra things I need to set up to get proper HA?
> >>>
> >>> Regards,
> >>> Lucian
> >>>
> >>> --
> >>> Sent from the Delta quadrant using Borg technology!
> >>>
> >>> Nux!
> >>> www.nux.ro
> >>>
>
