cloudstack-issues mailing list archives

From "Marcus Sorensen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CLOUDSTACK-8943) KVM HA is broken, let's fix it
Date Thu, 15 Oct 2015 21:15:05 GMT

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959629#comment-14959629
] 

Marcus Sorensen commented on CLOUDSTACK-8943:
---------------------------------------------

One thing I'd point out: in the case of multiple primary storages, it's probably not
wrong to remove a host that can't reach one of them. If the admin expects HA to work,
it needs to adhere to the least common denominator, rather than only killing the host
when it can't run any VMs or reach any storage at all.
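The "least common denominator" policy above can be illustrated with a minimal Python sketch. The function name, heartbeat representation, and timeout are hypothetical for illustration, not actual CloudStack agent code:

```python
def should_fence_host(heartbeats, now, max_age_secs=60):
    """Decide whether to fence (kill/remove) a host.

    Implements the strict policy: fence if the host's heartbeat is stale
    on ANY primary storage pool, rather than only when it is stale on
    every pool. (Hypothetical sketch, not CloudStack's real check.)

    heartbeats: dict mapping pool name -> last heartbeat timestamp in
                seconds, or None if the pool could not be read at all.
    now: current time in seconds.
    """
    for pool, last_seen in heartbeats.items():
        # One unreachable or stale pool is enough to fence the host.
        if last_seen is None or (now - last_seen) > max_age_secs:
            return True
    return False


# All pools fresh -> host stays up; one stale pool -> host is fenced.
print(should_fence_host({"nfs-primary": 100, "ceph-primary": 95}, now=120))  # False
print(should_fence_host({"nfs-primary": 10, "ceph-primary": 95}, now=120))   # True
```

The alternative (lenient) policy discussed in the issue would only fence when every pool is unreachable; the trade-off is between false-positive reboots on a slow share and leaving a half-broken host running.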

> KVM HA is broken, let's fix it
> ------------------------------
>
>                 Key: CLOUDSTACK-8943
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8943
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>         Environment: Linux distros with KVM/libvirt
>            Reporter: Nux
>
> Currently KVM HA works by monitoring an NFS-based heartbeat file, and it can often
> fail whenever this network share becomes slow, causing the hypervisors to reboot.
> This can be particularly annoying when you have other kinds of primary storage in
> place which are working fine (people running Ceph etc.).
> Having to wait for the affected HV which triggered this to come back and declare
> that it's not running VMs is a bad idea; this HV could require hours or days of
> maintenance!
> This is embarrassing. How can we fix it? Ideas, suggestions? How do other
> hypervisors handle it?
> Let's discuss, test, implement. :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
