cloudstack-dev mailing list archives

From Marcus <shadow...@gmail.com>
Subject Re: CentOS KVM systemvm issue
Date Fri, 12 Sep 2014 15:36:43 GMT
Can you provide more info? Is it the host that is running CentOS 6.x, or your
systemvm? What is being rebooted, the host or the router, and how is it
rebooted? We run what sounds like the same configuration (CentOS 6.x hosts,
stock community-provided systemvm) with thousands of virtual routers, and
both hosts and virtual routers are rebooted regularly with no issue. One
setting we have that you may not is recreate.systemvm.enabled=true in global
settings, which rebuilds our system VMs from scratch on every reboot. I
don't expect that to be the problem, but it might be something to look at.

On Fri, Sep 12, 2014 at 8:49 AM, John Skinner <john.skinner@appcore.com>
wrote:

> I have found that on CloudStack 4.2+ (when we changed to using the
> virtio socket to send data to the systemvm), cloud-early-config fails
> when running CentOS 6.x. On new systemvm creation there is a high chance
> of success, but still a chance of failure. After the systemvm has been
> created, a simple reboot will cause start to fail every time. This has been
> confirmed on 2 separate CloudStack 4.2 environments: 1 running CentOS 6.3
> KVM, and another running CentOS 6.2 KVM. This can be fixed with a simple
> modification to the get_boot_params function in the cloud-early-config
> script. If you wrap the while read line loop inside another while loop
> that checks whether $cmd returns an empty string, it fixes the issue.
>
> This is a pretty nasty issue for anyone running CloudStack 4.2+ on
> CentOS 6.x.
>
> John Skinner
> Appcore
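
For readers following along, here is a minimal sketch of the retry
workaround John describes: an outer while loop that keeps re-reading until
$cmd is non-empty before the inner per-line parsing runs. The function name
get_boot_params matches the cloud-early-config script he mentions;
read_cmdline is a hypothetical stand-in for the actual read from the virtio
channel device, and the key=value payload below is illustrative only.

```shell
#!/bin/sh
# Sketch of the fix: retry until the virtio channel actually delivers
# the boot line, since at early boot it may transiently return nothing.

get_boot_params() {
  cmd=""
  # Outer loop (the added fix): loop while $cmd is still empty, so a
  # transient empty read does not abort parameter parsing.
  while [ -z "$cmd" ]; do
    cmd=$(read_cmdline)          # hypothetical read from the virtio socket
    [ -z "$cmd" ] && sleep 1     # brief pause before retrying
  done
  # Inner loop (the original logic): handle each whitespace-separated
  # key=value token from the boot line.
  for line in $cmd; do
    echo "param: $line"
  done
}
```

The point of the outer loop is simply that an empty read is treated as
"not ready yet" rather than "no parameters", which matches the observed
behavior that a reboot made the failure reproducible every time.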
