cloudstack-dev mailing list archives

From Marcus Sorensen <shadow...@gmail.com>
Subject [DISCUSS] getting rid of KVM patchdisk
Date Sun, 03 Mar 2013 20:12:31 GMT
For those who don't know (this probably doesn't matter, but...), when
KVM brings up a system VM, it creates a 'patchdisk' on primary
storage. This patchdisk is used to pass along 1) the authorized_keys
file and 2) a 'cmdline' file that tells the systemvm startup
services about all of the various properties of the system VM.

Example cmdline file:

 template=domP type=secstorage host=172.17.10.10 port=8250 name=s-1-VM
zone=1 pod=1 guid=s-1-VM
resource=com.cloud.storage.resource.NfsSecondaryStorageResource
instance=SecStorage sslcopy=true role=templateProcessor mtu=1500
eth2ip=192.168.100.170 eth2mask=255.255.255.0 gateway=192.168.100.1
public.network.device=eth2 eth0ip=169.254.1.46 eth0mask=255.255.0.0
eth1ip=172.17.10.150 eth1mask=255.255.255.0 mgmtcidr=172.17.10.0/24
localgw=172.17.10.1 private.network.device=eth1 eth3ip=172.17.10.192
eth3mask=255.255.255.0 storageip=172.17.10.192
storagenetmask=255.255.255.0 storagegateway=172.17.10.1
internaldns1=8.8.4.4 dns1=8.8.8.8
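
The cmdline format above is just space-separated key=value pairs. As a
rough illustration (a minimal sketch; the function name is mine, not
CloudStack's actual parser), it could be read into a map like this:

```python
# Minimal sketch: parse a systemvm 'cmdline' file's space-separated
# key=value tokens into a dict. Illustrative only, not the real parser.
def parse_cmdline(text):
    props = {}
    for token in text.split():
        if "=" in token:
            # split at the first '=' so values may contain '='
            key, _, value = token.partition("=")
            props[key] = value
    return props

example = "template=domP type=secstorage host=172.17.10.10 port=8250 name=s-1-VM"
props = parse_cmdline(example)
print(props["type"])   # secstorage
print(props["host"])   # 172.17.10.10
```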

This patchdisk has been bugging me for a while, as it creates a volume
that isn't tracked anywhere or known about in CloudStack's database.
Up until recently these would just litter the KVM primary storage
pools; there's been some triage done to clean them up when the system
VMs go away, but it's not perfect. It can also be inefficient for
certain primary storage types, for example if you end up creating a
bunch of 10MB LUNs on a SAN for these.

So my question goes to those who have been working on the system VM.
My first preference (aside from a full system VM redesign, perhaps
something controlled via an API) would be to copy these files up to
the system VM via SCP or something, but the cloud services start so
early in boot that this isn't possible. Next would be to inject them
into the system VM's root disk before starting the VM, but if we're
allowing people to build their own system VM images, can we count on
the partition layout being what we expect? Also, I don't think this
will work for RBD, which qemu connects to directly, so the host OS
never sees a block device at all.

Options?
