cloudstack-dev mailing list archives

From Hiroaki KAWAI <ka...@stratosphere.co.jp>
Subject Re: [DISCUSS] getting rid of KVM patchdisk
Date Mon, 24 Jun 2013 03:35:07 GMT
No, I don't think we need to make such an effort (sending emails) for devs;
I think we should fix the code itself (and the comments in the code),
because we're devs.

(2013/06/24 12:20), Marcus Sorensen wrote:
> I personally thought it had been publicized pretty well on various threads
> that there is a new system vm for master/4.2, but if you were unaware of
> it, do you think more needs to be done to call it out and make it known to
> the devs working on it?
> On Jun 23, 2013 8:33 PM, "Hiroaki KAWAI" <kawai@stratosphere.co.jp> wrote:
>
>> The current patch/systemvm/debian is based on Debian squeeze,
>> whose kernel is 2.6.32-5-686-bigmem. In that system vm,
>> cloud-early-config silently fails:
>> /etc/init.d/cloud-early-config: line 109: /dev/vport0p1: No such file
>> or directory
>> So I've upgraded to wheezy (which includes virtio-console.ko).
>> # I pushed some patches for this.
>>
>> I think we need to ANNOUNCE this incompatibility,
>> and hopefully give some upgrade paths for cloudstack users.
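[For context on the silent failure above: the virtio-serial port can be located defensively instead of assuming a fixed device node. The sketch below is illustrative Python, not what cloud-early-config (a shell script) actually does; the helper name, the port name, and the preference for the udev symlink under /dev/virtio-ports are my assumptions.]

```python
import os

def find_virtio_port(name="org.qemu.guest_agent.0",
                     fallback="/dev/vport0p1"):
    """Locate a virtio-serial port device node.

    Prefer the stable udev symlink /dev/virtio-ports/<port-name>,
    then fall back to the raw vport node. Return None when neither
    exists (e.g. a kernel without virtio_console, like the squeeze
    2.6.32 systemvm), so the caller can fail loudly instead of
    silently.
    """
    by_name = os.path.join("/dev/virtio-ports", name)
    for path in (by_name, fallback):
        if os.path.exists(path):
            return path
    return None
```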
>>
>>
>> (2013/03/05 7:24), Marcus Sorensen wrote:
>>
>>> I think this just requires an updated system vm (the virtio-serial
>>> portion). I've played a bit with the old debian 2.6.32-5-686-bigmem
>>> one and can't get the device nodes to show up, even though the
>>> /boot/config shows that it has CONFIG_VIRTIO_CONSOLE=y. However, if I
>>> try this with a CentOS 6.3 VM, on a CentOS 6.3 or Ubuntu 12.04 KVM
>>> host it works. So I'm not sure what's being used for the ipv6 update,
>>> but we can probably make one that works. We'll need to install qemu-ga
>>> and start it within the systemvm as well.
>>>
>>> On Mon, Mar 4, 2013 at 12:41 PM, Edison Su <Edison.su@citrix.com> wrote:
>>>
>>>>
>>>>
>>>>   -----Original Message-----
>>>>> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>>>>> Sent: Sunday, March 03, 2013 12:13 PM
>>>>> To: cloudstack-dev@incubator.apache.org
>>>>> Subject: [DISCUSS] getting rid of KVM patchdisk
>>>>>
>>>>> For those who don't know (this probably doesn't matter, but...), when
>>>>> KVM
>>>>> brings up a system VM, it creates a 'patchdisk' on primary storage. This
>>>>> patchdisk is used to pass along 1) the authorized_keys file and 2) a
>>>>> 'cmdline'
>>>>> file that describes to the systemvm startup services all of the various
>>>>> properties of the system vm.
>>>>>
>>>>> Example cmdline file:
>>>>>
>>>>>    template=domP type=secstorage host=172.17.10.10 port=8250
>>>>> name=s-1-VM zone=1 pod=1 guid=s-1-VM
>>>>> resource=com.cloud.storage.resource.NfsSecondaryStorageResource
>>>>> instance=SecStorage sslcopy=true role=templateProcessor mtu=1500
>>>>> eth2ip=192.168.100.170 eth2mask=255.255.255.0 gateway=192.168.100.1
>>>>> public.network.device=eth2 eth0ip=169.254.1.46 eth0mask=255.255.0.0
>>>>> eth1ip=172.17.10.150 eth1mask=255.255.255.0 mgmtcidr=172.17.10.0/24
>>>>> localgw=172.17.10.1 private.network.device=eth1 eth3ip=172.17.10.192
>>>>> eth3mask=255.255.255.0 storageip=172.17.10.192
>>>>> storagenetmask=255.255.255.0 storagegateway=172.17.10.1
>>>>> internaldns1=8.8.4.4 dns1=8.8.8.8
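[The cmdline file above is a flat list of space-separated key=value pairs. The systemvm startup services that consume it are shell scripts; the Python sketch below is only an editorial illustration of the format, not the actual parser.]

```python
def parse_cmdline(cmdline: str) -> dict:
    """Parse the systemvm 'cmdline' file: space-separated key=value
    pairs become a dict. Tokens without '=' are ignored."""
    props = {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            props[key] = value
    return props

# A shortened version of the example above.
example = "template=domP type=secstorage host=172.17.10.10 port=8250"
props = parse_cmdline(example)
```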
>>>>>
>>>>> This patchdisk has been bugging me for a while, as it creates a volume
>>>>> that isn't really tracked anywhere or known about in cloudstack's
>>>>> database.
>>>>> Up
>>>>> until recently these would just litter the KVM primary storages, but
>>>>> there's
>>>>> been some triage done to attempt to clean them up when the system vms
>>>>> go away. It's not perfect. It also can be inefficient for certain
>>>>> primary storage
>>>>> types, for example if you end up creating a bunch of 10MB luns on a SAN
>>>>> for
>>>>> these.
>>>>>
>>>>> So my question goes to those who have been working on the system vm.
>>>>> My first preference (aside from a full system vm redesign, perhaps
>>>>> something that is controlled via an API) would be to copy these up to
>>>>> the
>>>>> system vm via SCP or something. But the cloud services start so early
>>>>> on that
>>>>> this isn't possible. Next would be to inject them into the system vm's
>>>>> root
>>>>> disk before starting the server, but if we're allowing people to make
>>>>> their
>>>>> own system vms, can we count on the partitions being what we expect?
>>>>> Also
>>>>> I don't think this will work for RBD, which qemu directly connects to,
>>>>> with the
>>>>> host OS unaware of any disk.
>>>>>
>>>>> Options?
>>>>>
>>>>
>>>> Could you take a look at the status of these projects in KVM?
>>>> http://wiki.qemu.org/Features/QAPI/GuestAgent
>>>> https://fedoraproject.org/wiki/Features/VirtioSerial
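[If the guest-agent route pans out, the host side would speak JSON to qemu-ga over its virtio-serial channel, e.g. using the guest-file-open/guest-file-write/guest-file-close commands to push the cmdline file into the guest. The sketch below only builds the command payloads; the target path is a placeholder, the fixed handle stands in for the value that guest-file-open actually returns, and it requires a qemu-ga new enough to ship the guest-file-* commands.]

```python
import base64
import json

def guest_file_write_cmds(path: str, data: bytes, handle: int = 0):
    """Build the three qemu-guest-agent JSON commands that would
    write a file inside the guest. In a real exchange, the handle
    comes back from guest-file-open; a fixed value is used here
    purely for illustration."""
    return [
        json.dumps({"execute": "guest-file-open",
                    "arguments": {"path": path, "mode": "w"}}),
        json.dumps({"execute": "guest-file-write",
                    "arguments": {"handle": handle,
                                  "buf-b64": base64.b64encode(data).decode()}}),
        json.dumps({"execute": "guest-file-close",
                    "arguments": {"handle": handle}}),
    ]

cmds = guest_file_write_cmds("/var/cache/cloud/cmdline", b"template=domP")
```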
>>>>
>>>> Basically, we need a way to talk to the guest VM (sending parameters to
>>>> a KVM guest) after the VM has booted up. Both VMware and XenServer have
>>>> their own ways to send parameters to a guest VM through a PV driver, but
>>>> until a few years ago there was no such thing for KVM.
>>>>
>>>
>>
>

