cloudstack-dev mailing list archives

From: Tim Mackey <tmac...@gmail.com>
Subject: Re: Support pure Xen as a hypervisor follow-up
Date: Fri, 06 Jun 2014 15:17:53 GMT
Dave,

I submitted a merge review request
(https://reviews.apache.org/r/22270/) yesterday.  To avoid having to
deal with a bunch of conflicts later, you might want to check whether
your patches apply cleanly there and let me know.  Happy to help with
any conflict resolution.

btw, I didn't see a design document up on the wiki
(https://cwiki.apache.org/confluence/display/CLOUDSTACK/4.5+Design+Documents).
Can you put one up there and start a DISCUSS thread?  It'll probably
tease out some gotchas you might not be aware of.

-tim

On Fri, Jun 6, 2014 at 11:04 AM, Dave Scott <Dave.Scott@citrix.com> wrote:
> Hi,
>
> Here’s a quick status update:
>
> On 16 May 2014, at 15:22, Dave Scott <Dave.Scott@citrix.com> wrote:
>
>> Hi,
>>
>> On 14 May 2014, at 09:53, sebgoa <runseb@gmail.com> wrote:
>>
>>>
>>> On Apr 9, 2014, at 2:37 PM, Dave Scott <Dave.Scott@citrix.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Following up from Tim's "Support pure Xen as a hypervisor" proposal last
>>>> month[1] I'd like to start working on this and maybe even make a little
>>>> bit of progress while I'm at CCC in Denver.
>>>>
>>>> Helpfully, James Bulpin managed to get CS + libvirt + Xen to start an
>>>> instance in a simple configuration. Although the patches[2] are not
>>>> intended to be production-ready :) they help highlight some of the areas
>>>> we need to change.
>>>
>>> Dave, just to let you know that Tim has done some important "refactoring"
>>> to split the XenServer hypervisor support in CS between Xen and XenServer.
>>> That way we can keep using xapi for XS but start moving to libvirt for Xen.
>>>
>>> Tim worked in the xen2server branch (don't ask about the name, I messed it
>>> up… :)).
>>>
>>> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/xen2server
>>>
>>> It would be nice to see some of the libvirt stuff land in that branch to
>>> handle a new driver for Xen.
>>>
>>> Since the two hypervisors will be split up, we could still drop in some
>>> early libvirt patches to handle Xen and put this in 4.5 as a work in
>>> progress.
>>
>> Thanks for the links.
>>
>> I’m slowly building up a set of patches here:
>>
>> https://github.com/djs55/cloudstack/tree/virsh-capabilities
>>
>> I think once I’ve gotten to a stable-ish point I’ll rebase on top of Tim’s
>> branch.
>>
>> So far I’ve
>> * changed the hypervisor detection to use ‘virsh capabilities’ in one place
>> and ‘cat /sys/hypervisor/type’ in another. Thinking about it again, it’s
>> probably best to standardise on /sys/hypervisor/type, since that will
>> succeed whether or not the libvirtd service is chkconfig’d on (see the
>> sketch after this list)
>>
>> * in the Python cloudutils system setup code, isKvmEnabled() has become
>> isHypervisorEnabled()
>>
>> * added a XenLibvirtDiscoverer similar to the LXC one
>>
>> * fixed what I believe is a race in sshExecuteCmdOneShotWithExitCode (which
>> seems to hit me every time; I don’t know why other people seem to be immune
>> to it): see CLOUDSTACK-6621 and review board request 21261
>>
>> * added the new hypervisor to hypervisor.list and
>> system.vm.default.hypervisor, so it appears in the UI properly
>>
>> * registered a system VM template in the database, using the same qcow2
>> image as KVM
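>>
>> For illustration, the detection idea is roughly this (the function name
>> comes from cloudutils; the body is a sketch, not the actual patch):
>>
>>   def isHypervisorEnabled():
>>       # On a Xen dom0, /sys/hypervisor/type contains "xen". Unlike
>>       # 'virsh capabilities', this works even when libvirtd isn't running.
>>       try:
>>           with open("/sys/hypervisor/type") as f:
>>               return f.read().strip() == "xen"
>>       except IOError:
>>           # No /sys/hypervisor entry: not under Xen, so fall back to
>>           # the existing KVM check ("lsmod | grep kvm").
>>           return False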
>>
>> For my test host I’m using a XenServer nightly snapshot, which comes with a
>> nice modern Xen and kernel and makes it easy to install bleeding-edge
>> libvirt on top. I had to tweak the kernel configuration and the network
>> configuration, but I’m hoping to make it work out of the box in future.
>>
>> When I deploy my ‘datacenter’ the discovery phase works, and the agent
>> connects and looks healthy in the logs and the UI. The next step is to
>> figure out why the system VM template isn’t being copied to primary
>> storage: the copy isn’t even being attempted, and I can’t see any obvious
>> reason why.
>
> I’m now at the stage of getting my system VMs to start via libvirt. The main
> missing feature is support for <channels>: low-bandwidth private
> host<->guest control channels. These channels are generally useful and are
> needed by other projects (like oVirt), so I’d like to add them to libxl and
> libvirt’s libxl driver. There’s a thread on xen-devel and libvir-list if
> anyone’s interested:
>
> http://www.redhat.com/archives/libvir-list/2014-June/msg00180.html
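>
> For reference, this is the shape of the <channel> element a KVM guest gets
> today (the path and name here are illustrative, not CloudStack’s actual
> values):
>
>   <channel type='unix'>
>     <source mode='bind' path='/var/lib/libvirt/qemu/v-1-VM.agent'/>
>     <target type='virtio' name='vm.agent.port'/>
>   </channel>
>
> The libxl work is about providing an equivalent of this for Xen guests.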
>
> Once the <channels> are sorted, basic VM operations should work. The next
> step would be to rebase my patches on top of Tim’s renaming changes and tidy
> them up for review.
>
> Cheers,
> Dave
>
>>
>> Cheers,
>> Dave
>>
>>>
>>> -Sebastien
>>>
>>>>
>>>> Some of the areas are:
>>>>
>>>> 1. hypervisor detection
>>>>
>>>> Where we currently look for KVM specifically ("lsmod | grep kvm"), we
>>>> could switch to detecting any Linux hypervisor (by reading
>>>> /sys/hypervisor/type) and assuming that if a hypervisor is present then we
>>>> can use libvirt on it (is this a fair assumption?). Or we could white-list
>>>> “kvm” or “xen”. Or we could query libvirt directly (perhaps via 'virsh
>>>> capabilities'?).
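>>>>
>>>> For example, on a Xen dom0 (output illustrative; note that 'virsh
>>>> capabilities' only works while libvirtd is running):
>>>>
>>>>   $ cat /sys/hypervisor/type
>>>>   xen
>>>>   $ virsh capabilities | grep "domain type"
>>>>       <domain type='xen'/>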
>>>>
>>>> 2. fiddling with the domain.xml
>>>>
>>>> When starting a domain via libvirt, the XML configuration has
>>>> hypervisor-specific stuff in it. Some of this is easy to change, like:
>>>>
>>>> <domain type='kvm'>
>>>>
>>>> obviously becomes
>>>>
>>>> <domain type='xen'>
>>>>
>>>> and
>>>>
>>>> <emulator>/usr/libexec/qemu-kvm</emulator>
>>>>
>>>> should probably be
>>>>
>>>> <emulator>/some/other/path/qemu-dm</emulator>
>>>>
>>>> Some of it is a bit more invasive (to the VM): the virtual hardware type
>>>> should be switched from "virtio" to "xen" (so the block devices in Linux
>>>> will change from /dev/vd* to /dev/xvd*), and we'll have to either
>>>> implement or work around the lack of
>>>>
>>>> <channel type='unix'> ...
>>>>
>>>> -- I presume this is a control channel into the system VM. Perhaps we
>>>> could implement this in libvirt/libxl using vchan?
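>>>>
>>>> To make the virtio-to-xen switch concrete: a disk target that today reads
>>>>
>>>> <target dev='vda' bus='virtio'/>
>>>>
>>>> would become something like
>>>>
>>>> <target dev='xvda' bus='xen'/>
>>>>
>>>> for a Xen PV guest.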
>>>>
>>>> 3. system VMs?
>>>>
>>>> It would be very convenient if the system VM images could work on both
>>>> Xen and KVM. This is probably doable as long as we don't bake
>>>> virtual-hardware-specific information (such as /dev/vda) into the image.
>>>> We could use the qcow2 format in both cases. What do you think?
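>>>>
>>>> For example, the image's /etc/fstab could mount by label rather than by
>>>> device name (label illustrative), so the same image boots whether the
>>>> root disk appears as /dev/vda or /dev/xvda:
>>>>
>>>>   LABEL=ROOT  /  ext4  defaults  0 1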
>>>>
>>>> … and I’m sure there’s more.
>>>>
>>>> Anyway, feedback would be welcome. If anyone else in Denver wants to chat,
then come grab me later!
>>>>
>>>> Cheers,
>>>> Dave Scott
>>>>
>>>> [1] http://mail-archives.apache.org/mod_mbox/cloudstack-users/201403.mbox/%3cCAJGXtBNbmQTQ81rALgH2kMA7V5WJYZKr3xnyasMKC_br+UKzOw@mail.gmail.com%3e
>>>>
>>>> [2] https://github.com/jamesbulpin/cloudstack/commits/jamesb_xen_exploratory
>
