cloudstack-dev mailing list archives

From Vijayendra Bhamidipati <vijayendra.bhamidip...@citrix.com>
Subject RE: PCI-Passthrough with CloudStack
Date Tue, 11 Jun 2013 20:26:48 GMT


-----Original Message-----
From: David Nalley [mailto:david@gnsa.us] 
Sent: Tuesday, June 11, 2013 5:08 AM
To: dev@cloudstack.apache.org
Cc: Ryousei Takano
Subject: Re: PCI-Passthrough with CloudStack

On Tue, Jun 11, 2013 at 3:52 AM, Pawit Pornkitprasan <p.pawit@gmail.com> wrote:
> Hi,
>
> I am implementing PCI passthrough in CloudStack for use with
> high-performance networking (10 Gigabit Ethernet/InfiniBand).
>
> The current design is to attach a PCI ID (from lspci) to a compute
> offering. (Not a network offering, since from CloudStack's point of
> view the passthrough device has nothing to do with networking and may
> just as well be used for other things.)
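
For context, the "PCI ID" referred to here is the bus address that lspci prints for a device; the entry below is only an illustrative example, not one taken from the actual setup:

    03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection

The leading "03:00.0" (bus:slot.function) is presumably what the compute offering would record under this design.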

[Vijay>] Any specific reason for not tracking the type of device? Different hypervisors
may implement passthrough differently. KVM may use the PCI ID, but as far as I know VMware
does not, so we will probably need to know the type of device in order to map it as a
passthrough device.

> A host tag can be used to limit 
> deployment to machines with the required PCI device.
>
> Then, when starting the virtual machine, the PCI ID is passed to the
> agent in the VirtualMachineTO (the agent currently being KVM-based), and
> the agent creates a corresponding <hostdev> element
> (http://libvirt.org/guide/html/Application_Development_Guide-Device_Config-PCI_Pass.html),
> after which libvirt handles the rest.
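
For reference, the libvirt <hostdev> definition for a PCI passthrough device generally takes the form below; the domain/bus/slot/function values are placeholders rather than values from the actual implementation:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>

With managed='yes', libvirt detaches the device from its host driver before handing it to the guest and reattaches it when the guest releases it.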
>
> For allocation, the current idea is to use CloudStack's capacity
> system (in the same place where allocation of CPU and RAM is
> determined) to limit each physical host to one PCI passthrough VM.
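
As an illustration of that allocation rule only (the class and parameter names below are hypothetical and not taken from the CloudStack code base), the check amounts to treating the passthrough device as a per-host capacity of one:

    // Hypothetical sketch of the per-host limit described above; not actual
    // CloudStack planner code.
    final class PassthroughCapacitySketch {
        // offeringRequestsPassthrough: the compute offering carries a PCI ID
        // passthroughVmsOnHost: passthrough VMs already running on the host
        static boolean canDeploy(boolean offeringRequestsPassthrough,
                                 int passthroughVmsOnHost) {
            if (!offeringRequestsPassthrough) {
                return true;   // normal CPU/RAM capacity rules apply
            }
            // Current design: at most one passthrough VM per physical host.
            return passthroughVmsOnHost == 0;
        }
    }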
>
> The current design has many limitations such as:
>
>    - One physical host can only have one VM with PCI passthrough, even if
>    many PCI cards with equivalent functions are available

[Vijay>] What is the reason for this limitation? Is it that the PCI IDs of devices on a host
can change across reboots? In general, what is the effect of a host reboot on PCI IDs, and could
the PCI ID of the physical device change? Is there a way to configure passthrough devices
without using the PCI ID of the device?

>    - The PCI ID is fixed inside the compute offering, so all machines have
>    to be homogeneous and have the same PCI ID for the device.


>
> The initial implementation is working. Any suggestions and comments
> are welcome.
>
> Thank you,
> Pawit

This looks like a compelling idea, though I am sure it is not limited to just networking (think
GPU passthrough).
How are things like live migration affected? Are you making planner changes to deal with the
limiting factor of only a single PCI passthrough VM being available per host?
What's the level of effort to extend this to work with VMware DirectPath I/O and PCI passthrough
on XenServer?

[Vijay>] It's probably a good idea to limit passthrough to networking devices to begin with
and implement other types of devices (HBAs, CD-ROMs, etc.) incrementally. Live migration will
definitely be affected: in VMware, live migration is disabled for a VM once the VM is configured
with a passthrough device, and the implementation should handle this. A number of other features
also get disabled when passthrough is configured, and if CloudStack is using any of those,
we should handle those paths as well.


Regards,
Vijay

--David
