cloudstack-dev mailing list archives

From Pawit Pornkitprasan <p.pa...@gmail.com>
Subject Re: PCI-Passthrough with CloudStack
Date Wed, 12 Jun 2013 01:47:51 GMT
On Tue, Jun 11, 2013 at 8:26 PM, Vijayendra Bhamidipati
<vijayendra.bhamidipati@citrix.com> wrote:

> -----Original Message-----
> From: David Nalley [mailto:david@gnsa.us]
> Sent: Tuesday, June 11, 2013 5:08 AM
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano
> Subject: Re: PCI-Passthrough with CloudStack
>
> [Vijay>] Any specific reasons for not tracking the type of device? Different hypervisors
> may implement passthrough differently. KVM may use the PCI ID but afaik vmware does not and
> so we probably will need to know the type of device in order to map it as a passthrough device.

I don't think there is any use in tracking the type of device; a PCI
device can be any kind of device.

I don't have any direct experience with VMware, but the VMware
documentation (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010789)
does show that a passthrough device is recorded using its PCI ID.

> [Vijay>] What is the reason for this limitation? Is it that PCI IDs can change among
> PCI devices on a host across reboots? In general, what is the effect of a host reboot on PCI
> IDs? Could the PCI ID of the physical device change? Is there a way to configure passthrough
> devices without using the PCI ID of the device?

The limitation is there to simplify the initial implementation of
allocation. I believe that a PCI ID is constant (unless, of course, the
PCI card is physically moved inside the server). A PCI ID will always
have to be specified somewhere, whether in the management server or in
the agent.
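
To make it concrete, on the KVM agent side the PCI ID essentially just
needs to be turned into a libvirt <hostdev> element. A rough sketch in
Java, with hypothetical class and method names (this is not existing
agent code):

// Illustration only: PciHostDevDef and buildHostDevXml are hypothetical,
// not existing agent code. Turns a PCI ID such as "0000:06:12.5" into the
// libvirt <hostdev> element that KVM uses for passthrough.
public class PciHostDevDef {
    public static String buildHostDevXml(String pciId) {
        // PCI ID format: domain:bus:slot.function, e.g. 0000:06:12.5
        String[] parts = pciId.split("[:.]");
        if (parts.length != 4) {
            throw new IllegalArgumentException("Bad PCI ID: " + pciId);
        }
        return "<hostdev mode='subsystem' type='pci' managed='yes'>\n"
             + "  <source>\n"
             + "    <address domain='0x" + parts[0] + "' bus='0x" + parts[1]
             + "' slot='0x" + parts[2] + "' function='0x" + parts[3] + "'/>\n"
             + "  </source>\n"
             + "</hostdev>";
    }
}

With managed='yes', libvirt takes care of detaching the device from the
host and attaching it to the guest when the domain is started.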

> This looks like a compelling idea, though I am sure not limited to just networking (think
> GPU passthrough).
> How are things like live migration affected? Are you making planner changes to deal with
> the limiting factor of a single PCI-passthrough VM being available per host?

So far, I've changed FirstFitAllocator so that it assigns only one VM
with PCI passthrough to a host. I'm looking into making it smarter,
though (along the lines of what Edison Su suggested).
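
Roughly, the extra check in the allocator looks like the sketch below.
Host, VmProfile and all the method names are placeholders for
illustration only, not the real CloudStack interfaces:

import java.util.ArrayList;
import java.util.List;

// Sketch of the host-filtering step only; Host, VmProfile and every method
// name below are hypothetical placeholders, not CloudStack's real interfaces.
class PassthroughFilter {
    interface Host {
        boolean hasPciPassthroughVm();
        boolean hasFreePciDevice();
    }

    interface VmProfile {
        boolean wantsPciPassthrough();
    }

    static List<Host> filter(List<Host> candidates, VmProfile vm) {
        if (!vm.wantsPciPassthrough()) {
            return candidates;  // normal first-fit path, nothing to filter
        }
        List<Host> suitable = new ArrayList<>();
        for (Host h : candidates) {
            // Initial implementation: at most one passthrough VM per host,
            // and the host must actually have a free passthrough device.
            if (!h.hasPciPassthroughVm() && h.hasFreePciDevice()) {
                suitable.add(h);
            }
        }
        return suitable;
    }
}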

> What's the level of effort to extend this to work with VMware DirectPath I/O and PCI
> passthrough on XenServer?

I don't have much experience with VMware or XenServer, so I am not
sure. I am actually doing this as an internship project, so my scope
is likely limited to KVM.

> [Vijay>] It's probably a good idea to limit the passthrough to networking to begin
> with and implement other types of devices (HBA/CD-ROMs etc) incrementally. Live migration
> will definitely be affected. In vmware, live migration is disabled for a VM once the VM is
> configured with a passthrough device. The implementation should handle this. A host of other
> features also get disabled when passthrough is configured, and if cloudstack is using any
> of those, we should handle those paths as well.

With KVM, libvirt prevents live migration for machines with PCI
passthrough enabled. The error goes back up the stack, and the UI
"correctly" displays the error message "Failed to migrate vm".

>
> Regards,
> Vijay
> --David

Best Regards,
Pawit
