cloudstack-dev mailing list archives

From Laszlo Hornyak <>
Subject Re: Adding VirtIO SCSI to KVM hypervisors
Date Sat, 21 Jan 2017 09:46:37 GMT
Hi Wido,

If I understand the documentation and your examples correctly, virtio
presents a virtio interface to the guest while virtio-scsi presents a SCSI
interface, so an IaaS service should not swap one for the other without the
user's request / approval. It would probably be better to let the user set
which kind of I/O interface the VM needs.
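For illustration, the difference between the two shows up directly in the libvirt domain XML. A sketch (attribute names from libvirt's domain XML format; the discard attribute assumes a reasonably recent libvirt/QEMU):

```xml
<!-- virtio-blk: the guest sees the disk as /dev/vdX -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- virtio-scsi: needs a SCSI controller with the virtio-scsi model;
     the guest sees the disk as /dev/sdX, and discard='unmap' passes
     TRIM/DISCARD requests through to the backing image -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

Since the bus choice changes the device naming inside the guest, it seems like exactly the kind of thing the user should opt into per VM or per volume.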

Best regards,

On Fri, Jan 20, 2017 at 10:21 PM, Wido den Hollander <> wrote:

> Hi,
> VirtIO SCSI [0] has been supported for a while now by Linux kernels,
> but inside CloudStack we are not using it. There is an issue for this [1].
> It would bring more (theoretical) performance to VMs, but one of the
> motivators (for me) is that we can support TRIM/DISCARD [2].
> This would allow RBD images on Ceph to shrink, but it can also give
> back free space on QCOW2 images if guests run fstrim. Something all modern
> distributions do weekly via cron.
> Now, it is simple to swap VirtIO for VirtIO SCSI. This would however mean
> that disks inside VMs are then called /dev/sdX instead of /dev/vdX.
> For GRUB and such this is no problem, since it usually works with UUIDs
> and/or labels, but static mounts on /dev/vdb1, for example, will break.
> We currently have no way to configure how a disk should be presented
> to a guest, so when attaching a volume we can't say that we want to
> use a different driver. If we think that an Operating System supports VirtIO
> we use that driver in KVM.
> Any suggestion on how to add VirtIO SCSI support?
> Wido
> [0]:
> [1]:
> [2]:
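As an aside, the weekly TRIM pass mentioned above amounts to something like this (a sketch of a cron.weekly script; the exact packaging varies per distribution, and some now ship a systemd fstrim.timer instead):

```shell
#!/bin/sh
# Discard unused blocks on every mounted filesystem that supports it.
# With a virtio-scsi disk and discard='unmap', the freed blocks are
# passed down to the host, letting QCOW2 or RBD images shrink.
exec fstrim --all --verbose
```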


