cloudstack-dev mailing list archives

From: Marcus Sorensen <shadow...@gmail.com>
Subject: Re: Supporting SolidFire QoS Before 4.2
Date: Fri, 08 Feb 2013 00:59:30 GMT
On Thu, Feb 7, 2013 at 5:49 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
>
> On Feb 7, 2013 5:20 PM, "Alex Huang" <Alex.Huang@citrix.com> wrote:
>>
>>
>>
>> > -----Original Message-----
>> > From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> > Sent: Thursday, February 07, 2013 3:29 PM
>> > To: Mike Tutkowski
>> > Cc: Edison Su; cloudstack-dev@incubator.apache.org
>> > Subject: Re: Supporting SolidFire QoS Before 4.2
>> >
>> > He's saying that the VM can connect via iSCSI directly to the
>> > SolidFire device, rather than going through the host. You'd lose more
>> > performance that way and there's more overhead, but it would be a way
>> > to give individual VMs their own SolidFire LUN.
>> >
>> Marcus,
>>
>> I'm interested in your comment here.  Why do you think a VM having
>> direct iSCSI access would actually lose performance?  I would think it's
>> actually faster because there's nothing translating the raw LUN into a
>> raw disk.
>
> With iSCSI to the host, you have the hardware NIC and the iSCSI initiator
> software running on the hardware CPU; the disk is then attached to the VM.
>
> In the VM you have a paravirtualized NIC (overhead) and the iSCSI initiator
> running on a virtual CPU (overhead). Virtual NICs are pretty fast these days
> but eat a lot of CPU in doing so; I can easily eat a core on the host doing
> a steady 3-4 Gbit to a VM. The hardware NIC optimizations designed to get
> around this (SR-IOV) are still unusable for cloud because they tie the VM
> to the hardware and disable live migration.
>
> I see what you're saying, that the overhead of running the initiator on a
> vCPU and over a vNIC is less than that of attaching a local disk to a VM,
> but from what I've seen that hasn't been the case.
>
> Then there's the case of wanting the VM on a 1 Gbit public connection while
> your storage is on a 10 Gbit private net. That's common, but I suppose it
> could be engineered around.
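
For concreteness, here's a rough sketch of the host-side path being compared
above, assuming KVM with open-iscsi and libvirt; the portal address, IQN,
device path, and domain name are just placeholders:

    # Rough sketch (placeholders, not from this thread): the host logs into
    # the iSCSI target, then hands the resulting block device to a KVM guest
    # as a virtual disk.
    import subprocess

    PORTAL = "10.10.10.50:3260"                  # storage-network portal (example)
    IQN = "iqn.2010-01.com.example:volume-0001"  # per-volume IQN (example)
    DOMAIN = "guest-vm-01"                       # libvirt domain name (example)

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Discover and log in from the *host* (hardware NIC, initiator running
    #    on the hardware CPU).
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    run(["iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login"])

    # 2. Attach the new block device to the guest as a virtual disk.
    #    /dev/disk/by-path/... is the stable name udev gives the iSCSI LUN.
    device = "/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" % (PORTAL, IQN)
    run(["virsh", "attach-disk", DOMAIN, device, "vdb", "--persistent"])

The alternative being discussed is to skip those two steps on the host and
run the same iscsiadm login inside the guest over its paravirtualized NIC,
which is where the extra vCPU and vNIC overhead comes from.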

I should qualify this by saying that I haven't performance-tested running
iSCSI inside a Xen VM, only KVM and VMware. I can't really speak to dom0
emulating devices, but my impression was that dom0 has direct hardware
access, so you wouldn't get hit by the double whammy of sharing an
emulated dom0 device into another domain.
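
For what it's worth, a rough way to eyeball that overhead on a KVM host
(again just a sketch; the process names are what KVM/virtio typically use)
is to watch the qemu and vhost threads while the guest pushes iSCSI traffic
through its virtual NIC:

    # Rough sketch: watch how much host CPU the guest's network path burns
    # while the guest runs its own iSCSI initiator over a virtio NIC. On KVM
    # the interesting processes are the guest's qemu process and the kernel
    # vhost-<pid> threads; names will differ on other hypervisors.
    # (ps reports lifetime-average %CPU, which is good enough for a rough
    # comparison.)
    import subprocess
    import time

    def qemu_and_vhost_cpu():
        out = subprocess.check_output(
            ["ps", "-eo", "pid,pcpu,comm", "--no-headers"]).decode()
        for line in out.splitlines():
            pid, cpu, comm = line.split(None, 2)
            if "qemu" in comm or "vhost" in comm:
                yield pid, comm, float(cpu)

    while True:
        for pid, comm, cpu in qemu_and_vhost_cpu():
            print("%8s  %-20s %5.1f%%" % (pid, comm, cpu))
        print("---")
        time.sleep(5)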

>
>>
>> --Alex
