cloudstack-users mailing list archives

From Jason Davis <scr...@gmail.com>
Subject Re: iSCSI or NFS
Date Fri, 27 Jul 2012 16:51:44 GMT
I'll weigh in as well and state that I am squarely in the NFS camp :)

Our initial deployment of CS used XS for compute (we started with 2 nodes and
were up to 6 when I left) and an EqualLogic PS6000 (a cute 4TB unit) for
storage.

We immediately ran into storage issues, as LVM over iSCSI is not thin
provisioned: each time a template was added and a VM was deployed from it,
the full configured disk size was allocated up front. Once we got up to 60
or 70 VM instances, we had more or less exhausted our primary storage. Our
workloads were fairly light on random IO... on average we sustained only
200-300 IOPS throughout the workday.
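
To put a rough number on that, here is a quick back-of-the-envelope in
Python. The 4TB array is the PS6000 above; the 50 GB per-VM disk size is a
hypothetical round number, not our actual offering:

    # Thick provisioning: LVM over iSCSI reserves each VM's full virtual
    # disk size up front, whether or not the guest ever writes to it.
    ARRAY_GB = 4 * 1024   # the PS6000's usable capacity, roughly
    DISK_GB = 50          # hypothetical per-VM disk offering
    vms = 70

    thick_gb = vms * DISK_GB
    print(f"thick-provisioned: {thick_gb} GB of {ARRAY_GB} GB "
          f"({100 * thick_gb / ARRAY_GB:.0f}%)")  # 3500 GB -> ~85% of the array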

At that point, we were looking for a more unified storage solution, not only
for our private cloud but also for our software builds and file services. We
needed something distributed and resilient, and ended up with a 3-node X200
series Isilon cluster. We carved out a 10TB NFS export and presented that to
our XS cluster.

In our testing, performance was fairly similar. Undoubtedly, NFS is more
prone to latency than something like iSCSI or FC, but for our development
and testing workloads on CS it really did not matter at all. As mentioned
earlier, NFS storage is thin provisioned, so we saw MASSIVE capacity savings
when deploying new VMs from templates, since CS by default deploys VMs using
CoW ("linked clones" is another term for this).
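
As a rough sketch of where those savings come from (the template size and
per-VM write delta here are hypothetical numbers, not our measurements):

    # Full clones copy the entire template per VM; CoW linked clones share
    # one base copy and only store each VM's unique writes on top of it.
    TEMPLATE_GB = 20   # hypothetical template size
    DELTA_GB = 2       # hypothetical unique writes per VM over the base
    vms = 70

    full_gb = vms * TEMPLATE_GB               # every VM gets a full copy
    linked_gb = TEMPLATE_GB + vms * DELTA_GB  # one base + thin per-VM deltas
    print(f"full clones: {full_gb} GB, linked clones: {linked_gb} GB")
    # 1400 GB vs 160 GB -- hence the massive capacity savings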

We did some performance testing comparing our private cloud to AWS, for the
benefit of the "OMG AWS DISK PERFORMANCE IS AWESOME" user contingent, and we
definitely saw much better performance than basic EBS volumes (sans the
crazy "take 5 1TB EBS volumes and set up a RAID1" scenario).

However, I will say that the comparison above really isn't all that
relevant :) It's like comparing apples to oranges.


On Fri, Jul 27, 2012 at 9:16 AM, Nik Martin <nik.martin@nfinausa.com> wrote:

> On 07/27/2012 07:55 AM, David Nalley wrote:
>
>> On Tue, Jul 24, 2012 at 4:30 AM, Vladimir Melnik <v.melnik@uplink.ua>
>> wrote:
>>
>>> Good day!
>>>
>>>
>>>
>>> What will you recommend to use as primary storage based on Linux server?
>>> iSCSI or NFS?
>>>
>>>
>>>
>>> Am I right that NFS will be too slow for keeping VM-images?
>>>
>>>
>>>
>>> Thanks in advance!
>>>
>>>
>>>
>>
>> So while my inner storage geek really likes iSCSI, and there is a lot of
>> performance tuning you can do, I don't find that most people are willing
>> to do that level of tuning, and until the advent of 10GbE I am not sure
>> that it really mattered, as network bandwidth tended to be the limiting
>> factor for arrays of a given size.
>>
>> There have also been a number of NFS vs iSCSI performance reports done
>> for various hypervisors (google for 'NFS vs iSCSI +
>> $hypervisor_of_choice'), and they typically show that iSCSI is marginally
>> faster on average, but that the overhead isn't worth it. So if you are a
>> large iSCSI shop already, feel free to use it, but NFS is typically a lot
>> easier to set up and gets you 95% of the speed benefit, IMO.
>>
>> --David
>
> I am doing this comparison as we speak, and was led to believe by Citrix
> that NFS was the storage method of choice for CloudStack. Once I got into
> it, what I realized is that NFS creates fewer support issues for Citrix,
> and that is probably the main reason for the recommendation. So far I have
> found that NFS creates a far higher processor load on the storage unit
> than iSCSI, to the point that I can saturate a quad-core Xeon with one
> instance of bonnie++ in a VM. With iSCSI, that workload is distributed
> across the hypervisors, as the HVs and VMs do the file system metadata
> processing, not the storage unit. On the SAN/NAS itself, with iSCSI you
> are just doing block-level storage, and if the network card does TCP
> offload, or is an iSCSI HBA, you reduce the CPU load even more. I will be
> rebuilding my NFS volume as a straight LVM logical volume, then creating
> an iSCSI target on it, and will reply with my comparison results.
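>
> Roughly, the test harness I have in mind is sketched below. The mount
> points are placeholders for my actual NFS and iSCSI test volumes, and
> bonnie++ has to be installed and run as root:
>
>     # Run the same bonnie++ workload against an NFS-backed and an
>     # iSCSI-backed mount, then pull out the CSV summary line from each.
>     # Mount points are placeholders, not my real paths.
>     import subprocess
>
>     for name, path in [("nfs", "/mnt/nfs-test"), ("iscsi", "/mnt/iscsi-test")]:
>         result = subprocess.run(
>             ["bonnie++", "-d", path, "-u", "root", "-f"],  # -f skips per-char IO
>             capture_output=True, text=True, check=True,
>         )
>         # bonnie++ prints a machine-readable CSV summary as its last line
>         print(name, result.stdout.strip().splitlines()[-1])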
>
> --
> Regards,
>
> Nik
>
