cloudstack-users mailing list archives

From Andreas Huser <ahu...@7five-edv.de>
Subject Re: NFS vs iSCSI
Date Mon, 29 Oct 2012 13:48:28 GMT
Hi Trevor, 

Hint, check this: http://www.ebay.de/itm/Mellanox-Infiniband-4X-HCA-PCI-E-MHGA28-1TC-7104-HCA-128LPX-/290625782203

Try NFS over RDMA, or IPoIB for low latency. Then NFS is fun :)

so long.... 
Andreas 
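
For reference, the RDMA/IPoIB hint above could look roughly like this on a Linux client. This is a hedged sketch, not a tested recipe: the server name `storage01`, the export path, mount points, and IP addresses are all made up.

```shell
# NFS over RDMA (hypothetical server/path): the Linux client uses the
# proto=rdma mount option; 20049 is the conventional NFS/RDMA port.
sudo modprobe xprtrdma
sudo mount -t nfs -o vers=3,proto=rdma,port=20049 \
  storage01:/export/primary /mnt/primary

# IPoIB alternative: bring up the IB interface ("connected" mode allows
# a large MTU), then mount over TCP as usual.
echo connected | sudo tee /sys/class/net/ib0/mode
sudo ip addr add 10.10.10.2/24 dev ib0
sudo ip link set ib0 up
sudo mount -t nfs -o vers=3,proto=tcp,rsize=1048576,wsize=1048576 \
  10.10.10.1:/export/primary /mnt/primary
```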




----- Original Message -----

From: "Jason Davis" <scr512@gmail.com> 
To: cloudstack-users@incubator.apache.org 
Sent: Monday, October 29, 2012 05:01:55 
Subject: Re: NFS vs iSCSI 

NFS failover is fine; I ran our cluster with Isilon storage, so we load 
balanced and could fail over stupidly easily. With my experience with XS/XCP I 
found NFS much more pleasant to work with vs. the iSCSI I did with our 
EqualLogic array cluster. 

In any event, try both and see which one you like best... in all honesty 
with 10Gb/s Ethernet it frankly doesn't matter which protocol you go with. 
On Oct 28, 2012 10:53 PM, "Outback Dingo" <outbackdingo@gmail.com> wrote: 

> On Sun, Oct 28, 2012 at 11:50 PM, Jason Davis <scr512@gmail.com> wrote: 
> > Like I was mentioning, for the cut in theoretical performance, you get 
> > something much easier to administer. Plenty of really nice SSD/disk 
> > arrays do NFS and are blazing fast. 
> 
> Not sure how it figures you think NFS is any easier to manage and 
> support than iSCSI is.... once it's configured it just runs. 
> And iSCSI has the potential to do failover; NFS v3 can't really. 
> 
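
The iSCSI failover mentioned just above is usually built on dm-multipath. A minimal sketch, assuming a hypothetical target reachable over two portals (the IQN and portal IPs are invented):

```shell
# Hypothetical values: target IQN and portal IPs are made up.
# Discover and log in to the same target over two fabric paths:
sudo iscsiadm -m discovery -t sendtargets -p 10.0.1.10
sudo iscsiadm -m discovery -t sendtargets -p 10.0.2.10
sudo iscsiadm -m node -T iqn.2012-10.com.example:storage.lun0 --login

# /etc/multipath.conf (fragment): fail over between the two paths.
# defaults {
#     path_grouping_policy  failover
#     user_friendly_names   yes
# }
sudo service multipathd start   # EL6-era init; systemctl on newer distros
sudo multipath -ll              # should show one mpath device, two paths
```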
> > 
> > As for over-provisioning, just like in KVM you can over-provision the hell 
> > out of CPU, especially if the workload your end users will be doing is a 
> > known quantity. As for memory, I wouldn't even bother with memory 
> > ballooning and other provisioning tricks. Memory is so cheap that it's 
> > easier just to add a new hypervisor node once you need more RAM for the 
> > cluster. That, and you get more CPU to boot. A good rule of thumb is to 
> > never over-provision RAM... much happier end users :) 
> > On Oct 28, 2012 10:15 PM, "Outback Dingo" <outbackdingo@gmail.com> wrote: 
> > 
> >> On Sun, Oct 28, 2012 at 11:11 PM, Trevor Francis 
> >> <trevor.francis@tgrahamcapital.com> wrote: 
> >> > Good question. This is a private cloud for an application we have 
> >> > developed. We will have no actual "public" users installing OSes of 
> >> > varying ranges. 
> >> > 
> >> > That being said, Cent 6.3 64-bit is the only guest OS being deployed. 
> >> > It is also what I intend to deploy my NFS with. 
> >> > 
> >> 
> >> Then iSCSI would be a good choice if you have speed at the disk layer; 
> >> no sense slowing it down with NFS. 
> >> 
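
On XenServer, the iSCSI choice above maps to an lvmoiscsi SR. A hedged sketch of the `xe` workflow; the target IP, IQN, and SCSI ID below are all placeholders:

```shell
# Hypothetical values throughout: target IP, IQN, and SCSIid are made up.
# Probe first: run without targetIQN/SCSIid and XenServer reports the
# IQNs and LUN SCSI IDs the target exposes.
xe sr-probe type=lvmoiscsi device-config:target=10.0.1.10

# Then create the shared SR against a specific LUN:
xe sr-create name-label="iscsi-primary" type=lvmoiscsi shared=true \
  device-config:target=10.0.1.10 \
  device-config:targetIQN=iqn.2012-10.com.example:storage.lun0 \
  device-config:SCSIid=360a98000503357364a6f526d42743353
```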
> >> > Yes, I know that ZFS rocks and FreeBSD is the bees knees, but we know 
> >> > Cent and everything on our platform is standardized around that (short 
> >> > of XenServer hosts). Also, we don't need to take advantage of ZFS 
> >> > caching, as all of our deployed storage for guests is SSD anyway. 
> >> > 
> >> > Thanks! 
> >> > 
> >> > TGF 
> >> > 
> >> > 
> >> > 
> >> > 
> >> > On Oct 28, 2012, at 9:56 PM, Jason Davis <scr512@gmail.com> wrote: 
> >> > 
> >> >> Decent read: 
> >> >> http://lass.cs.umass.edu/papers/pdf/FAST04.pdf 
> >> >> 
> >> >> As far as CS + XenServer, I prefer NFS. Easier to manage, and thin 
> >> >> provisioning works from the get-go (which is super important, as 
> >> >> XenServer uses CoW (linked clones) iterations from the template you 
> >> >> use). By default, XS uses LVM over iSCSI, which can be confusing to 
> >> >> administer. That, and it doesn't thin provision... which sucks... 
> >> >> 
> >> >> In theory there are latency penalties with NFS (as mentioned in the 
> >> >> paper), but in a live deployment I never ran into this. 
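
The NFS SR Jason prefers can be sketched as follows; the server name and export path are made up. An NFS SR stores disks as VHD files, which are sparse, so thin provisioning works out of the box:

```shell
# Hypothetical server/path. VHDs on an NFS SR grow on demand, which is
# why thin provisioning "works from the get-go" here.
xe sr-create name-label="nfs-primary" type=nfs shared=true \
  content-type=user \
  device-config:server=storage01 \
  device-config:serverpath=/export/primary
```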
> >> >> On Oct 28, 2012 9:03 PM, "Trevor Francis" 
> >> >> <trevor.francis@tgrahamcapital.com> wrote: 
> >> >> 
> >> >>> I know this has been discussed on other forums with limited success 
> >> >>> in explaining which is best for a production environment, but could 
> >> >>> you cloudstackers weigh in on which storage technology would be best 
> >> >>> for both primary and secondary storage for VMs running on XenServer? 
> >> >>> Both are pretty trivial to set up, with NFS being the easiest. 
> >> >>> 
> >> >>> Thanks, 
> >> >>> 
> >> >>> Trevor Francis 
> >> >>> 
> >> >>> 
> >> >>> 
> >> > 
> >> 
> 

