Subject: Re: NFS vs iSCSI
From: Jason Davis
To: cloudstack-users@incubator.apache.org
Date: Sun, 28 Oct 2012 23:01:55 -0500

NFS failover is fine. I ran our cluster on Isilon storage, so we
load-balanced and could fail over stupid easy. In my experience with
XS/XCP, I found NFS much more pleasant to work with than the iSCSI I
ran against our EqualLogic array cluster. In any event, try both and
see which one you like best... in all honesty, with 10Gb/s Ethernet it
frankly doesn't matter which protocol you go with.
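If you want to compare the two side by side on XenServer, creating an
SR of each type is quick. A rough sketch from memory (the server name,
export path, target IP, and IQN below are placeholders for your own
values):

    # NFS SR: thin-provisioned VHD files on the share
    xe sr-create name-label="nfs-primary" shared=true type=nfs \
      device-config:server=nfs01.example.com \
      device-config:serverpath=/export/primary

    # iSCSI SR: LVM over iSCSI, the default XS block-storage type.
    # Probe first to discover the SCSIid of the LUN:
    xe sr-probe type=lvmoiscsi \
      device-config:target=192.168.10.20 \
      device-config:targetIQN=iqn.2012-10.com.example:lun0
    xe sr-create name-label="iscsi-primary" shared=true type=lvmoiscsi \
      device-config:target=192.168.10.20 \
      device-config:targetIQN=iqn.2012-10.com.example:lun0 \
      device-config:SCSIid=<SCSIid reported by the probe>

If memory serves, that lvmoiscsi probe "fails" on the first run and
spits back an XML blob listing the available LUNs and SCSIids, which
is part of why people find the iSCSI path confusing to administer.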
On Oct 28, 2012 10:53 PM, "Outback Dingo" wrote:

> On Sun, Oct 28, 2012 at 11:50 PM, Jason Davis wrote:
> > Like I was mentioning, for the cut in theoretical performance, you get
> > something much easier to administer. Plenty of really nice SSD/disk
> > arrays do NFS and are blazing fast.
>
> Not sure how you figure NFS is any easier to manage and support than
> iSCSI is... once it's configured, it just runs.
> And iSCSI has the potential to do failover; NFS v3 can't, really.
>
> > As for over-provisioning, just like in KVM you can over-provision the
> > hell out of CPU, especially if the workload your end users will be
> > doing is a known quantity. As for memory, I wouldn't even bother with
> > memory ballooning and other provisioning tricks. Memory is so cheap
> > that it's easier just to add a new hypervisor node once you need more
> > RAM for the cluster. That, and you get more CPU to boot. A good rule
> > of thumb is to never over-provision RAM... much happier end users :)
> > On Oct 28, 2012 10:15 PM, "Outback Dingo" wrote:
> >
> >> On Sun, Oct 28, 2012 at 11:11 PM, Trevor Francis wrote:
> >> > Good question. This is a private cloud for an application we have
> >> > developed. We will have no actual "public" users installing OSes of
> >> > varying types.
> >> >
> >> > That being said, CentOS 6.3 64-bit is the only guest OS being
> >> > deployed. It is also what I am intending to deploy my NFS on.
> >>
> >> Then iSCSI would be a good choice if you have speed at the disk
> >> layer; no sense slowing it down with NFS.
> >>
> >> > Yes, I know that ZFS rocks and FreeBSD is the bee's knees, but we
> >> > know CentOS, and everything on our platform is standardized around
> >> > that (short of the XenServer hosts). Also, we don't need to take
> >> > advantage of ZFS caching, as all of our deployed storage for guests
> >> > is SSD anyway.
> >> >
> >> > Thanks!
> >> >
> >> > TGF
> >> >
> >> > On Oct 28, 2012, at 9:56 PM, Jason Davis wrote:
> >> >
> >> >> Decent read:
> >> >> http://lass.cs.umass.edu/papers/pdf/FAST04.pdf
> >> >>
> >> >> As far as CS + XenServer, I prefer NFS. Easier to manage, and thin
> >> >> provisioning works from the get-go (which is super important, as
> >> >> XenServer uses CoW (linked-clone) iterations of the template you
> >> >> use). By default, XS uses LVM over iSCSI, which can be confusing
> >> >> to administer. That, and it doesn't thin provision... which
> >> >> sucks...
> >> >>
> >> >> In theory there are latency penalties with NFS (as mentioned in
> >> >> the paper), but in a live deployment I never ran into them.
> >> >> On Oct 28, 2012 9:03 PM, "Trevor Francis"
> >> >> <trevor.francis@tgrahamcapital.com> wrote:
> >> >>
> >> >>> I know this has been discussed on other forums with limited
> >> >>> success in explaining which is best for a production environment,
> >> >>> but could you cloudstackers weigh in on which storage technology
> >> >>> would be best for both primary and secondary storage for VMs
> >> >>> running on XenServer? Both are pretty trivial to set up, with NFS
> >> >>> being the easiest.
> >> >>>
> >> >>> Thanks,
> >> >>>
> >> >>> Trevor Francis
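P.S. On the thin-provisioning point quoted above: you can watch the
difference from the CLI by comparing what an SR has promised to guests
against what it actually consumes (another sketch; the UUID is a
placeholder for your own SR):

    xe sr-param-get uuid=<sr-uuid> param-name=virtual-allocation
    xe sr-param-get uuid=<sr-uuid> param-name=physical-utilisation

On an NFS SR, physical utilisation lags well behind virtual allocation
because the VHD files grow on demand; on a stock LVM-over-iSCSI SR,
each VDI's logical volume is carved out at full size up front, so the
two track each other.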