From: Ivan Rodriguez
To: cloudstack-users@incubator.apache.org
Date: Tue, 23 Oct 2012 11:09:17 +1100
Subject: Re: Primary Storage

Solaris 11 ZFS, and yes, we tried different setups: RAID levels, number
of SSD cache devices, ARC ZFS options, etc.

Cheers

On Tue, Oct 23, 2012 at 11:05 AM, Outback Dingo wrote:
> On Mon, Oct 22, 2012 at 8:03 PM, Ivan Rodriguez wrote:
>> We are using ZFS with JBOD, not in production yet, exporting NFS to
>> CloudStack. I'm not really happy about the performance, but I think it
>> is related to the hardware itself rather than the technology; we are
>> using Intel SR2625UR servers and Intel 320 SSDs. We were evaluating
>> Gluster as well, but we decided to move away from that path, since
>> Gluster NFS is still performing poorly, plus we would like to see
>> CloudStack integrate the gluster-fuse module. We haven't decided on
>> the final storage setup, but at the moment we have had better results
>> with ZFS.
>
> The question is whose ZFS, and have you "tweaked" the ZFS/NFS config
> for performance?
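For concreteness, the kind of setup being discussed (a JBOD pool with
SSD log/cache devices, ARC-related tunables, and an NFS export) might
look roughly like this. It is a sketch only: the device names are
invented rather than taken from Ivan's hardware, the tunable values are
starting points to test against the workload, and the exact NFS share
property differs between ZFS implementations.

    # Sketch: raidz2 pool over JBOD disks, with mirrored log (ZIL)
    # and cache (L2ARC) SSDs.  Device names are hypothetical.
    zpool create tank \
        raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
        log mirror c0t7d0 c0t8d0 \
        cache c0t9d0 c0t10d0

    # Common first-pass tunables when NFS performance disappoints;
    # treat these as things to measure, not defaults to copy.
    zfs set atime=off tank
    zfs set compression=on tank
    zfs set recordsize=16K tank   # e.g. match the guest image I/O size
    zfs set sharenfs=on tank      # share property name varies by ZFS version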
>> On Tue, Oct 23, 2012 at 10:44 AM, Nik Martin wrote:
>>> On 10/22/2012 05:49 PM, Trevor Francis wrote:
>>>> ZFS looks really interesting to me, and I am leaning that way. I am
>>>> considering using FreeNAS, as people seem to be having good luck
>>>> with it. Can anyone weigh in here?
>>>
>>> My personal opinion: I think FreeNAS and OpenFiler have horrible,
>>> horrible user interfaces - not very intuitive, and they both seem to
>>> be file servers with things like iSCSI targets tacked on as an
>>> afterthought.
>>>
>>> Nik
>>>
>>>> Trevor Francis
>>>> Partner
>>>> 46 Labs | The PeerEdge Cloud
>>>> http://www.46labs.com | http://www.peeredge.net
>>>> 405-362-0046 - Voice | 405-410-4980 - Cell
>>>> trevorgfrancis - Skype
>>>> trevor@46labs.com
>>>> Solutions Provider for the Telecom Industry
>>>> http://www.twitter.com/peeredge | http://www.facebook.com/PeerEdge
>>>>
>>>> On Oct 22, 2012, at 2:30 PM, Jason Davis wrote:
>>>>
>>>>> ZFS would be an interesting setup, as you can do the cache pools
>>>>> like you would in CacheCade. The problem with ZFS or CacheCade+DRBD
>>>>> is that they really don't scale out well if you are looking for
>>>>> something with a unified namespace. I'll say, however, that ZFS is
>>>>> a battle-hardened FS with tons of shops using it. A lot of the
>>>>> whiz-bang SSD+SATA disk SAN things these smaller start-up companies
>>>>> are hawking are just ZFS appliances.
>>>>>
>>>>> RBD looks interesting, but I'm not sure I would be willing to put
>>>>> production data on it; I'm not sure how performant it is in real
>>>>> life. From a purely technical perspective, it looks REALLY cool.
>>>>>
>>>>> I suppose anything is fast if you put SSDs in it :) GlusterFS is
>>>>> another option, although historically small/random IO has not been
>>>>> its strong point.
>>>>>
>>>>> If you are OK spending money on software and want scale-out block
>>>>> storage, then you might want to consider HP LeftHand's VSA product.
>>>>> I am personally partial to NFS plays :) I went the exact opposite
>>>>> approach and settled on Isilon for the primary storage in our CS
>>>>> deployment.
>>>>>
>>>>> On Mon, Oct 22, 2012 at 10:24 AM, Nik Martin wrote:
>>>>>
>>>>>> On 10/22/2012 10:16 AM, Trevor Francis wrote:
>>>>>>
>>>>>>> We are looking at building a Primary Storage solution for an
>>>>>>> enterprise/carrier-class application. However, we want to build
>>>>>>> it using a FOSS solution and not a commercial solution. Do you
>>>>>>> have a recommendation on platform?
>>>>>>
>>>>>> Trevor,
>>>>>>
>>>>>> I got EXCELLENT results building a SAN from FOSS using:
>>>>>>
>>>>>> OS: CentOS
>>>>>> Hardware: 2x storage servers, each with 12x 2TB 3.5" SATA drives;
>>>>>> LSI MegaRAID with CacheCade Pro and 240 GB Intel 520 SSDs
>>>>>> configured for SSD caching (alternately, look at FlashCache from
>>>>>> Facebook); Intel 10 GbE dual-port NICs, one port for crossover,
>>>>>> one port for uplink to the storage network.
>>>>>> DRBD for real-time block replication, active-active
>>>>>> Pacemaker+Corosync for HA resource management
>>>>>> tgtd for the iSCSI target
>>>>>>
>>>>>> If you want file-backed storage, XFS is a very good filesystem on
>>>>>> Linux now.
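A rough sketch of how the DRBD and iSCSI pieces of a build like Nik's
fit together is below. The hostnames, backing disks, and addresses are
invented for illustration, and this is generic DRBD 8.3-era config
syntax, not his actual configuration.

    # /etc/drbd.d/r0.res -- two-node resource replicating the RAID
    # volume in real time over the crossover link
    resource r0 {
        protocol C;                  # synchronous replication
        net {
            allow-two-primaries;     # required for active-active
        }
        on san1 {
            device    /dev/drbd0;
            disk      /dev/sdb;      # hypothetical RAID volume
            address   10.0.0.1:7789; # crossover port
            meta-disk internal;
        }
        on san2 {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }

    # /etc/tgt/targets.conf -- expose the replicated device as an
    # iSCSI LUN via tgtd (IQN is made up)
    <target iqn.2012-10.lab.example:san.lun0>
        backing-store /dev/drbd0
    </target>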
>>>>>> Pacemaker+Corosync can be difficult to grok at the beginning, but
>>>>>> that setup gave me a VERY high-performance SAN. The downside is
>>>>>> that it is entirely managed by CLI, with no UI whatsoever.
>>>>>>
>>>>>> Nik
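Since the stack is managed entirely from the CLI, the Pacemaker side
would be driven through the crm shell, roughly as follows. The resource
names are invented, and the dual-primary meta attributes assume the
active-active DRBD resource sketched above.

    # Manage the DRBD resource as a master/slave set with both nodes
    # promoted (dual-primary), and run tgtd on both nodes.
    crm configure primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="15s" role="Master" \
        op monitor interval="30s" role="Slave"
    crm configure ms ms_drbd_r0 p_drbd_r0 \
        meta master-max="2" clone-max="2" notify="true"
    crm configure primitive p_tgtd lsb:tgtd \
        op monitor interval="30s"
    crm configure clone cl_tgtd p_tgtd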