From: Nik Martin <nik.martin@nfinausa.com>
Organization: Nfina Technologies, Inc.
Date: Mon, 22 Oct 2012 18:44:34 -0500
To: cloudstack-users@incubator.apache.org
Subject: Re: Primary Storage

On 10/22/2012 05:49 PM, Trevor Francis wrote:
> ZFS looks really interesting to me and I am leaning that way. I am
> considering using FreeNAS, as people seem to be having good luck with
> it. Can anyone weigh in here?
>

My personal opinion: I think FreeNAS and OpenFiler have horrible, horrible user interfaces - not very intuitive - and they both seem to be file servers with things like iSCSI targets tacked on as an afterthought.
Nik

> Trevor Francis
> Partner
> 46 Labs | The PeerEdge Cloud
> http://www.46labs.com | http://www.peeredge.net
>
> 405-362-0046 - Voice | 405-410-4980 - Cell
> trevorgfrancis - Skype
> trevor@46labs.com
> Solutions Provider for the Telecom Industry
>
> On Oct 22, 2012, at 2:30 PM, Jason Davis wrote:
>
>> ZFS would be an interesting setup as you can do cache pools like you
>> would do in CacheCade (a zpool sketch follows at the end of this
>> message). The problem with ZFS or CacheCade+DRBD is that they really
>> don't scale out well if you are looking for something with a unified
>> namespace. I'll say, however, that ZFS is a battle-hardened FS with
>> tons of shops using it. A lot of the whiz-bang SSD+SATA disk SAN
>> things these smaller start-up companies are hawking are just ZFS
>> appliances.
>>
>> RBD looks interesting, but I'm not sure I would be willing to put
>> production data on it; I'm not sure how performant it is IRL. From a
>> purely technical perspective, it looks REALLY cool.
>>
>> I suppose anything is fast if you put SSDs in it :) GlusterFS is
>> another option, although historically small/random IO has not been
>> its strong point.
>>
>> If you are OK spending money on software and want scale-out block
>> storage, then you might want to consider HP LeftHand's VSA product. I
>> am personally partial to NFS plays :) I went the exact opposite
>> approach and settled on Isilon for our primary storage for our CS
>> deployment.
>>
>> On Mon, Oct 22, 2012 at 10:24 AM, Nik Martin <nik.martin@nfinausa.com> wrote:
>>
>>> On 10/22/2012 10:16 AM, Trevor Francis wrote:
>>>
>>>> We are looking at building a Primary Storage solution for an
>>>> enterprise/carrier-class application. However, we want to build it
>>>> using a FOSS solution and not a commercial solution. Do you have a
>>>> recommendation on platform?
>>>>
>>> Trevor,
>>>
>>> I got EXCELLENT results building a SAN from FOSS using:
>>>
>>> OS: CentOS
>>> Hardware: 2x storage servers, each with 12x 2TB 3.5" SATA drives; LSI
>>> MegaRAID with CacheCade Pro, with 240 GB Intel 520 SSDs configured to
>>> do SSD caching (alternately, look at FlashCache from Facebook); Intel
>>> 10Gb dual-port NICs, one port for crossover, one port for uplink to
>>> the storage network
>>>
>>> DRBD for real-time block replication, active-active
>>> Pacemaker+Corosync for HA resource management
>>> tgtd for the iSCSI target
>>> (configuration sketches for this stack follow at the end of this message)
>>>
>>> If you want file-backed storage, XFS is a very good filesystem on
>>> Linux now.
>>>
>>> Pacemaker+Corosync can be difficult to grok at the beginning, but
>>> that setup gave me a VERY high-performance SAN. The downside is that
>>> it is entirely managed by CLI, no UI whatsoever.
>>>
>>>> Trevor Francis
>>>> Partner
>>>> 46 Labs | The PeerEdge Cloud
>>>> http://www.46labs.com | http://www.peeredge.net
>>>>
>>>> 405-362-0046 - Voice | 405-410-4980 - Cell
>>>> trevorgfrancis - Skype
>>>> trevor@46labs.com
>>>> Solutions Provider for the Telecom Industry
>>>>
>>>> http://www.twitter.com/peeredge | http://www.facebook.com/PeerEdge
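
A minimal sketch of the CacheCade-style ZFS cache pool Jason mentions above: an SSD fronting slower SATA disks. The pool name and device paths are assumptions, not from the thread:

  # create a pool from the SATA disks (device names are assumptions)
  zpool create tank mirror sdb sdc mirror sdd sde

  # add an SSD as a read cache (L2ARC) - the CacheCade-like piece
  zpool add tank cache sdf

  # optionally, mirrored SSD log devices (ZIL) to absorb synchronous writes
  zpool add tank log mirror sdg sdh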
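
A minimal sketch of a dual-primary DRBD resource for the active-active replication Nik describes, in DRBD 8.3-era syntax; the hostnames, IPs, and disk paths are assumptions:

  resource r0 {
    protocol C;                             # synchronous replication
    startup {
      become-primary-on both;               # dual-primary for active-active
    }
    net {
      allow-two-primaries;
      after-sb-0pri discard-zero-changes;   # split-brain recovery policies
      after-sb-1pri discard-secondary;
    }
    on san1 {                               # hostname is an assumption
      device    /dev/drbd0;
      disk      /dev/sdb1;                  # the CacheCade-backed RAID volume
      address   10.0.0.1:7788;              # IP on the crossover link
      meta-disk internal;
    }
    on san2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }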
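
And a sketch of the Pacemaker side, tying DRBD, a floating IP, and a tgt-backed iSCSI target together via the crm shell; the IQN, IP, and resource names are assumptions:

  # crm configure
  primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="15s" role="Master" \
    op monitor interval="30s" role="Slave"
  ms ms_drbd_r0 p_drbd_r0 \
    meta master-max="2" clone-max="2" notify="true" interleave="true"

  primitive p_ip ocf:heartbeat:IPaddr2 \
    params ip="10.0.0.10" cidr_netmask="24"
  primitive p_target ocf:heartbeat:iSCSITarget \
    params implementation="tgt" iqn="iqn.2012-10.com.example:san.r0"
  primitive p_lun ocf:heartbeat:iSCSILogicalUnit \
    params target_iqn="iqn.2012-10.com.example:san.r0" lun="1" path="/dev/drbd0"

  # the IP, target, and LUN move as a unit, and only where DRBD is Primary
  group g_iscsi p_ip p_target p_lun
  colocation c_iscsi_with_drbd inf: g_iscsi ms_drbd_r0:Master
  order o_drbd_first inf: ms_drbd_r0:promote g_iscsi:start

Even with dual-primary DRBD, the initiator-facing pieces in this sketch run on one node at a time and fail over together.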