cloudstack-users mailing list archives

From Ivan Rodriguez <ivan...@gmail.com>
Subject Re: Primary Storage
Date Tue, 23 Oct 2012 00:03:39 GMT
We are using ZFS with JBOD (not in production yet), exporting NFS to
CloudStack. I'm not really happy about the performance, but I think that is
related to the hardware itself rather than the technology; we are using Intel
SR2625UR servers and Intel 320 SSDs. We were evaluating Gluster as well, but
we decided to move away from that path since Gluster NFS is still performing
poorly, plus we would like to see CloudStack integrate the gluster-fuse
module. We haven't decided on the final storage setup, but at the moment we
have had better results with ZFS.
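For anyone curious, the kind of setup described above can be sketched roughly as follows. Pool, dataset, and device names are placeholders, not taken from the thread, and `sharenfs` option syntax varies between ZFS-on-Linux and Solaris-derived platforms:

```shell
# Sketch only - pool, dataset, and device names are hypothetical.
# Build a ZFS pool directly on JBOD disks (no hardware RAID):
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Use SSDs (e.g. the Intel 320s) to accelerate the pool:
zpool add tank cache /dev/sdf   # L2ARC read cache
zpool add tank log /dev/sdg     # SLOG, speeds up synchronous NFS writes

# Create a dataset and export it over NFS for CloudStack primary storage:
zfs create tank/primary
zfs set sharenfs=on tank/primary
```

The SLOG device matters here because NFS clients issue synchronous writes by default, which is often where NFS-on-ZFS performance complaints come from.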



On Tue, Oct 23, 2012 at 10:44 AM, Nik Martin <nik.martin@nfinausa.com> wrote:

> On 10/22/2012 05:49 PM, Trevor Francis wrote:
>
>> ZFS looks really interesting to me and I am leaning that way. I am
>> considering using FreeNAS, as people seem to be having good luck with
>> it. Can anyone weigh in here?
>>
>>
> In my personal opinion, FreeNAS and OpenFiler have horrible, horrible
> user interfaces - not very intuitive - and they both seem to be file servers
> with things like iSCSI targets tacked on as an afterthought.
>
> Nik
>
>
>> Trevor Francis
>> Partner
>> 46 Labs | The PeerEdge Cloud
>> http://www.46labs.com | http://www.peeredge.net
>> 405-362-0046 - Voice | 405-410-4980 - Cell
>> trevorgfrancis - Skype
>> trevor@46labs.com
>> Solutions Provider for the Telecom Industry
>> http://www.twitter.com/peeredge | http://www.facebook.com/PeerEdge
>>
>> On Oct 22, 2012, at 2:30 PM, Jason Davis wrote:
>>
>>> ZFS would be an interesting setup, as you can do cache pools like you
>>> would in CacheCade. The problem with ZFS or CacheCade+DRBD is that they
>>> really don't scale out well if you are looking for something with a
>>> unified namespace. I'll say, however, that ZFS is a battle-hardened FS
>>> with tons of shops using it. A lot of the whiz-bang SSD+SATA disk SAN
>>> things these smaller start-up companies are hawking are just ZFS
>>> appliances.
>>>
>>> RBD looks interesting, but I'm not sure I would be willing to put
>>> production data on it; I'm not sure how performant it is in real life.
>>> From a purely technical perspective, it looks REALLY cool.
>>>
>>> I suppose anything is fast if you put SSDs in it :) GlusterFS is another
>>> option, although historically small/random IO has not been its strong
>>> point.
>>>
>>> If you are OK spending money on software and want scale-out block
>>> storage, then you might want to consider HP LeftHand's VSA product. I am
>>> personally partial to NFS plays :) I took the exact opposite approach and
>>> settled on Isilon for our primary storage for our CS deployment.
>>>
>>>
>>>
>>>
>>> On Mon, Oct 22, 2012 at 10:24 AM, Nik Martin <nik.martin@nfinausa.com> wrote:
>>>
>>>  On 10/22/2012 10:16 AM, Trevor Francis wrote:
>>>>
>>>>> We are looking at building a Primary Storage solution for an
>>>>> enterprise/carrier-class application. However, we want to build it
>>>>> using a FOSS solution and not a commercial solution. Do you have a
>>>>> recommendation on platform?
>>>>>
>>>>>
>>>> Trevor,
>>>> I got EXCELLENT results building a SAN from FOSS using:
>>>>
>>>> OS: CentOS
>>>> Hardware: 2x storage servers, each with 12x 2TB 3.5" SATA drives; LSI
>>>> MegaRAID with CacheCade Pro, with 240 GB Intel 520 SSDs configured to
>>>> do SSD caching (alternately, look at FlashCache from Facebook); Intel
>>>> 10Gb dual-port NICs, one port for crossover, one port for uplink to
>>>> the storage network.
>>>>
>>>> DRBD for real-time block replication, active-active
>>>> Pacemaker+Corosync for HA resource management
>>>> tgtd for the iSCSI target
>>>>
>>>> If you want file-backed storage, XFS is a very good filesystem on
>>>> Linux now.
>>>>
>>>> Pacemaker+Corosync can be difficult to grok at the beginning, but that
>>>> setup gave me a VERY high-performance SAN. The downside is that it is
>>>> entirely managed by CLI, with no UI whatsoever.
>>>>
>>>>
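A rough sketch of how the DRBD and tgtd pieces of a setup like Nik describes might fit together. All hostnames, IPs, device paths, and the IQN below are placeholders, not details from the thread:

```shell
# Hypothetical DRBD resource definition, /etc/drbd.d/r0.res on both nodes:
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
  protocol C;              # synchronous replication between the two servers
  device    /dev/drbd0;
  disk      /dev/sdb;      # the CacheCade-backed RAID volume
  meta-disk internal;
  net { allow-two-primaries; }   # required for an active-active pair
  on san1 { address 10.0.0.1:7788; }
  on san2 { address 10.0.0.2:7788; }
}
EOF

# Export the replicated device as an iSCSI LUN via tgtd:
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2012-10.com.example:san.r0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/drbd0
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```

In a real deployment, Pacemaker+Corosync would then manage DRBD promotion and the iSCSI target as cluster resources rather than having them started by hand.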
