cloudstack-users mailing list archives

From Fabrice Brazier <fabrice.braz...@apalia.net>
Subject RE: Primary Storage
Date Tue, 23 Oct 2012 07:55:15 GMT
Hi Andreas,

I just saw your configuration; it seems quite interesting.
If I understand correctly, you want to build a ZFS array on the backend,
export LUNs (probably via iSCSI over InfiniBand) to your Linux cluster,
and run GlusterFS on top of that Linux cluster.
I can see the point: with that you can get very good performance and
reliability (ZFS), plus scalability and redundancy (Gluster), for a very
low cost.
So just one question: did you try the global namespace implementation
from Nexenta? If so, can you tell me which configuration works best for
you? I mean, the fact that you have a Gluster cluster in the middle must
impact the overall performance, no?

Fabrice

-----Original Message-----
From: Andreas Huser [mailto:ahuser@7five-edv.de]
Sent: Tuesday, 23 October 2012 05:40
To: cloudstack-users@incubator.apache.org
Subject: Re: Primary Storage

Hi,

For CloudStack I use Solaris 11 ZFS + GlusterFS over InfiniBand (RDMA).
That gives the best performance and the most scalable storage.
I have tested several different solutions for primary storage, but most
are too expensive and not economical for a CloudStack cluster, or they
have poor performance.

My Configuration:
Storage Node:
Supermicro server (Intel hardware) running Solaris 11, with SSD read and
write cache (read: Crucial m4; write: ZeusIOPS), GlusterFS, and a
dual-port ConnectX 40 Gbit/s InfiniBand adapter.
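
As an illustration, such a cache layout looks roughly like the following
in ZFS terms (pool name and device names are hypothetical; the actual
pool layout was not given):

    # hypothetical pool with SSD read cache (L2ARC) and write log (SLOG)
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
    zpool add tank cache c0t4d0   # read-cache SSD, e.g. Crucial m4
    zpool add tank log c0t5d0     # write-log SSD, e.g. ZeusIOPS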

I have installed GlusterFS directly on Solaris with modified code.
If you want to build bigger systems for more than 50 VMs, it is better
to split Solaris and GlusterFS, with a separate head node for GlusterFS.

That looks like:
Solaris ZFS backend storage with a dataset volume (thin provisioned)
--> (SRP target attached directly, without an InfiniBand switch, to the
GlusterFS node)
--> GlusterFS node: the SRP target is formatted with an XFS filesystem
and a GlusterFS volume is created on it
--> (InfiniBand over a Mellanox port switch)
--> CloudStack node: mount the GlusterFS volume over RDMA
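
A minimal sketch of that chain, assuming hypothetical pool, brick, and
host names (the COMSTAR SRP target setup on Solaris is omitted):

    # Solaris storage node: thin-provisioned zvol as the backing store
    zfs create -s -V 2T tank/brick1
    stmfadm create-lu /dev/zvol/rdsk/tank/brick1

    # GlusterFS node: format the imported SRP LUN with XFS
    mkfs.xfs /dev/sdb                  # hypothetical device of the SRP LUN
    mount /dev/sdb /bricks/brick1

    # create and start a Gluster volume with RDMA transport
    gluster volume create gv0 transport rdma gfnode1:/bricks/brick1
    gluster volume start gv0

    # CloudStack node: mount the volume over RDMA
    mount -t glusterfs -o transport=rdma gfnode1:/gv0 /mnt/primary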

For the dataset volume on the ZFS storage, disable atime and enable
compression (space reclaim). With compression you can shrink the ZFS
volume by zero-filling free space: on Linux with dd from /dev/zero, or
in a Windows VM with sdelete. That gives you space back on the primary
storage for files deleted inside a VM, or for VHDs and VMs deleted in
CloudStack.
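
For example (dataset names hypothetical):

    # on the Solaris node: tune the dataset and the volume
    zfs set atime=off tank
    zfs set compression=on tank/brick1

    # inside a Linux VM: zero out free space so the volume shrinks
    dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile

    # inside a Windows VM, the equivalent is:
    sdelete -z c: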

Greetings, Andreas




Kind regards

Andreas Huser
Geschäftsführer
System Engineer / Consultant
(Cisco CSE, SMBAM, LCSE, ASAM)
---------------------------------------
Zellerstraße 28 - 77654 Offenburg
Tel:     +49(781) 12786898
Mobil: +49(176) 10308549
ahuser@7five-edv.de




----- Original Message -----

From: "Outback Dingo" <outbackdingo@gmail.com>
To: cloudstack-users@incubator.apache.org
Sent: Tuesday, 23 October 2012 02:15:16
Subject: Re: Primary Storage

On Mon, Oct 22, 2012 at 8:09 PM, Ivan Rodriguez <ivanoch@gmail.com> wrote:
> Solaris 11 ZFS, and yes, we tried different setups: RAID levels, number
> of SSD cache devices, ZFS ARC options, etc.
>
> Cheers
>

VMWare ??

> On Tue, Oct 23, 2012 at 11:05 AM, Outback Dingo
> <outbackdingo@gmail.com>wrote:
>
>> On Mon, Oct 22, 2012 at 8:03 PM, Ivan Rodriguez <ivanoch@gmail.com>
>> wrote:
>> > We are using ZFS with JBOD, not in production yet, exporting NFS to
>> > CloudStack. I'm not really happy about the performance, but I think
>> > it is related to the hardware itself rather than the technology; we
>> > are using Intel SR2625UR and Intel 320 SSDs. We were evaluating
>> > Gluster as well, but we decided to move away from that path since
>> > Gluster NFS is still performing poorly. Plus, we would like to see
>> > CloudStack integrating the gluster-fuse module. We haven't decided on
>> > the final storage setup, but at the moment we have had better results
>> > with ZFS.
>> >
>> >
>>
>> The question is: whose ZFS? And have you "tweaked" the ZFS/NFS config
>> for performance?
>>
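>> For context, typical ZFS/NFS tweaks look like the following (a sketch
>> only; the dataset name is hypothetical):
>>
>>     zfs set atime=off tank/nfs        # skip access-time updates
>>     zfs set recordsize=64K tank/nfs   # closer to VM-image IO sizes
>>     zfs set sharenfs=on tank/nfs      # export the dataset over NFS
>>
>> plus, ideally, a dedicated SSD log device (SLOG) for the synchronous
>> writes that NFS generates.
>>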
>> >
>> > On Tue, Oct 23, 2012 at 10:44 AM, Nik Martin <nik.martin@nfinausa.com
>> >wrote:
>> >
>> >> On 10/22/2012 05:49 PM, Trevor Francis wrote:
>> >>
>> >>> ZFS looks really interesting to me and I am leaning that way. I am
>> >>> considering using FreeNAS, as people seem to be having good luck with
>> >>> it. Can anyone weigh in here?
>> >>>
>> >>>
>> >> My personal opinion: I think FreeNAS and OpenFiler have horrible,
>> >> horrible user interfaces - not very intuitive - and they both seem
>> >> to be file servers with things like iSCSI targets tacked on as an
>> >> afterthought.
>> >>
>> >> Nik
>> >>
>> >>
>> >>> Trevor Francis
>> >>> Partner
>> >>> 46 Labs | The PeerEdge Cloud
>> >>> http://www.46labs.com | http://www.peeredge.net
>> >>> 405-362-0046 - Voice | 405-410-4980 - Cell
>> >>> trevorgfrancis - Skype
>> >>> trevor@46labs.com
>> >>> Solutions Provider for the Telecom Industry
>> >>>
>> >>> <http://www.twitter.com/peeredge> <http://www.facebook.com/PeerEdge>
>> >>>
>> >>> On Oct 22, 2012, at 2:30 PM, Jason Davis wrote:
>> >>>
>> >>>> ZFS would be an interesting setup as you can do the cache pools
>> >>>> like you would do in CacheCade. The problem with ZFS or
>> >>>> CacheCade+DRBD is that they really don't scale out well if you are
>> >>>> looking for something with a unified namespace. I'll say however
>> >>>> that ZFS is a battle-hardened FS with tons of shops using it. A
>> >>>> lot of the whiz-bang SSD+SATA disk SAN things these smaller
>> >>>> start-up companies are hawking are just ZFS appliances.
>> >>>>
>> >>>> RBD looks interesting, but I'm not sure if I would be willing to
>> >>>> put production data on it; I'm not sure how performant it is IRL.
>> >>>> From a purely technical perspective, it looks REALLY cool.
>> >>>>
>> >>>> I suppose anything is fast if you put SSDs in it :) GlusterFS is
>> >>>> another option, although historically small/random IO has not been
>> >>>> its strong point.
>> >>>>
>> >>>> If you are OK spending money on software and want scale-out block
>> >>>> storage, then you might want to consider HP LeftHand's VSA
>> >>>> product. I am personally partial to NFS plays :) I went the exact
>> >>>> opposite approach and settled on Isilon for our primary storage
>> >>>> for our CS deployment.
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> On Mon, Oct 22, 2012 at 10:24 AM, Nik Martin
>> >>>> <nik.martin@nfinausa.com> wrote:
>> >>>>
>> >>>> On 10/22/2012 10:16 AM, Trevor Francis wrote:
>> >>>>>
>> >>>>>> We are looking at building a Primary Storage solution for an
>> >>>>>> enterprise/carrier-class application. However, we want to build
>> >>>>>> it using a FOSS solution and not a commercial solution. Do you
>> >>>>>> have a recommendation on platform?
>> >>>>>>
>> >>>>> Trevor,
>> >>>>>
>> >>>>> I got EXCELLENT results building a SAN from FOSS using:
>> >>>>> OS: CentOS
>> >>>>> Hardware: 2x storage servers with 12x 2TB 3.5" SATA drives; LSI
>> >>>>> MegaRAID with CacheCade Pro, with 240 GB Intel 520 SSDs configured
>> >>>>> to do SSD caching (alternately, look at FlashCache from Facebook);
>> >>>>> Intel 10Gb dual-port NICs, one port for crossover, one port for
>> >>>>> uplink to the storage network
>> >>>>>
>> >>>>> DRBD for real-time block replication, active-active
>> >>>>> Pacemaker+Corosync for HA resource management
>> >>>>> tgtd for the iSCSI target
>> >>>>>
>> >>>>> If you want file-backed storage, XFS is a very good filesystem on
>> >>>>> Linux now.
>> >>>>>
>> >>>>> Pacemaker+Corosync can be difficult to grok at the beginning, but
>> >>>>> that setup gave me a VERY high-performance SAN. The downside is
>> >>>>> that it is entirely managed by CLI, no UI whatsoever.
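>> >>>>>
>> >>>>> For illustration, a minimal sketch of such a DRBD/tgtd pair
>> >>>>> (hostnames, addresses, and devices here are hypothetical):
>> >>>>>
>> >>>>>     # /etc/drbd.d/r0.res
>> >>>>>     resource r0 {
>> >>>>>       protocol C;                  # synchronous replication
>> >>>>>       device    /dev/drbd0;
>> >>>>>       disk      /dev/sdb;          # CacheCade-backed RAID volume
>> >>>>>       meta-disk internal;
>> >>>>>       net { allow-two-primaries; } # needed for active-active
>> >>>>>       on san1 { address 10.0.0.1:7789; }
>> >>>>>       on san2 { address 10.0.0.2:7789; }
>> >>>>>     }
>> >>>>>
>> >>>>>     # export the replicated device via tgtd
>> >>>>>     tgtadm --lld iscsi --mode target --op new --tid 1 \
>> >>>>>       --targetname iqn.2012-10.com.example:san.lun0
>> >>>>>     tgtadm --lld iscsi --mode logicalunit --op new --tid 1 \
>> >>>>>       --lun 1 --backing-store /dev/drbd0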
>> >>>>>
>> >>>>>
>> >>>>> Trevor Francis
>> >>>>>> Partner
>> >>>>>> 46 Labs | The PeerEdge Cloud
>> >>>>>> http://www.46labs.com | http://www.peeredge.net
>> >>>>>>
>> >>>>>> 405-362-0046 - Voice | 405-410-4980 - Cell
>> >>>>>> trevorgfrancis - Skype
>> >>>>>> trevor@46labs.com
>> >>>>>>
>> >>>>>>
>> >>>>>> Solutions Provider for the Telecom Industry
>> >>>>>>
>> >>>>>> <http://www.twitter.com/peeredge> <http://www.facebook.com/PeerEdge>
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>
>>
