cloudstack-users mailing list archives

From Mads Nordholm <m...@nordholm.dk>
Subject Re: Hardware question
Date Tue, 03 Mar 2015 14:35:32 GMT
I realise that this discussion has gone a bit off topic, but since my
concerns regarding how to set up storage also greatly influence what
hardware I will end up buying, it's not entirely off topic IMHO.

In any event, I greatly appreciate all the input on different storage
setups, and I also realise that I will have to make up my own mind at some
point, since all of you have different experiences with different setups.
Not an easy choice at all...

--
Mads Nordholm

On Tue, Mar 3, 2015 at 9:27 PM, Andrija Panic <andrija.panic@gmail.com>
wrote:

> I just had one HDD die in the CEPH cluster, and during the rebuild/re-healing
> of the cluster, another disk COMPLETELY died - missing from the system.
>
> This is what sometimes happens with RAID5 - so avoid RAID5 for sure.
>
> I'll be going with 6 HDDs in RAID6 (RAIDZ2) actually, with an SSD for ZIL/L2ARC
> (write cache, and level 2 read cache, for translation :) ).
> Compression on top of ZFS works miracles in my testing.
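
For reference, a minimal sketch of what such a pool can look like on ZFS; the
pool name "tank", the data disks da1-da6 and the SSD partitions ada0p1/ada0p2
are placeholders:

    zpool create tank raidz2 da1 da2 da3 da4 da5 da6   # 6-disk RAIDZ2 (double parity)
    zpool add tank log ada0p1                          # SSD partition as ZIL/SLOG (sync write log)
    zpool add tank cache ada0p2                        # SSD partition as L2ARC (second-level read cache)
    zfs set compression=lz4 tank                       # the compression mentioned above
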
>
> SSDs also die from time to time (I'm not talking here about too many
> writes and wearing out; they simply just die sometimes, completely), so
> again avoid RAID5. RAID10 seems too expensive in my opinion, but it is the
> best - RAID6 seems like the middle ground, enough parity/security on the data,
> and more than enough speed (I once tested RAID 0 over 6 x 1TB SSDs, man
> that works like crazy.... :D )
>
> Also, be VERY specific on SSDs - the Intel S3500 or S3700, although enterprise
> drives, are as slow as crap in the 100/120GB models I've had (the smallest
> capacity; bigger ones might be better, I guess)... Sequential speed was less
> than sequential speed on HDDs, etc. etc...
>
> my 2 cents
>
>
>
> On 3 March 2015 at 15:17, Tomasz Chendynski <tomasz_chendynski@polcom.com.pl>
> wrote:
>
> > Hi Mads,
> > Please see this article, a bit old now:
> > http://www.infostor.com/disk-arrays/skyera-raid-5-kills-ssd-arrays.html
> >
> > I think you should look for AFA solutions (PureStorage - our T0 storage)
> > with inline deduplication and compression.
> > I think that RAID 6 is a bad idea.
> >
> > Tomek
> >
> >
> > On 2015-03-03 at 14:20, Mads Nordholm wrote:
> >
> >> Very useful input indeed. I think I might end up going with a more
> >> conventional setup for starters, and then play with CEPH on the side. And
> >> that then leads to another question: Does anybody have some input on what
> >> RAID level to use for a more conventional storage setup? I am looking at
> >> deploying a setup that exclusively uses SSD, so I am probably a bit more
> >> interested in getting as many usable GBs as possible than I am in
> >> optimising I/O.
> >>
> >> So far, I have been hearing people advocating RAID 10 as well as RAID 6. I
> >> am personally leaning towards RAID 6, but I would love to get some input
> >> from someone with more experience using these different RAID levels in
> >> production.
> >>
> >> --
> >> Mads Nordholm
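
On the "usable GBs" part of the question, the raw capacity difference between
the two levels is simple arithmetic; 8 x 1 TB drives are used here purely as an
illustration:

    RAID 10: 8 TB / 2       = 4 TB usable  (mirroring always costs 50%)
    RAID 6:  (8 - 2) x 1 TB = 6 TB usable  (two disks of parity, whatever the disk count)

The gap widens as disks are added to a RAID 6 set, which is why it tends to win
on capacity; RAID 10 wins on rebuild time and random-write behaviour.
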
> >>
> >> On Tue, Mar 3, 2015 at 7:34 PM, Vadim Kimlaychuk <Vadim.Kimlaychuk@elion.ee>
> >> wrote:
> >>
> >>> Andrija,
> >>>
> >>>          This is my choice already -- FreeBSD + ZFS with SSD for ZIL/L2ARC
> >>> cache + NFS. Going to be in production within a couple of weeks. You have
> >>> read my thoughts! :)
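
A minimal sketch of exporting such a dataset over NFS for CloudStack primary
storage on FreeBSD; the pool/dataset names are placeholders, and the export
options (e.g. -maproot=root so the KVM hosts get root access) are assumptions
that depend on the environment:

    # /etc/rc.conf - enable the NFS server
    nfs_server_enable="YES"
    mountd_enable="YES"

    # dedicated dataset for primary storage, shared via the sharenfs property
    zfs create tank/primary
    zfs set sharenfs="-maproot=root -network 10.0.0.0/24" tank/primary
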
> >>>
> >>> Vadim.
> >>>
> >>> -----Original Message-----
> >>> From: Andrija Panic [mailto:andrija.panic@gmail.com]
> >>> Sent: Tuesday, March 03, 2015 2:25 PM
> >>> To: users@cloudstack.apache.org
> >>> Subject: Re: Hardware question
> >>>
> >>> I'm personally having fights with CEPH used for primary storage - I like
> >>> CEPH VERY MUCH, but hate it at the same time (harsh words, I know...)
> >>>
> >>> For primary storage - my suggestion: play around with it if you like, but
> >>> avoid it in the end... till it matures better, or simply till the
> >>> integration with CEPH matures better.
> >>>
> >>> If you are not using a 10G network and serious hardware - it's a crappy
> >>> experience... SSD for the journal, etc...
> >>>
> >>> It's a fight - whenever I do some maintenance on CEPH I end up sweating,
> >>> clients asking why everything is so slow, etc...
> >>>
> >>> For our next cloud, I'm definitely going with ZFS/NFS...
> >>>
> >>> Be warned :)
> >>>
> >>> Cheers
> >>>
> >>> On 3 March 2015 at 13:15, Vadim Kimlaychuk <Vadim.Kimlaychuk@elion.ee>
> >>> wrote:
> >>>
> >>>> Mads,
> >>>>
> >>>>          CEPH is good indeed, but keep in mind that you should really
> >>>> be an expert at this type of SDS. There are points that are not visible
> >>>> at first glance and may bring some unpleasant surprises. For example:
> >>>> the "default" option for the storage I tested was to make snapshots
> >>>> automatically from the files being saved to primary storage. As a
> >>>> consequence, when you delete a VM there are artifacts (snapshots)
> >>>> connected to the deleted VM that do not get deleted by Cloudstack
> >>>> (since CS does not know they exist).
> >>>>
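
If those leftover snapshots ever need to be hunted down by hand, the standard
rbd tooling can do it; the pool name "cloudstack" and the image names are
placeholders:

    rbd ls cloudstack                              # images in the primary-storage pool
    rbd snap ls cloudstack/<image>                 # snapshots still attached to an image
    rbd snap unprotect cloudstack/<image>@<snap>   # only if a snapshot is protected
    rbd snap purge cloudstack/<image>              # remove all snapshots of the image
    rbd rm cloudstack/<image>                      # then remove the image itself
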
> >>>>          Another point - you can't directly use it as secondary
> >>>> storage. You need to set up an application server and run RadosGW.
> >>>> Performance is a big question mark here. You need NFS or iSCSI anyway.
> >>>>          What we haven't fully tested - disaster recovery and
> >>>> malfunction simulation. You must know how to recover from all types of
> >>>> faults. It is very easy to lose everything by just doing the wrong
> >>>> things (or doing them in the wrong order). From my point of view Ceph is
> >>>> rather complex to start with together with CS. It may be easy to set up,
> >>>> but not so easy to manage.
> >>>>
> >>>>          I would suggest you run it for about a year in development to
> >>>> make yourself confident you can manage it.
> >>>>
> >>>> Regards,
> >>>>
> >>>> Vadim.
> >>>>
> >>>> -----Original Message-----
> >>>> From: Mads Nordholm [mailto:mads@nordholm.dk]
> >>>> Sent: Monday, March 02, 2015 8:16 PM
> >>>> To: users@cloudstack.apache.org
> >>>> Subject: Re: Hardware question
> >>>>
> >>>> Thanks a lot for your answer, Lucian. CEPH sounds like a very
> >>>> interesting solution. I will have to do some more research on that.
> >>>>
> >>>> --
> >>>> Mads Nordholm
> >>>>
> >>>> On Tue, Mar 3, 2015 at 12:32 AM, Nux! <nux@li.nux.ro> wrote:
> >>>>
> >>>>> Hi Mads,
> >>>>>
> >>>>> Imo, if you want that flexibility you should go with non-local storage.
> >>>>> CEPH is a popular choice here, but you will need 10 Gbps between
> >>>>> hypervisors and storage servers if you want reasonable performance.
> >>>>> So, if you need more storage just add more CEPH servers. Need more
> >>>>> compute, add more hypervisors.
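
A few standard commands for sanity-checking capacity and health before and
after adding CEPH storage nodes (nothing beyond the stock ceph CLI is assumed):

    ceph -s         # overall cluster health and raw used/available capacity
    ceph osd tree   # which OSDs sit on which hosts, and their weights
    ceph df         # per-pool usage
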
> >>>>>
> >>>>> HTH
> >>>>> Lucian
> >>>>>
> >>>>> --
> >>>>> Sent from the Delta quadrant using Borg technology!
> >>>>>
> >>>>> Nux!
> >>>>> www.nux.ro
> >>>>>
> >>>>> ----- Original Message -----
> >>>>>
> >>>>>> From: "Mads Nordholm" <mads@nordholm.dk>
> >>>>>> To: users@cloudstack.apache.org
> >>>>>> Sent: Monday, 2 March, 2015 17:19:40
> >>>>>> Subject: Hardware question
> >>>>>> I am planning a small Cloudstack setup (using KVM for virtualisation)
> >>>>>> that will allow me to run roughly 100 VPSs with these average
> >>>>>> requirements:
> >>>>>>
> >>>>>> - 1 core
> >>>>>> - 512 MB RAM
> >>>>>> - 20 GB SSD
> >>>>>>
> >>>>>> I am interested in input regarding a hardware configuration that
> >>>>>> will support this, and how to best build a small setup that will
> >>>>>> scale easily as I grow. Within a year or so, I expect to have more
> >>>>>> than 1,000 guests running.
> >>>>>>
> >>>>>> I basically need a setup that will not completely break the bank as I
> >>>>>> start out, but also one that will scale well as I grow. I am
> >>>>>> particularly concerned with being able to add only the resources I
> >>>>>> need. If I need more storage, I want to be able to add only that
> >>>>>> (preferably just by adding disks to a RAID array), and if I need more
> >>>>>> computing power, I want to be able to add only that.
> >>>>>>
> >>>>>> Any input greatly appreciated.
> >>>>>>
> >>>>>> --
> >>>>>> Mads Nordholm
> >>>>>>
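
A rough back-of-envelope check on the numbers above; the 4:1 CPU overcommit
ratio is only an assumption for illustration:

    CPU:  100 guests x 1 vCPU  = 100 vCPUs  (~25 physical cores at 4:1 overcommit)
    RAM:  100 guests x 512 MB  = ~50 GB, plus hypervisor overhead
    SSD:  100 guests x 20 GB   = ~2 TB usable, before RAID and snapshot overhead

At 1,000 guests, each of those figures is roughly 10x.
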
> >>>>>
> >>>
> >>> --
> >>>
> >>> Andrija Panić
> >>>
> >>>
> >
>
>
> --
>
> Andrija Panić
>
