cloudstack-users mailing list archives

From Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
Subject Re: [VOTE] Ceph, ZFS or Linux Soft RAID?
Date Mon, 15 Jul 2019 11:23:11 GMT
Hi,

if you use a local filesystem, just use ext4 on top of whatever disk
topology gives you the redundancy you need.

E.g. JBOD or RAID 0 work well when a data safety policy is established and
backups are maintained properly.

Otherwise, look at RAID 5, RAID 10 or RAID 6.
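For illustration, a minimal sketch of the RAID 10 variant with Linux
software RAID (mdadm) and ext4. The device names, array name, and mount
point below are assumptions for the example, not values from this thread;
adapt them to your own hardware and CloudStack storage path:

```shell
# Build a RAID 10 array from four NVMe disks (device names are examples)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Put ext4 on the array
mkfs.ext4 /dev/md0

# Mount it where local primary storage will live (path is an example)
mkdir -p /var/lib/libvirt/images
mount /dev/md0 /var/lib/libvirt/images

# Persist the array definition and the mount across reboots
mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/md0 /var/lib/libvirt/images ext4 defaults,noatime 0 2' >> /etc/fstab
```

These commands are destructive to the listed disks, so treat this purely
as a sketch of the shape of the setup.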

Mon, 15 Jul 2019, 18:05 <nux@li.nux.ro>:

> Isn't that a bit apples and oranges? Ceph is a network distributed
> thingy, not a local solution.
>
> I'd use linux/software raid + lvm, it's the only one supported (by
> CentOS/RedHat).
>
> ZFS on Linux could be interesting if it were supported by CloudStack, but
> it is not; you'd end up using qcow2 (copy-on-write) files on top of a
> copy-on-write filesystem, which could lead to issues. Also, ZFS is not
> really the fastest fs out there, though it does have some nice features.
>
> Did you really mean raid 0? I hope you have backups. :)
>
> hth
>
>
> On 2019-07-15 11:49, Fariborz Navidan wrote:
> > Hello,
> >
> > Which one do you think is faster to use as local soft RAID 0 for
> > primary storage: Ceph, ZFS, or the built-in soft RAID manager of
> > CentOS? Which one gives us better IOPS and IO latency on NVMe SSD
> > disks? The storage will be used for a production cloud environment
> > where around 60 VMs will run on top of it.
> >
> > Your ideas are highly appreciated
>
