cloudstack-users mailing list archives

From Stephan Seitz <>
Subject Re: CloudStack Design: Ceph and local storage
Date Fri, 17 Jun 2016 13:59:31 GMT

Independently of CloudStack, I'd strongly recommend not running Ceph
and hypervisors on the very same machines. If you just want to build a
PoC this is fine, but if you put load on it, you'll see unpredictable
behavior (at least on the Ceph side) due to heavy I/O demands.
As a rule of thumb, Ceph recommends at least 1 core and 1 GB RAM for
each OSD.
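To make that rule of thumb concrete, here is a minimal sketch (not Ceph code, and the helper name is made up) that estimates what would be left over for VMs on one of the hosts described below, assuming one OSD per SAS drive:

```python
# Hypothetical sizing check based on the ~1 core / 1 GB RAM per OSD
# rule of thumb mentioned above; the host specs are taken from the
# question below (2x 6-core Xeon = 12 cores, 24 GB RAM, 6 drives).

def osd_headroom(cores: int, ram_gb: int, osds: int):
    """Return (spare_cores, spare_ram_gb) after reserving roughly
    one core and one GB of RAM per OSD daemon."""
    return cores - osds, ram_gb - osds

# Example: one host with 12 cores, 24 GB RAM, and 6 OSDs
print(osd_headroom(cores=12, ram_gb=24, osds=6))  # (6, 18) left for VMs
```

Note this ignores the OS, MON daemons, and recovery traffic, which all eat further into that headroom under load.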
BTW, I also wouldn't run a Ceph cluster with only two nodes. Your MONs
should be able to form a quorum, so you'd need at least three nodes.
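The quorum arithmetic behind that minimum can be sketched as follows (illustrative only, not Ceph code): a MON quorum needs a strict majority of monitors, so a cluster of n MONs tolerates floor((n - 1) / 2) failures.

```python
# Minimal sketch of majority-quorum failure tolerance; the function
# name is made up for illustration, not part of any Ceph API.

def mon_failures_tolerated(n_mons: int) -> int:
    """Monitors that can fail while a strict majority still forms."""
    return (n_mons - 1) // 2

for n in (1, 2, 3, 5):
    print(n, mon_failures_tolerated(n))
# With 2 MONs no failure is tolerated (losing one leaves 1 of 2,
# which is not a majority), which is why three nodes are the
# practical minimum.
```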

If you run a cluster with fewer than about six or eight nodes, I'd give
Gluster a try. I've never tried it myself, but I assume it should be
usable as "pre-setup" storage, at least with KVM hosts.


- Stephan

On Friday, 17.06.2016 at 13:36 +0200, Jeroen Keerl wrote:
> Good afternoon from Hamburg, Germany!
> Short question:
> Is it feasible to use CloudStack with Ceph on local storage? As in
> “hyperconverged”?
> Before ramping up the infrastructure, I’d like to be sure, before
> buying new hardware.
> At the moment: 2 hosts, each with 2× 6-core Xeon CPUs, 24 GB RAM, and
> 6× 300 GB SAS drives.
> The Ceph documentation advises bigger disks and separate storage
> “nodes”.
> The CloudStack documentation says: smaller, high-RPM disks.
> What would you advise? Buy separate “storage nodes” or ramp up the
> current nodes?
> Cheers!
> Jeroen
> Jeroen Keerl
> Keerl IT Services GmbH
> Birkenstraße 1b . 21521 Aumühle
> +49 177 6320 317
Managing director (Geschäftsführer): Jacobus J. Keerl
Commercial register (Registergericht) Lübeck, HRB No. 14511
Our general terms and conditions can be found here.
