cloudstack-dev mailing list archives

From Simon Weller <swel...@ena.com.INVALID>
Subject Re: CEPH / CloudStack features
Date Fri, 27 Jul 2018 14:28:28 GMT
They're volume-based snapshots at this point. We've looked at what it would take to support
VM snapshots, but we're not there yet, as the memory state would need to be stored outside of
the actual volume.

Primary snapshots work well. We still need to reintroduce the code that allows for disabling
the primary-to-secondary copying of snapshots, should an organization not want to do that.
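
To illustrate: if and when that code is reintroduced, it would most likely surface as a
global setting. A hedged sketch with CloudMonkey, assuming a setting named
snapshot.backup.to.secondary (both the name and its availability are assumptions, since
the code still has to be reintroduced):

    # Sketch only: disable the automatic copy of snapshots to Secondary Storage.
    # The setting name is an assumption; verify it exists in your CloudStack
    # version before relying on it.
    cloudmonkey update configuration name=snapshot.backup.to.secondary value=false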


Templates are also pre-cached into Ceph to speed up deployment of VMs, as Wido indicates below.
This greatly reduces the secondary-to-primary copying of template images.
Live migration works well, and has since Wido introduced the Ceph features years ago.

We have started looking at what it would take to support Ceph volume replication between zones/regions,
as that would be a great Business Continuity feature.
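
The obvious Ceph building block for that is rbd-mirror. Nothing in CloudStack drives it
today, so the following is only a sketch of the Ceph side, with assumed pool, image, and
cluster names:

    # Sketch of the Ceph side only; there is no CloudStack integration yet.
    # Enable per-image mirroring on the pool (all names are made up).
    rbd mirror pool enable cloudstack image
    # Register the remote cluster as a peer (run the mirror of this on the other site).
    rbd mirror pool peer add cloudstack client.site-b@site-b
    # Mirroring journals every write, so the image needs the journaling feature.
    rbd feature enable cloudstack/volume-1234 journaling
    rbd mirror image enable cloudstack/volume-1234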


________________________________
From: Dag Sonstebo <Dag.Sonstebo@shapeblue.com>
Sent: Friday, July 27, 2018 8:32 AM
To: dev@cloudstack.apache.org
Subject: Re: CEPH / CloudStack features

Excellent, thanks Wido.

When you say snapshotting – is this VM snapshots, volume snapshots or both?

How about live migration, does this work?

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 27/07/2018, 13:41, "Wido den Hollander" <wido@widodh.nl> wrote:

    Hi,

    On 07/27/2018 12:18 PM, Dag Sonstebo wrote:
    > Hi all,
    >
    > I’m trying to find out more about CEPH compatibility with CloudStack /
    > KVM – i.e. trying to put together a feature matrix of what works and
    > what doesn’t compared to NFS (or other block storage platforms).
    > There’s not a lot of up to date information on this – the configuration
    > guide on [1] is all I’ve located so far apart from a couple of
    > one-liners in the official documentation.
    >
    > Could I get some feedback from the Ceph users in the community?
    >

    Yes! So, first of all, Ceph is KVM-only. Other hypervisors do not
    support RBD (RADOS Block Device) from Ceph.

    What is supported:

    - Thin provisioning
    - Discard / fstrim (requires VirtIO-SCSI; see the sketch below)
    - Volume cloning
    - Snapshots
    - Disk I/O throttling (done by libvirt; also shown in the sketch below)
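
    The last two items come down to the disk definition libvirt generates.
    A minimal sketch of the relevant domain XML, with invented pool,
    monitor and throttle values (CloudStack generates this for you; it is
    shown only to illustrate where discard and throttling live):

        <disk type='network' device='disk'>
          <!-- discard='unmap' lets fstrim in the guest reach Ceph; it needs
               the disk on a virtio-scsi controller, not plain virtio. -->
          <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
          <source protocol='rbd' name='cloudstack/volume-1234'>
            <host name='mon1.example.com' port='6789'/>
          </source>
          <target dev='sda' bus='scsi'/>
          <!-- I/O throttling is enforced by libvirt/QEMU, not by Ceph. -->
          <iotune>
            <total_bytes_sec>104857600</total_bytes_sec>
            <total_iops_sec>1000</total_iops_sec>
          </iotune>
        </disk>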

    Meaning, when a template is deployed for the first time to a Primary
    Storage it's written to Ceph once, and all Instances created afterwards
    are a clone of that primary image.
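
    At the RBD level that is Ceph's copy-on-write layering; a sketch with
    invented pool and image names (CloudStack does this internally):

        # One-time: snapshot the cached template image and protect the snapshot.
        rbd snap create cloudstack/template-201@base
        rbd snap protect cloudstack/template-201@base
        # Per Instance: a copy-on-write clone, near-instant, storing only
        # the blocks the new volume actually changes.
        rbd clone cloudstack/template-201@base cloudstack/root-volume-1234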

    You can snapshot an RBD image and then have it copied to Secondary
    Storage. Now, I'm not sure if keeping the snapshot in Primary Storage
    and reverting works yet, I haven't looked at that in recent times.
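
    Mechanically the copy to Secondary Storage boils down to something like
    this (a sketch; names and paths are invented, and it assumes ceph.conf
    plus a keyring are in place on the hypervisor):

        # Snapshot the volume on Primary Storage (Ceph).
        rbd snap create cloudstack/volume-1234@snap-1
        # Copy the snapshot off to Secondary Storage as QCOW2, which is
        # roughly what the KVM agent does via the rbd: protocol.
        qemu-img convert -f raw -O qcow2 \
            rbd:cloudstack/volume-1234@snap-1 \
            /mnt/secondary/snapshots/volume-1234-snap-1.qcow2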

    The snapshotting part on Primary Storage is probably something that
    needs some love and attention, but otherwise I think all other features
    are supported.

    I would recommend a CentOS 7 or Ubuntu 16.04/18.04 hypervisor, both work
    just fine with Ceph.
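
    For completeness: next to QEMU/libvirt the hypervisor only needs the
    Ceph client packages. Roughly (package names differ per distro, and on
    CentOS 7 the stock qemu-kvm has no RBD support, so the qemu-kvm-ev
    build from the CentOS Virt SIG is commonly used instead):

        # Ubuntu 18.04
        apt-get install qemu-kvm libvirt-daemon-system ceph-common
        # CentOS 7 (qemu-kvm-ev ships an RBD-enabled QEMU)
        yum install centos-release-qemu-ev
        yum install qemu-kvm-ev libvirt ceph-common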

    Wido

    > Regards,
    > Dag Sonstebo
    >
    > [1] http://docs.ceph.com/docs/master/rbd/rbd-cloudstack/
    >



Dag.Sonstebo@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



