cloudstack-users mailing list archives

From Melanie Desaive <>
Subject Long downtimes for VMs through automatically triggered storage migration
Date Wed, 12 Oct 2016 10:50:23 GMT
Hi all,

my colleague and I are having a dispute about when CloudStack should
automatically trigger storage migrations and what options we have to
control CloudStack's behavior in terms of storage migrations.

We are operating a setup with two XenServer clusters combined into one
pod, each cluster with its own independent SRs of type lvmoiscsi.

Unfortunately we hit a XenServer bug which prevented a few VMs from
starting on any compute node. Each time this bug appeared, CloudStack
tried to start the affected VM successively on each node of the current
cluster and afterwards started a storage migration to the second cluster.

We are using the UserDispersing deployment planner.

The decision of the deployment planner to start the storage migration
was very unfortunate for us. Mainly because:
 * We are operating some VMs with big data volumes which were
inaccessible for the whole time the storage migration was running.
 * The SR on the destination cluster did not even have the capacity to
take all volumes of the big VMs; still, the migration was triggered.

We would like some best practice advice on how others are preventing
long, unplanned downtimes for VMs with huge data volumes caused by
automated storage migration.

We discussed the topic and came up with the following questions:
 * Is the described behaviour of the deployment planner intentional?
 * Is it possible to exclude a few VMs with huge storage volumes from
automated storage migration, and what would be the best way to achieve
this? Could we use storage or host tags for this purpose?
 * Is it possible to globally prevent the deployment planner from
starting storage migrations?
    * Are there global settings to achieve this?
    * Would we have to adapt the deployment planner?
 * Do we have to rethink our system architecture and avoid huge data
volumes completely?
 * Was the decision to put two clusters into one pod a bad idea?
 * Are there other solutions to our problem?
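Regarding the storage-tag question above: one approach worth testing is to tag the storage pools that hold the large volumes and bind those volumes to a disk offering carrying the same tag, so the allocator cannot place (or migrate) them onto untagged pools in the other cluster. The sketch below uses CloudMonkey; the pool UUID, offering name, and the "bigdata" tag are placeholders, and this is an untested suggestion rather than a verified recipe for this setup.

```shell
# Sketch only -- IDs and tag names are placeholders, not real values.

# 1. Tag the storage pool(s) in the cluster where the big volumes live.
#    A volume bound to this tag can only be allocated on pools that carry it.
cloudmonkey update storagepool id=<pool-uuid-cluster1> tags=bigdata

# 2. Create a disk offering restricted to that storage tag; data volumes
#    created from it should stay on "bigdata"-tagged pools, so an automatic
#    migration to an untagged SR in the other cluster would not be a valid
#    placement.
cloudmonkey create diskoffering name=bigdata-disk \
    displaytext="Volumes pinned to bigdata-tagged pools" \
    disksize=500 tags=bigdata
```

Existing volumes would have to be moved to such an offering for the restriction to apply, and whether the deployment planner honors the tag in this exact failure scenario is something we would appreciate confirmation on.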

We would greatly appreciate any advice on this issue!

Best regards,



Heinlein Support GmbH
Linux: Akademie - Support - Hosting
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
