cloudstack-users mailing list archives

From Simon Weller <swel...@ena.com>
Subject Re: Long downtimes for VMs through automatically triggered storage migration
Date Wed, 12 Oct 2016 13:47:52 GMT
Hi Melanie,


So if I understand correctly, you have 2 clusters within a pod and you're using cluster-level
storage (meaning each cluster has its own primary storage).
There is a global configuration item that prevents CloudStack from attempting to automatically
migrate across primary storage arrays. It's called enable.ha.storage.migration.
If you set this to false, it will prevent an attempted storage migration on HA.
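If you manage your global settings via CloudMonkey, changing the flag would look roughly like this (a sketch using the standard listConfigurations/updateConfiguration API calls; the exact CLI name and syntax depend on your CloudMonkey version):

```shell
# Sketch, assuming a configured CloudMonkey pointed at your management server.

# Check the current value first
cloudmonkey list configurations name=enable.ha.storage.migration

# Disable automatic storage migration on HA
cloudmonkey update configuration name=enable.ha.storage.migration value=false
```

Note that some global settings only take effect after a management-server restart, so check whether this one applies dynamically in your version.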

- Si

________________________________
From: Melanie Desaive <m.desaive@heinlein-support.de>
Sent: Wednesday, October 12, 2016 5:50 AM
To: users@cloudstack.apache.org
Subject: Long downtimes for VMs through automatically triggered storage migration

Hi all,

my colleague and I are having a dispute about when CloudStack should
automatically trigger storage migrations and what options we have to
control CloudStack's behavior in terms of storage migrations.

We are operating a setup with two XenServer clusters combined into one
pod, each cluster with its own independent SRs of type lvmoiscsi.

Unfortunately we hit a XenServer bug which prevented a few VMs from
starting on any compute node. Each time this bug appeared, CloudStack
tried to start the affected VM successively on each node of the current
cluster and afterwards started a storage migration to the second cluster.

We are using the UserDispersing deployment planner.

The deployment planner's decision to start the storage migration
was very unfortunate for us, mainly because:
 * We are operating some VMs with big data volumes which were
inaccessible for as long as the storage migration was running.
 * The SR on the destination cluster did not even have the capacity to
take all volumes of the big VMs. Still, the migration was triggered.

We would like some best-practice advice on how others prevent long,
unplanned downtimes for VMs with huge data volumes caused by automated
storage migration.

We discussed the topic and came up with the following questions:
 * Is the described behaviour of the deployment planner intentional?
 * Is it possible to exclude a few VMs with huge storage volumes from
automated storage migration, and what would be the best way to achieve
this? Could we use storage or host tags for this purpose?
 * Is it possible to globally prevent the deployment planner from
starting storage migrations?
    * Are there global settings to achieve this?
    * Would we have to adapt the deployment planner?
 * Do we have to rethink our system architecture and avoid huge data
volumes completely?
 * Was the decision to put two clusters into one pod a bad idea?
 * Are there other solutions to our problem?

We would greatly appreciate any advice on this issue!

Best regards,

Melanie

--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de



Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Mandatory disclosures per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin
