cloudstack-users mailing list archives

From David Nalley <>
Subject Re: Backup projects?
Date Thu, 07 Mar 2013 12:34:29 GMT
2013/3/7 Joaquin Angel Alonso Alvarez <>
> Hi,
> Is there a way to backup a project (VMs, and network configuration)?
> The idea is to set up laboratories and be able to get back to a particular state,
> even to delete the project and recover it for a similar future lab, not to start
> it all over.
> Thanks in advance,

So sorta:

In a less than ideal world you'd have snapshots of machines to
represent the state you care about and a Marvin config file to
redeploy your environment. But again, that's less than ideal, and it's
a role CloudStack isn't really designed to fill.
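If you did want to script snapshots against the CloudStack API, the
request-signing step is the only fiddly part. Here's a minimal sketch
(the volume id, API key, and secret key are placeholders, and the
createSnapshot parameters shown are just an example):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, api_key, secret_key):
    """Build a signed CloudStack API query string.

    CloudStack signs requests by sorting the parameters, HMAC-SHA1
    hashing the lowercased query with the secret key, and
    base64-encoding the digest.
    """
    params = dict(params, apikey=api_key)
    # Sort parameters by key and URL-encode the values.
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe="*"))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")

# e.g. snapshot a volume (ids/keys here are made up):
url_query = sign_request(
    {"command": "createSnapshot", "volumeid": "your-volume-id",
     "response": "json"},
    api_key="your-api-key",
    secret_key="your-secret-key",
)
```

Append that query string to your management server's /client/api
endpoint and you have a one-off snapshot call you can cron.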

Ideally you'd be using configuration management. Puppet or Chef would
allow you to define how a machine is configured. Additionally, there
are, for the sake of simplicity what we'll call, plugins for each of
those systems that allow you to define networks, machine size, etc.
So you could have a hadoop cluster configured thusly:

"name": "hadoop_cluster_a",
"description": "A small hadoop cluster with hbase",
"version": "1.0",
"environment": "production",
"servers": [
    "name": "zookeeper-a, zookeeper-b, zookeeper-c",
    "description": "Zookeeper nodes",
    "template": "rhel-5.6-base",
    "service": "small",
    "port_rules": "2181",
    "run_list": "role[cluster_a], role[zookeeper_server]",
    "actions": [
      { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] }
    "name": "hadoop-master",
    "description": "Hadoop master node",
    "template": "rhel-5.6-base",
    "service": "large",
    "networks": "app-net, storage-net",
    "port_rules": "50070, 50030, 60010",
    "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
    "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
    "description": "Hadoop worker nodes",
    "template": "rhel-5.6-base",
    "service": "medium",
    "port_rules": "50075, 50060, 60030",
    "run_list": "role[cluster_a], role[hadoop_worker],
    "actions": [
      { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
      { "http_request": "http://${hadoop-master}:50070/index.jsp" }
