From: muralireddy@apache.org
To: cloudstack-commits@incubator.apache.org
Subject: [58/100] [abbrv] [partial] Revised en-US/network-setup.xml to include the correct file.
Message-Id: <20121206080911.782968194DF@tyr.zones.apache.org>
Date: Thu, 6 Dec 2012 08:09:11 +0000 (UTC)

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/import-ami.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/import-ami.html b/docs/tmp/en-US/epub/OEBPS/import-ami.html deleted file mode 100644 index 10e7539..0000000 --- a/docs/tmp/en-US/epub/OEBPS/import-ami.html +++ /dev/null @@ -1,97 +0,0 @@ - - -12.11. Importing Amazon Machine Images

12.11. Importing Amazon Machine Images

- The following procedure describes how to import an Amazon Machine Image (AMI) into CloudStack when using the XenServer hypervisor. -
- Assume you have an AMI file named CentOS_6.2_x64 and that you are working on a CentOS host. If the AMI is a Fedora image, you need to be working on a Fedora host instead. -
- You also need a XenServer host with a file-based storage repository (either a local ext3 SR or an NFS SR), so that the image can be converted to a VHD once it has been customized on the CentOS/Fedora host. -

Note

- When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text. -
  1. - Set up loopback on image file: -
    # mkdir -p /mnt/loop/centos62
    -# mount -o loop CentOS_6.2_x64 /mnt/loop/centos62
    -
  2. - Install the kernel-xen package into the image. This downloads the PV kernel and ramdisk to the image. -
    # yum -c /mnt/loop/centos62/etc/yum.conf --installroot=/mnt/loop/centos62/ -y install kernel-xen
  3. - Create a grub entry in /boot/grub/grub.conf. -
    # mkdir -p /mnt/loop/centos62/boot/grub
    -# touch /mnt/loop/centos62/boot/grub/grub.conf
    -# echo "" > /mnt/loop/centos62/boot/grub/grub.conf
    -
  4. - Determine the name of the PV kernel that has been installed into the image. -
    # cd /mnt/loop/centos62
    -# ls lib/modules/
    -2.6.16.33-xenU  2.6.16-xenU  2.6.18-164.15.1.el5xen  2.6.18-164.6.1.el5.centos.plus  2.6.18-xenU-ec2-v1.0  2.6.21.7-2.fc8xen  2.6.31-302-ec2
    -# ls boot/initrd*
    -boot/initrd-2.6.18-164.6.1.el5.centos.plus.img boot/initrd-2.6.18-164.15.1.el5xen.img
    -# ls boot/vmlinuz*
    -boot/vmlinuz-2.6.18-164.15.1.el5xen  boot/vmlinuz-2.6.18-164.6.1.el5.centos.plus  boot/vmlinuz-2.6.18-xenU-ec2-v1.0  boot/vmlinuz-2.6.21-2952.fc8xen
    -
    - Xen kernel and ramdisk names always end with "xen". For the kernel version you choose, there must be an entry for that version under lib/modules, along with a corresponding initrd and vmlinuz under boot/. In the listing above, the only kernel that satisfies this condition is 2.6.18-164.15.1.el5xen. -
  5. - Based on your findings, create an entry in the grub.conf file. Below is an example entry. -
    default=0
    -timeout=5
    -hiddenmenu
    -title CentOS (2.6.18-164.15.1.el5xen)
    -        root (hd0,0)
    -        kernel /boot/vmlinuz-2.6.18-164.15.1.el5xen ro root=/dev/xvda 
    -        initrd /boot/initrd-2.6.18-164.15.1.el5xen.img
    -
  6. - Edit etc/fstab, changing “sda1” to “xvda” and changing “sdb” to “xvdb”. -
    # cat etc/fstab
    -/dev/xvda  /         ext3    defaults        1 1
    -/dev/xvdb  /mnt      ext3    defaults        0 0
    -none       /dev/pts  devpts  gid=5,mode=620  0 0
    -none       /proc     proc    defaults        0 0
    -none       /sys      sysfs   defaults        0 0
    -
  7. - Enable login via the console. The default console device in a XenServer system is xvc0. Ensure that etc/inittab and etc/securetty have the following lines respectively: -
    # grep xvc0 etc/inittab 
    -co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
    -# grep xvc0 etc/securetty 
    -xvc0
    -
  8. - Ensure the ramdisk supports PV disk and PV network. Customize this for the kernel version you have determined above. -
    # chroot /mnt/loop/centos62
    -# cd /boot/
    -# mv initrd-2.6.18-164.15.1.el5xen.img initrd-2.6.18-164.15.1.el5xen.img.bak
    -# mkinitrd -f /boot/initrd-2.6.18-164.15.1.el5xen.img --with=xennet --preload=xenblk --omit-scsi-modules 2.6.18-164.15.1.el5xen
    -
  9. - Change the password. -
    # passwd
    -Changing password for user root.
    -New UNIX password: 
    -Retype new UNIX password: 
    -passwd: all authentication tokens updated successfully.
    -
  10. - Exit out of chroot. -
    # exit
  11. - Check etc/ssh/sshd_config for lines allowing ssh login using a password. -
    # egrep "PermitRootLogin|PasswordAuthentication" /mnt/loop/centos54/etc/ssh/sshd_config  
    -PermitRootLogin yes
    -PasswordAuthentication yes
    -
  12. - If you want the template to support resetting passwords from the CloudStack UI or API, install the password change script into the image at this point. See Section 12.13, “Adding Password Management to Your Templates”. -
  13. - Unmount the image and delete the loopback device. -
    # umount /mnt/loop/centos62
    -# losetup -d /dev/loop0
    -
  14. - Copy the image file to your XenServer host's file-based storage repository. In the example below, the XenServer host is "xenhost". This XenServer has an NFS repository whose uuid is a9c5b8c8-536b-a193-a6dc-51af3e5ff799. -
    # scp CentOS_6.2_x64 xenhost:/var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799/
  15. - Log in to the XenServer host and create a VDI of the same size as the image. -
    [root@xenhost ~]# cd /var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799
    -[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]#  ls -lh CentOS_6.2_x64
    --rw-r--r-- 1 root root 10G Mar 16 16:49 CentOS_6.2_x64
    -[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-create virtual-size=10GiB sr-uuid=a9c5b8c8-536b-a193-a6dc-51af3e5ff799 type=user name-label="Centos 6.2 x86_64"
    -cad7317c-258b-4ef7-b207-cdf0283a7923
    -
  16. - Import the image file into the VDI. This may take 10–20 minutes. -
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-import filename=CentOS_6.2_x64 uuid=cad7317c-258b-4ef7-b207-cdf0283a7923
  17. - Locate the VHD file. This is the file with the VDI’s UUID as its name. Compress it and upload it to your web server. -
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# bzip2 -c cad7317c-258b-4ef7-b207-cdf0283a7923.vhd > CentOS_6.2_x64.vhd.bz2
    -[root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# scp CentOS_6.2_x64.vhd.bz2 webserver:/var/www/html/templates/
    -
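- Once the compressed VHD is available on your web server, it can be registered as a CloudStack template. The following is a minimal sketch of an equivalent registerTemplate API call; the OS type and zone UUIDs are placeholders for values from your own environment, and authentication parameters are omitted as in the other API examples in this guide. -
command=registerTemplate
				&name=CentOS6.2-x64
				&displaytext=CentOS%206.2%20x86_64
				&format=VHD
				&hypervisor=XenServer
				&ostypeid=<os-type-uuid>
				&url=http://webserver/templates/CentOS_6.2_x64.vhd.bz2
				&zoneid=<zone-uuid>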
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/increase-management-server-max-memory.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/increase-management-server-max-memory.html b/docs/tmp/en-US/epub/OEBPS/increase-management-server-max-memory.html deleted file mode 100644 index d6422ae..0000000 --- a/docs/tmp/en-US/epub/OEBPS/increase-management-server-max-memory.html +++ /dev/null @@ -1,15 +0,0 @@ - - -21.2. Increase Management Server Maximum Memory

21.2. Increase Management Server Maximum Memory

- If the Management Server is subject to high demand, the default maximum JVM memory allocation can be insufficient. To increase the memory: -
  1. - Edit the Tomcat configuration file: -
    /etc/cloud/management/tomcat6.conf
  2. - Change the command-line parameter -XmxNNNm to a higher value. -
    - For example, if the current value is -Xmx128m, change it to -Xmx1024m or higher. An example of making this change from the command line is shown after these steps. -
  3. - To put the new setting into effect, restart the Management Server. -
    # service cloud-management restart
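    - As a sketch of the change described in step 2, the -Xmx value is typically part of the JAVA_OPTS setting in tomcat6.conf; the exact variable name and surrounding options in your file may differ, so inspect the file before editing: -
    # grep -- -Xmx /etc/cloud/management/tomcat6.conf
    # sed -i 's/-Xmx128m/-Xmx1024m/' /etc/cloud/management/tomcat6.conf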
- For more information about memory issues, see "FAQ: Memory" at Tomcat Wiki. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/index.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/index.html b/docs/tmp/en-US/epub/OEBPS/index.html deleted file mode 100644 index 8f174c6..0000000 --- a/docs/tmp/en-US/epub/OEBPS/index.html +++ /dev/null @@ -1,18 +0,0 @@ - - -CloudStack Administrator's Guide
Apache CloudStack 4.0.0-incubating

CloudStack Administrator's Guide

Edition 1

- - -

Apache CloudStack


Legal Notice

- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at -
- http://www.apache.org/licenses/LICENSE-2.0 -
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -
- Apache CloudStack is an effort undergoing incubation at The Apache Software Foundation (ASF). -
- Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. -
Abstract
- Administration Guide for CloudStack. -

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/initialize-and-test.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/initialize-and-test.html b/docs/tmp/en-US/epub/OEBPS/initialize-and-test.html deleted file mode 100644 index 8f499b5..0000000 --- a/docs/tmp/en-US/epub/OEBPS/initialize-and-test.html +++ /dev/null @@ -1,33 +0,0 @@ - - -7.8. Initialize and Test

7.8. Initialize and Test

- After everything is configured, CloudStack will perform its initialization. This can take 30 minutes or more, depending on the speed of your network. When the initialization has completed successfully, the administrator's Dashboard should be displayed in the CloudStack UI. -
  1. - Verify that the system is ready. In the left navigation bar, select Templates. Click on the CentOS 5.5 (64bit) no Gui (KVM) template. Check to be sure that the status is "Download Complete." Do not proceed to the next step until this status is displayed. -
  2. - Go to the Instances tab, and filter by My Instances. -
  3. - Click Add Instance and follow the steps in the wizard. -
    1. - Choose the zone you just added. -
    2. - In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available. -
    3. - Select a service offering. Be sure that the hardware you have allows starting the selected service offering. -
    4. - In data disk offering, if desired, add another data disk. This is a second volume that will be available to but not mounted in the guest. For example, in Linux on XenServer you will see /dev/xvdb in the guest after rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use. -
    5. - In default network, choose the primary network for the guest. In a trial installation, you would have only one option here. -
    6. - Optionally give your VM a name and a group. Use any descriptive text you would like. -
    7. - Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the VM’s progress in the Instances screen. -
  4. - To use the VM, click the View Console button. - ConsoleButton.png: button to launch a console - -
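- The wizard above can also be driven through the API. The following is a minimal sketch of an equivalent deployVirtualMachine call, assuming you have already looked up the zone, template, and service offering UUIDs (for example with listZones, listTemplates, and listServiceOfferings); the UUID values shown are placeholders. -
command=deployVirtualMachine
				&zoneid=<zone-uuid>
				&templateid=<template-uuid>
				&serviceofferingid=<serviceoffering-uuid>
				&displayname=test-vm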
- Congratulations! You have successfully completed a CloudStack Installation. -
- If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/inter-vlan-routing.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/inter-vlan-routing.html b/docs/tmp/en-US/epub/OEBPS/inter-vlan-routing.html deleted file mode 100644 index 11e51a6..0000000 --- a/docs/tmp/en-US/epub/OEBPS/inter-vlan-routing.html +++ /dev/null @@ -1,39 +0,0 @@ - - -15.18. About Inter-VLAN Routing

15.18. About Inter-VLAN Routing

- Inter-VLAN Routing is the capability to route network traffic between VLANs. This feature enables you to build Virtual Private Clouds (VPCs), isolated segments of your cloud that can hold multi-tier applications. The tiers are deployed on different VLANs that can communicate with each other. You provision VLANs for the tiers you create, and VMs can be deployed on different tiers. The VLANs are connected to a virtual router, which facilitates communication between the VMs. In effect, you can segment VMs by means of VLANs into different networks that can host multi-tier applications, such as Web, Application, or Database. Such segmentation by VLANs logically separates application VMs for higher security and reduced broadcast traffic, while remaining physically connected to the same device. -
- This feature is supported on XenServer and VMware hypervisors. -
- The major advantages are: -
  • - The administrator can deploy a set of VLANs and allow users to deploy VMs on these VLANs. A guest VLAN is randomly allotted to an account from a pre-specified set of guest VLANs. All the VMs of a certain tier of an account reside on the guest VLAN allotted to that account. -

    Note

    - A VLAN allocated for an account cannot be shared between multiple accounts. -
  • - The administrator can allow users to create their own VPCs and deploy applications. In this scenario, the VMs that belong to the account are deployed on the VLANs allotted to that account. -
  • - Both administrators and users can create multiple VPCs. The guest network NIC is plugged into the VPC virtual router when the first VM is deployed in a tier. -
  • - The administrator can create the following gateways to send to or receive traffic from the VMs: -
  • - Both administrators and users can create various possible destination-gateway combinations. However, only one gateway of each type can be used in a deployment. -
    - For example: -
    • - VLANs and Public Gateway: For example, an application is deployed in the cloud, and the Web application VMs communicate with the Internet. -
    • - VLANs, VPN Gateway, and Public Gateway: For example, an application is deployed in the cloud; the Web application VMs communicate with the Internet; and the database VMs communicate with the on-premise devices. -
  • - The administrator can define Access Control Lists (ACLs) on the virtual router to filter the traffic among the VLANs or between the Internet and a VLAN. ACLs can be defined based on CIDR, port range, protocol, type code (if the ICMP protocol is selected), and Ingress/Egress type. -
- The following figure shows the possible deployment scenarios of an Inter-VLAN setup: -
mutltier.png: a multi-tier setup.
- To set up a multi-tier Inter-VLAN deployment, see Section 15.19, “Configuring a Virtual Private Cloud”. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/ip-forwarding-firewalling.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/ip-forwarding-firewalling.html b/docs/tmp/en-US/epub/OEBPS/ip-forwarding-firewalling.html deleted file mode 100644 index acece5d..0000000 --- a/docs/tmp/en-US/epub/OEBPS/ip-forwarding-firewalling.html +++ /dev/null @@ -1,9 +0,0 @@ - - -15.14. IP Forwarding and Firewalling

15.14. IP Forwarding and Firewalling

- By default, all incoming traffic to the public IP address is rejected. All outgoing traffic from the guests is translated via NAT to the public IP address and is allowed. -
- To allow incoming traffic, users may set up firewall rules and/or port forwarding rules. For example, you can use a firewall rule to open a range of ports on the public IP address, such as 33 through 44. Then use port forwarding rules to direct traffic from individual ports within that range to specific ports on user VMs. For example, one port forwarding rule could route incoming traffic on the public IP's port 33 to port 100 on one user VM's private IP. -
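- As an illustration of the example above, the following is a hedged sketch of the API calls that open public ports 33 through 44 and forward port 33 to port 100 on a VM; the public IP and VM UUIDs are placeholders, and the same rules can be created from the UI instead. -
command=createFirewallRule
				&ipaddressid=<public-ip-uuid>
				&protocol=TCP
				&startport=33
				&endport=44

command=createPortForwardingRule
				&ipaddressid=<public-ip-uuid>
				&protocol=TCP
				&publicport=33
				&privateport=100
				&virtualmachineid=<vm-uuid>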
- For the steps to implement these rules, see Firewall Rules and Port Forwarding. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/ip-load-balancing.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/ip-load-balancing.html b/docs/tmp/en-US/epub/OEBPS/ip-load-balancing.html deleted file mode 100644 index bd9ebac..0000000 --- a/docs/tmp/en-US/epub/OEBPS/ip-load-balancing.html +++ /dev/null @@ -1,13 +0,0 @@ - - -15.15. IP Load Balancing

15.15. IP Load Balancing

- The user may choose to associate the same public IP for multiple guests. CloudStack implements a TCP-level load balancer with the following policies. -
  • - Round-robin -
  • - Least connection -
  • - Source IP -
- This is similar to port forwarding but the destination may be multiple IP addresses. -
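- A hedged sketch of creating such a load balancing rule through the API follows; the algorithm parameter accepts roundrobin, leastconn, or source, and the UUIDs and port numbers are placeholders for values from your environment. -
command=createLoadBalancerRule
				&publicipid=<public-ip-uuid>
				&name=web-lb
				&algorithm=roundrobin
				&publicport=80
				&privateport=8080

command=assignToLoadBalancerRule
				&id=<lb-rule-uuid>
				&virtualmachineids=<vm1-uuid>,<vm2-uuid>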
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/load-balancer-rules.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/load-balancer-rules.html b/docs/tmp/en-US/epub/OEBPS/load-balancer-rules.html deleted file mode 100644 index e70de79..0000000 --- a/docs/tmp/en-US/epub/OEBPS/load-balancer-rules.html +++ /dev/null @@ -1,7 +0,0 @@ - - -15.9. Load Balancer Rules

15.9. Load Balancer Rules

- A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs. -

Note

- If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/maintain-hypervisors-on-hosts.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/maintain-hypervisors-on-hosts.html b/docs/tmp/en-US/epub/OEBPS/maintain-hypervisors-on-hosts.html deleted file mode 100644 index 1d47b40..0000000 --- a/docs/tmp/en-US/epub/OEBPS/maintain-hypervisors-on-hosts.html +++ /dev/null @@ -1,9 +0,0 @@ - - -11.6. Maintaining Hypervisors on Hosts

11.6. Maintaining Hypervisors on Hosts

- When running hypervisor software on hosts, be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches. -

Note

- The lack of up-to-date hotfixes can lead to data corruption and lost VMs. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/manage-cloud.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/manage-cloud.html b/docs/tmp/en-US/epub/OEBPS/manage-cloud.html deleted file mode 100644 index 2a43adb..0000000 --- a/docs/tmp/en-US/epub/OEBPS/manage-cloud.html +++ /dev/null @@ -1,48 +0,0 @@ - - -Chapter 18. Managing the Cloud

Chapter 18. Managing the Cloud

18.1. Using Tags to Organize Resources in the Cloud

- A tag is a key-value pair that stores metadata about a resource in the cloud. Tags are useful for categorizing resources. For example, you can tag a user VM with a value that indicates the user's city of residence. In this case, the key would be "city" and the value might be "Toronto" or "Tokyo." You can then request CloudStack to find all resources that have a given tag; for example, VMs for users in a given city. -
- You can tag a user virtual machine, volume, snapshot, guest network, template, ISO, firewall rule, port forwarding rule, public IP address, security group, load balancer rule, project, VPC, network ACL, or static route. You can not tag a remote access VPN. -
- You can work with tags through the UI or through the API commands createTags, deleteTags, and listTags. You can define multiple tags for each resource. There is no limit on the number of tags you can define. Each tag can be up to 255 characters long. Users can define tags on the resources they own, and administrators can define tags on any resource in the cloud. -
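- For example, the following is a minimal sketch of a createTags call that applies the tag city=Toronto to a user VM; the resource UUID is a placeholder, and authentication parameters are omitted: -
command=createTags
				&resourcetype=UserVm
				&resourceids=<vm-uuid>
				&tags[0].key=city
				&tags[0].value=Toronto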
- An optional input parameter, "tags," exists on many of the list* API commands. The following example shows how to use this new parameter to find all the volumes having tag region=canada OR tag city=Toronto: -
command=listVolumes
-				&listAll=true
-				&tags[0].key=region
-				&tags[0].value=canada
-				&tags[1].key=city
-				&tags[1].value=Toronto
- The following API commands have the "tags" input parameter: -
  • - listVirtualMachines -
  • - listVolumes -
  • - listSnapshots -
  • - listNetworks -
  • - listTemplates -
  • - listIsos -
  • - listFirewallRules -
  • - listPortForwardingRules -
  • - listPublicIpAddresses -
  • - listSecurityGroups -
  • - listLoadBalancerRules -
  • - listProjects -
  • - listVPCs -
  • - listNetworkACLs -
  • - listStaticRoutes -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/manual-live-migration.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/manual-live-migration.html b/docs/tmp/en-US/epub/OEBPS/manual-live-migration.html deleted file mode 100644 index b0b4f5b..0000000 --- a/docs/tmp/en-US/epub/OEBPS/manual-live-migration.html +++ /dev/null @@ -1,31 +0,0 @@ - - -10.9. Moving VMs Between Hosts (Manual Live Migration)

10.9. Moving VMs Between Hosts (Manual Live Migration)

- The CloudStack administrator can move a running VM from one host to another without interrupting service to users or going into maintenance mode. This is called manual live migration, and it can be done under the following conditions: -
  • - The root administrator is logged in. Domain admins and users can not perform manual live migration of VMs. -
  • - The VM is running. Stopped VMs can not be live migrated. -
  • - The destination host must be in the same cluster as the original host. -
  • - The VM must not be using local disk storage. -
  • - The destination host must have enough available capacity. If not, the VM will remain in the "migrating" state until memory becomes available. -
- To manually live migrate a virtual machine -
  1. - Log in to the CloudStack UI as a user or admin. -
  2. - In the left navigation, click Instances. -
  3. - Choose the VM that you want to migrate. -
  4. - Click the Migrate Instance button - Migrateinstance.png: button to migrate an instance - -
  5. - From the list of hosts, choose the one to which you want to move the VM. -
  6. - Click OK. -
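- A hedged API equivalent of this procedure is the migrateVirtualMachine command; the UUIDs below are placeholders, and candidate destination hosts can be found with listHosts: -
command=migrateVirtualMachine
				&virtualmachineid=<vm-uuid>
				&hostid=<destination-host-uuid>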
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/multiple-system-vm-vmware.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/multiple-system-vm-vmware.html b/docs/tmp/en-US/epub/OEBPS/multiple-system-vm-vmware.html deleted file mode 100644 index 3ee805b..0000000 --- a/docs/tmp/en-US/epub/OEBPS/multiple-system-vm-vmware.html +++ /dev/null @@ -1,5 +0,0 @@ - - -16.2. Multiple System VM Support for VMware

16.2. Multiple System VM Support for VMware

- Every CloudStack zone has a single System VM for template processing tasks such as downloading templates, uploading templates, and uploading ISOs. In a zone where VMware is being used, additional System VMs can be launched to process VMware-specific tasks such as taking snapshots and creating private templates. The CloudStack management server launches additional System VMs for VMware-specific tasks as the load increases. The management server monitors and weighs all commands sent to these System VMs and performs dynamic load balancing, scaling up more System VMs as needed. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/network-offerings.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/network-offerings.html b/docs/tmp/en-US/epub/OEBPS/network-offerings.html deleted file mode 100644 index c8f88c0..0000000 --- a/docs/tmp/en-US/epub/OEBPS/network-offerings.html +++ /dev/null @@ -1,37 +0,0 @@ - - -9.4. Network Offerings

9.4. Network Offerings

Note

- For the most up-to-date list of supported network services, see the CloudStack UI or call listNetworkServices. -
- A network offering is a named set of network services, such as: -
  • - DHCP -
  • - DNS -
  • - Source NAT -
  • - Static NAT -
  • - Port Forwarding -
  • - Load Balancing -
  • - Firewall -
  • - VPN -
  • - (Optional) One of several available providers to use for a given service, such as Juniper for the firewall -
  • - (Optional) Network tag to specify which physical network to use -
- When creating a new VM, the user chooses one of the available network offerings, and that determines which network services the VM can use. -
- The CloudStack administrator can create any number of custom network offerings, in addition to the default network offerings provided by CloudStack. By creating multiple custom network offerings, you can set up your cloud to offer different classes of service on a single multi-tenant physical network. For example, while the underlying physical wiring may be the same for two tenants, tenant A may only need simple firewall protection for their website, while tenant B may be running a web server farm and require a scalable firewall solution, load balancing solution, and alternate networks for accessing the database backend. -

Note

- If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function. -
- When creating a new virtual network, the CloudStack administrator chooses which network offering to enable for that network. Each virtual network is associated with one network offering. A virtual network can be upgraded or downgraded by changing its associated network offering. If you do this, be sure to reprogram the physical network to match. -
- CloudStack also has internal network offerings for use by CloudStack system VMs. These network offerings are not visible to users but can be modified by administrators. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/network-service-providers.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/network-service-providers.html b/docs/tmp/en-US/epub/OEBPS/network-service-providers.html deleted file mode 100644 index a1cdbff..0000000 --- a/docs/tmp/en-US/epub/OEBPS/network-service-providers.html +++ /dev/null @@ -1,13 +0,0 @@ - - -9.3. Network Service Providers

9.3. Network Service Providers

Note

- For the most up-to-date list of supported network service providers, see the CloudStack UI or call listNetworkServiceProviders. -
- A service provider (also called a network element) is a hardware or virtual appliance that makes a network service possible; for example, a firewall appliance can be installed in the cloud to provide firewall service. On a single network, multiple providers can provide the same network service. For example, a firewall service may be provided by Cisco or Juniper devices in the same physical network. -
- You can have multiple instances of the same service provider in a network (say, more than one Juniper SRX device). -
- If different providers are set up to provide the same service on the network, the administrator can create network offerings so users can specify which network service provider they prefer (along with the other choices offered in network offerings). Otherwise, CloudStack will choose which provider to use whenever the service is called for. -
Supported Network Service Providers
- CloudStack ships with an internal list of the supported service providers, and you can choose from this list when creating a network offering. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/networking-in-a-pod.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/networking-in-a-pod.html b/docs/tmp/en-US/epub/OEBPS/networking-in-a-pod.html deleted file mode 100644 index 2e26c83..0000000 --- a/docs/tmp/en-US/epub/OEBPS/networking-in-a-pod.html +++ /dev/null @@ -1,15 +0,0 @@ - - -15.2. Networking in a Pod

15.2. Networking in a Pod

- Figure 2 illustrates network setup within a single pod. The hosts are connected to a pod-level switch. At a minimum, the hosts should have one physical uplink to each switch. Bonded NICs are supported as well. The pod-level switch is a pair of redundant gigabit switches with 10 G uplinks. -
networking-in-a-pod.png: Network setup in a pod
- Servers are connected as follows: -
  • - Storage devices are connected to only the network that carries management traffic. -
  • - Hosts are connected to networks for both management traffic and public traffic. -
  • - Hosts are also connected to one or more networks carrying guest traffic. -
- We recommend the use of multiple physical Ethernet cards to implement each network interface as well as redundant switch fabric in order to maximize throughput and improve reliability. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/networking-in-a-zone.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/networking-in-a-zone.html b/docs/tmp/en-US/epub/OEBPS/networking-in-a-zone.html deleted file mode 100644 index 121f606..0000000 --- a/docs/tmp/en-US/epub/OEBPS/networking-in-a-zone.html +++ /dev/null @@ -1,9 +0,0 @@ - - -15.3. Networking in a Zone

15.3. Networking in a Zone

- Figure 3 illustrates the network setup within a single zone. -
networking-in-a-zone.png: Network setup in a single zone
- A firewall for management traffic operates in NAT mode. The network is typically assigned IP addresses in the 192.168.0.0/16 Class B private address space. Each pod is assigned IP addresses in the 192.168.*.0/24 Class C private address space. -
- Each zone has its own set of public IP addresses. Public IP addresses from different zones do not overlap. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/networks.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/networks.html b/docs/tmp/en-US/epub/OEBPS/networks.html deleted file mode 100644 index 7e97caf..0000000 --- a/docs/tmp/en-US/epub/OEBPS/networks.html +++ /dev/null @@ -1,15 +0,0 @@ - - -Chapter 15. Managing Networks and Traffic

Chapter 15. Managing Networks and Traffic

- In a CloudStack cloud, guest VMs can communicate with each other using shared infrastructure, with the security and user perception that the guests have a private LAN. The CloudStack virtual router is the main component providing networking features for guest traffic. -

15.1. Guest Traffic

- A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address. -
- Figure 1 illustrates a typical guest traffic setup: -
guesttraffic.png: Depicts a guest traffic setup
- The Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router has three network interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address of 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic. -
- The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses. -
- Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/offerings.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/offerings.html b/docs/tmp/en-US/epub/OEBPS/offerings.html deleted file mode 100644 index c1152bd..0000000 --- a/docs/tmp/en-US/epub/OEBPS/offerings.html +++ /dev/null @@ -1,99 +0,0 @@ - - -Chapter 8. Service Offerings

Chapter 8. Service Offerings

- In this chapter we discuss compute, disk, and system service offerings. Network offerings are discussed in the section on setting up networking for users. -

8.1. Compute and Disk Service Offerings

- A service offering is a set of virtual hardware features such as CPU core count and speed, memory, and disk size. The CloudStack administrator can set up various offerings, and then end users choose from the available offerings when they create a new VM. A service offering includes the following elements: -
  • - CPU, memory, and network resource guarantees -
  • - How resources are metered -
  • - How the resource usage is charged -
  • - How often the charges are generated -
- For example, one service offering might allow users to create a virtual machine instance that is equivalent to a 1 GHz Intel® Core™ 2 CPU, with 1 GB memory at $0.20/hour, with network traffic metered at $0.10/GB. Based on the user’s selected offering, CloudStack emits usage records that can be integrated with billing systems. CloudStack separates service offerings into compute offerings and disk offerings. The computing service offering specifies: -
  • - Guest CPU -
  • - Guest RAM -
  • - Guest Networking type (virtual or direct) -
  • - Tags on the root disk -
- The disk offering specifies: -
  • - Disk size (optional). An offering without a disk size will allow users to pick their own -
  • - Tags on the data disk -

8.1.1. Creating a New Compute Offering

- To create a new compute offering: -
  1. - Log in with admin privileges to the CloudStack UI. -
  2. - In the left navigation bar, click Service Offerings. -
  3. - In Select Offering, choose Compute Offering. -
  4. - Click Add Compute Offering. -
  5. - In the dialog, make the following choices: -
    • - Name: Any desired name for the service offering. -
    • - Description: A short description of the offering that can be displayed to users -
    • - Storage type: The type of disk that should be allocated. Local allocates from storage attached directly to the host where the VM is running. Shared allocates from storage accessible via NFS. -
    • - # of CPU cores: The number of cores which should be allocated to a VM with this offering. -
    • - CPU (in MHz): The CPU speed of the cores that the VM is allocated. For example, “2000” would provide for a 2 GHz clock. -
    • - Memory (in MB): The amount of memory in megabytes that the VM should be allocated. For example, “2048” would provide for a 2 GB RAM allocation. -
    • - Network Rate: Allowed data transfer rate in MB per second. -
    • - Offer HA: If yes, the VM will be monitored and kept as highly available as possible. -
    • - Storage Tags: The tags that should be associated with the primary storage used by the VM. -
    • - Host Tags: (Optional) Any tags that you use to organize your hosts -
    • - CPU cap: Whether to limit the level of CPU usage even if spare capacity is available. -
    • - Public: Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name. -
  6. - Click Add. -
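- The same compute offering can also be created through the API. The following is a minimal sketch of a createServiceOffering call using values like those discussed above; treat the exact parameter set as an assumption and check the API reference for your release: -
command=createServiceOffering
				&name=Medium
				&displaytext=1%20CPU%20x%202GHz,%202GB%20RAM
				&cpunumber=1
				&cpuspeed=2000
				&memory=2048
				&storagetype=shared
				&offerha=true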

8.1.2. Creating a New Disk Offering

- To create a new disk offering: -
  1. - Log in with admin privileges to the CloudStack UI. -
  2. - In the left navigation bar, click Service Offerings. -
  3. - In Select Offering, choose Disk Offering. -
  4. - Click Add Disk Offering. -
  5. - In the dialog, make the following choices: -
    • - Name. Any desired name for the disk offering. -
    • - Description. A short description of the offering that can be displayed to users -
    • - Custom Disk Size. If checked, the user can set their own disk size. If not checked, the root administrator must define a value in Disk Size. -
    • - Disk Size. Appears only if Custom Disk Size is not selected. Define the volume size in GB. -
    • - (Optional) Storage Tags. The tags that should be associated with the primary storage for this disk. Tags are a comma-separated list of attributes of the storage, for example "ssd,blue". Tags are also added on Primary Storage. CloudStack matches tags on a disk offering to tags on the storage. If a tag is present on a disk offering, that tag (or tags) must also be present on Primary Storage for the volume to be provisioned. If no such primary storage exists, allocation from the disk offering will fail. -
    • - Public. Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name. -
  6. - Click Add. -
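- A hedged sketch of the equivalent createDiskOffering API call follows; disksize is in GB, and the name and tag values are placeholders: -
command=createDiskOffering
				&name=ssd-20GB
				&displaytext=20%20GB%20SSD-backed%20volume
				&disksize=20
				&tags=ssd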

8.1.3. Modifying or Deleting a Service Offering

- Service offerings cannot be changed once created. This applies to both compute offerings and disk offerings. -
- A service offering can be deleted. If it is no longer in use, it is deleted immediately and permanently. If the service offering is still in use, it will remain in the database until all the virtual machines referencing it have been deleted. After deletion by the administrator, a service offering will not be available to end users that are creating new instances. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/per-domain-limits.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/per-domain-limits.html b/docs/tmp/en-US/epub/OEBPS/per-domain-limits.html deleted file mode 100644 index d79f019..0000000 --- a/docs/tmp/en-US/epub/OEBPS/per-domain-limits.html +++ /dev/null @@ -1,16 +0,0 @@ - - -14.5. Per-Domain Limits

14.5. Per-Domain Limits

- CloudStack allows the configuration of limits on a domain basis. With a domain limit in place, all users still have their account limits. They are additionally limited, as a group, to not exceed the resource limits set on their domain. Domain limits aggregate the usage of all accounts in the domain as well as all accounts in all subdomains of that domain. Limits set at the root domain level apply to the sum of resource usage by the accounts in all domains and sub-domains below that root domain. -
- To set a domain limit: -
  1. - Log in to the CloudStack UI. -
  2. - In the left navigation tree, click Domains. -
  3. - Select the domain you want to modify. The current domain limits are displayed. A value of -1 shows that there is no limit in place. -
  4. - Click the Edit button - editbutton.png: edits the settings. -
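- Domain limits can also be set through the API with updateResourceLimit. The sketch below assumes resource type 0 (VM instances) and a placeholder domain UUID; other resource type codes are listed in the API reference: -
command=updateResourceLimit
				&domainid=<domain-uuid>
				&resourcetype=0
				&max=40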
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/pod-add.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/pod-add.html b/docs/tmp/en-US/epub/OEBPS/pod-add.html deleted file mode 100644 index 3a8ab1e..0000000 --- a/docs/tmp/en-US/epub/OEBPS/pod-add.html +++ /dev/null @@ -1,25 +0,0 @@ - - -7.3. Adding a Pod

7.3. Adding a Pod

- When you created a new zone, CloudStack adds the first pod for you. You can add more pods at any time using the procedure in this section. -
  1. - Log in to the CloudStack UI. See Section 5.1, “Log In to the UI”. -
  2. - In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone to which you want to add a pod. -
  3. - Click the Compute and Storage tab. In the Pods node of the diagram, click View All. -
  4. - Click Add Pod. -
  5. - Enter the following details in the dialog. -
    • - Name. The name of the pod. -
    • - Gateway. The gateway for the hosts in that pod. -
    • - Netmask. The network prefix that defines the pod's subnet. Use CIDR notation. -
    • - Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses. -
  6. - Click OK. -
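- The same pod can be added through the API with createPod; the following minimal sketch uses placeholder values for the zone and the reserved system IP range: -
command=createPod
				&zoneid=<zone-uuid>
				&name=pod1
				&gateway=192.168.10.1
				&netmask=255.255.255.0
				&startip=192.168.10.10
				&endip=192.168.10.50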
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/primary-storage-add.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/primary-storage-add.html b/docs/tmp/en-US/epub/OEBPS/primary-storage-add.html deleted file mode 100644 index 157f8d5..0000000 --- a/docs/tmp/en-US/epub/OEBPS/primary-storage-add.html +++ /dev/null @@ -1,63 +0,0 @@ - - -7.6. Add Primary Storage

7.6. Add Primary Storage

7.6.1. System Requirements for Primary Storage

- Hardware requirements: -
  • - Any standards-compliant iSCSI or NFS server that is supported by the underlying hypervisor. -
  • - The storage server should be a machine with a large number of disks. The disks should ideally be managed by a hardware RAID controller. -
  • - Minimum required capacity depends on your needs. -
- When setting up primary storage, follow these restrictions: -
  • - Primary storage cannot be added until a host has been added to the cluster. -
  • - If you do not provision shared primary storage, you must set the global configuration parameter system.vm.local.storage.required to true, or else you will not be able to start VMs. -

7.6.2. Adding Primary Storage

- When you create a new zone, the first primary storage is added as part of that procedure. You can add primary storage servers at any time, such as when adding a new cluster or adding more servers to an existing cluster. -

Warning

- Be sure there is nothing stored on the server. Adding the server to CloudStack will destroy any existing data. -
  1. - Log in to the CloudStack UI (see Section 5.1, “Log In to the UI”). -
  2. - In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the primary storage. -
  3. - Click the Compute tab. -
  4. - In the Primary Storage node of the diagram, click View All. -
  5. - Click Add Primary Storage. -
  6. - Provide the following information in the dialog. The information required varies depending on your choice in Protocol. -
    • - Pod. The pod for the storage device. -
    • - Cluster. The cluster for the storage device. -
    • - Name. The name of the storage device. -
    • - Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS. -
    • - Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage device. -
    • - Server (for VMFS). The IP address or DNS name of the vCenter server. -
    • - Path (for NFS). In NFS this is the exported path from the server. -
    • - Path (for VMFS). In vSphere this is a combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore". -
    • - Path (for SharedMountPoint). With KVM this is the path on each host that is where this primary storage is mounted. For example, "/mnt/primary". -
    • - SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up outside CloudStack. -
    • - Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984. -
    • - Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3. -
    • - Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings. -
    - The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2. -
  7. - Click OK. -
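- A hedged API equivalent is the createStoragePool command. The sketch below adds an NFS primary storage pool; the UUIDs, server name, and export path are placeholders for your environment: -
command=createStoragePool
				&zoneid=<zone-uuid>
				&podid=<pod-uuid>
				&clusterid=<cluster-uuid>
				&name=nfs-primary-1
				&url=nfs://nfs-server.example.com/export/primary
				&tags=T1,T2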
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/primary-storage-outage-and-data-loss.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/primary-storage-outage-and-data-loss.html b/docs/tmp/en-US/epub/OEBPS/primary-storage-outage-and-data-loss.html deleted file mode 100644 index 59ff847..0000000 --- a/docs/tmp/en-US/epub/OEBPS/primary-storage-outage-and-data-loss.html +++ /dev/null @@ -1,5 +0,0 @@ - - -17.4. Primary Storage Outage and Data Loss

17.4. Primary Storage Outage and Data Loss

- When a primary storage outage occurs, the hypervisor immediately stops all VMs stored on that storage device. Guests that are marked for HA will be restarted as soon as practical when the primary storage comes back online. With NFS, the hypervisor may allow the virtual machines to continue running, depending on the nature of the issue. For example, an NFS hang will cause the guest VMs to be suspended until storage connectivity is restored. Primary storage is not designed to be backed up. Individual volumes in primary storage can be backed up using snapshots. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/primary-storage.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/primary-storage.html b/docs/tmp/en-US/epub/OEBPS/primary-storage.html deleted file mode 100644 index 362302f..0000000 --- a/docs/tmp/en-US/epub/OEBPS/primary-storage.html +++ /dev/null @@ -1,5 +0,0 @@ - - -13.2. Primary Storage

13.2. Primary Storage

- This section gives concepts and technical details about CloudStack primary storage. For information about how to install and configure primary storage through the CloudStack UI, see the Advanced Installation Guide. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/private-public-template.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/private-public-template.html b/docs/tmp/en-US/epub/OEBPS/private-public-template.html deleted file mode 100644 index b62d475..0000000 --- a/docs/tmp/en-US/epub/OEBPS/private-public-template.html +++ /dev/null @@ -1,9 +0,0 @@ - - -12.5. Private and Public Templates

12.5. Private and Public Templates

- When a user creates a template, it can be designated private or public. -
- Private templates are only available to the user who created them. By default, an uploaded template is private. -
- When a user marks a template as “public,” the template becomes available to all users in all accounts in the user's domain, as well as users in any other domains that have access to the Zone where the template is stored. This depends on whether the Zone, in turn, was defined as private or public. A private Zone is assigned to a single domain, and a public Zone is accessible to any domain. If a public template is created in a private Zone, it is available only to users in the domain assigned to that Zone. If a public template is created in a public Zone, it is available to all users in all domains. -
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/d8e31c7a/docs/tmp/en-US/epub/OEBPS/projects.html ---------------------------------------------------------------------- diff --git a/docs/tmp/en-US/epub/OEBPS/projects.html b/docs/tmp/en-US/epub/OEBPS/projects.html deleted file mode 100644 index fa50e80..0000000 --- a/docs/tmp/en-US/epub/OEBPS/projects.html +++ /dev/null @@ -1,11 +0,0 @@ - - -Chapter 6. Using Projects to Organize Users and Resources

Chapter 6. Using Projects to Organize Users and Resources

6.1. Overview of Projects

- Projects are used to organize people and resources. CloudStack users within a single domain can group themselves into project teams so they can collaborate and share virtual resources such as VMs, snapshots, templates, data disks, and IP addresses. CloudStack tracks resource usage per project as well as per user, so the usage can be billed to either a user account or a project. For example, a private cloud within a software company might have all members of the QA department assigned to one project, so the company can track the resources used in testing while the project members can more easily isolate their efforts from other users of the same cloud. -
- You can configure CloudStack to allow any user to create a new project, or you can restrict that ability to just CloudStack administrators. Once you have created a project, you become that project’s administrator, and you can add others within your domain to the project. CloudStack can be set up either so that you can add people directly to a project, or so that you have to send an invitation which the recipient must accept. Project members can view and manage all virtual resources created by anyone in the project (for example, share VMs). A user can be a member of any number of projects and can switch views in the CloudStack UI to show only project-related information, such as project VMs, fellow project members, project-related alerts, and so on. -
- The project administrator can pass on the role to another project member. The project administrator can also add more members, remove members from the project, set new resource limits (as long as they are below the global defaults set by the CloudStack administrator), and delete the project. When the administrator removes a member from the project, resources created by that user, such as VM instances, remain with the project. This brings us to the subject of resource ownership and which resources can be used by a project. -
- Resources created within a project are owned by the project, not by any particular CloudStack account, and they can be used only within the project. A user who belongs to one or more projects can still create resources outside of those projects, and those resources belong to the user’s account; they will not be counted against the project’s usage or resource limits. You can create project-level networks to isolate traffic within the project and provide network services such as port forwarding, load balancing, VPN, and static NAT. A project can also make use of certain types of resources from outside the project, if those resources are shared. For example, a shared network or public template is available to any project in the domain. A project can get access to a private template if the template’s owner grants permission. A project can use any service offering or disk offering available in its domain; however, you cannot create private service and disk offerings at the project level. -