From: ke4qqq@apache.org
To: cloudstack-commits@incubator.apache.org
Reply-To: cloudstack-dev@incubator.apache.org
Mailing-List: contact cloudstack-commits-help@incubator.apache.org; run by ezmlm
Precedence: bulk
X-Mailer: ASF-Git Admin Mailer
Subject: git commit: DOCS: h-license headers and entity usage - see https://reviews.apache.org/r/6450/
Message-Id: <20120807221418.BC74E1C686@tyr.zones.apache.org>
Date: Tue, 7 Aug 2012 22:14:18 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

Updated Branches:
  refs/heads/master 06c38fd2f -> b4432e305

DOCS: h-license headers and entity usage - see https://reviews.apache.org/r/6450/

Project: http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/commit/b4432e30
Tree: http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/tree/b4432e30
Diff: http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/diff/b4432e30

Branch: refs/heads/master
Commit: b4432e305ee48031a4bc9da7cccd2b096cb6eba8
Parents: 06c38fd
Author: Joe Brockmeier
Authored: Tue Aug 7 18:14:44 2012 -0400
Committer: David Nalley
Committed: Tue Aug 7 18:14:44 2012 -0400

----------------------------------------------------------------------
 docs/en-US/about-working-with-vms.xml              |    6 +++
 docs/en-US/ha-enabled-vm.xml                       |   25 ++++++++++-
 docs/en-US/ha-for-hosts.xml                        |   25 ++++++++++-
 docs/en-US/ha-management-server.xml                |   23 ++++++++++-
 docs/en-US/host-add.xml                            |   23 ++++++++++-
 docs/en-US/host-allocation.xml                     |   29 +++++++++++--
 .../hypervisor-support-for-primarystorage.xml      |   31 ++++++++++++---
 7 files changed, 141 insertions(+), 21 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/b4432e30/docs/en-US/about-working-with-vms.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/about-working-with-vms.xml b/docs/en-US/about-working-with-vms.xml
index f7f3087..920c4e8 100644
--- a/docs/en-US/about-working-with-vms.xml
+++ b/docs/en-US/about-working-with-vms.xml
@@ -1,3 +1,9 @@
+
+
+%BOOK_ENTITIES;
+]>
+
+
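
A note for readers of this diff: the mail renderer has eaten most of the XML markup in the added lines, so the hunks above and below show little more than the surviving "%BOOK_ENTITIES;" and "]>" fragments. What the commit adds, per its subject line, is the ASF license header plus the Publican entity preamble at the top of each DocBook file, which is what makes the &PRODUCT; entity available. The sketch below illustrates that pattern; the DTD identifier, the "cloudstack.ent" file name, and the abridged license wording are assumptions for illustration, not text copied from the commit.

    <?xml version='1.0' encoding='utf-8' ?>
    <!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
              "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
    <!-- "cloudstack.ent" is an assumed entity-file name for this sketch -->
    <!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent">
    %BOOK_ENTITIES;
    ]>
    <!-- Standard ASF license header, abridged here:
         Licensed to the Apache Software Foundation (ASF) under one or more
         contributor license agreements. See the NOTICE file distributed with
         this work for additional information regarding copyright ownership.
         The ASF licenses this file to you under the Apache License, Version 2.0.
         You may obtain a copy of the License at

             http://www.apache.org/licenses/LICENSE-2.0

         Unless required by applicable law or agreed to in writing, software
         distributed under the License is distributed on an "AS IS" BASIS,
         WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
         implied. -->

Once this preamble is in place, &PRODUCT; can be used anywhere in the file body and is resolved to the product name at build time, which is what the CloudPlatform -> &PRODUCT; substitutions in the hunks below rely on.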
 HA-Enabled Virtual Machines
-    The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, CloudPlatform detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. CloudPlatform has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
+    The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, &PRODUCT; detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. &PRODUCT; has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
     HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.
-
\ No newline at end of file
+

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/b4432e30/docs/en-US/ha-for-hosts.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/ha-for-hosts.xml b/docs/en-US/ha-for-hosts.xml
index f555c3e..e395d22 100644
--- a/docs/en-US/ha-for-hosts.xml
+++ b/docs/en-US/ha-for-hosts.xml
@@ -1,10 +1,29 @@
-
 %BOOK_ENTITIES;
 ]>
+
+
+
 HA for Hosts
-    The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, CloudPlatform detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. CloudPlatform has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
+    The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, &PRODUCT; detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. &PRODUCT; has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
     HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.
-
\ No newline at end of file
+

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/b4432e30/docs/en-US/ha-management-server.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/ha-management-server.xml b/docs/en-US/ha-management-server.xml
index 27019cc..1afebce 100644
--- a/docs/en-US/ha-management-server.xml
+++ b/docs/en-US/ha-management-server.xml
@@ -1,11 +1,30 @@
-
 %BOOK_ENTITIES;
 ]>
+
+
+
 HA for Management Server
-    The CloudPlatform Management Server should be deployed in a multi-node configuration such that it is not susceptible to individual server failures. The Management Server itself (as distinct from the MySQL database) is stateless and may be placed behind a load balancer.
+    The &PRODUCT; Management Server should be deployed in a multi-node configuration such that it is not susceptible to individual server failures. The Management Server itself (as distinct from the MySQL database) is stateless and may be placed behind a load balancer.
     Normal operation of Hosts is not impacted by an outage of all Management Servers. All guest VMs will continue to work. When the Management Server is down, no new VMs can be created, and the end user and admin UI, API, dynamic load distribution, and HA will cease to work.
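
The edit being made throughout these hunks is the same mechanical substitution: where the prose hard-coded a product name, it now uses the &PRODUCT; entity so the published output picks the name up from the entity file at build time. A rough before/after sketch follows; the para wrapper is assumed, since the real markup has been stripped from this mail:

    <!-- before: product name hard-coded in the source -->
    <para>The CloudPlatform Management Server should be deployed in a
        multi-node configuration such that it is not susceptible to
        individual server failures.</para>

    <!-- after: the entity is resolved when the documentation is built -->
    <para>The &PRODUCT; Management Server should be deployed in a
        multi-node configuration such that it is not susceptible to
        individual server failures.</para>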
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/b4432e30/docs/en-US/host-add.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/host-add.xml b/docs/en-US/host-add.xml
index 7591ee5..e86760a 100644
--- a/docs/en-US/host-add.xml
+++ b/docs/en-US/host-add.xml
@@ -1,9 +1,28 @@
-
 %BOOK_ENTITIES;
 ]>
+
+
+
 Adding a Host
     TODO
-
\ No newline at end of file
+

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/b4432e30/docs/en-US/host-allocation.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/host-allocation.xml b/docs/en-US/host-allocation.xml
index 8a56322..8a362e6 100644
--- a/docs/en-US/host-allocation.xml
+++ b/docs/en-US/host-allocation.xml
@@ -1,12 +1,31 @@
-
 %BOOK_ENTITIES;
 ]>
+
+
+
 Host Allocation
     The system automatically picks the most appropriate host to run each virtual machine. End users may specify the zone in which the virtual machine will be created. End users do not have control over which host will run the virtual machine instance.
-    CloudPlatform administrators can specify that certain hosts should have a preference for particular types of guest instances. For example, an administrator could state that a host should have a preference to run Windows guests. The default host allocator will attempt to place guests of that OS type on such hosts first. If no such host is available, the allocator will place the instance wherever there is sufficient physical capacity.
-    Both vertical and horizontal allocation is allowed. Vertical allocation consumes all the resources of a given host before allocating any guests on a second host. This reduces power consumption in the cloud. Horizontal allocation places a guest on each host in a round-robin fashion. This may yield better performance to the guests in some cases. CloudPlatform also allows an element of CPU over-provisioning as configured by the administrator. Over-provisioning allows the administrator to commit more CPU cycles to the allocated guests than are actually available from the hardware.
-    CloudPlatform also provides a pluggable interface for adding new allocators. These custom allocators can provide any policy the administrator desires.
-
\ No newline at end of file
+    &PRODUCT; administrators can specify that certain hosts should have a preference for particular types of guest instances. For example, an administrator could state that a host should have a preference to run Windows guests. The default host allocator will attempt to place guests of that OS type on such hosts first. If no such host is available, the allocator will place the instance wherever there is sufficient physical capacity.
+    Both vertical and horizontal allocation is allowed. Vertical allocation consumes all the resources of a given host before allocating any guests on a second host. This reduces power consumption in the cloud. Horizontal allocation places a guest on each host in a round-robin fashion. This may yield better performance to the guests in some cases. &PRODUCT; also allows an element of CPU over-provisioning as configured by the administrator. Over-provisioning allows the administrator to commit more CPU cycles to the allocated guests than are actually available from the hardware.
+    &PRODUCT; also provides a pluggable interface for adding new allocators. These custom allocators can provide any policy the administrator desires.
+

http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/b4432e30/docs/en-US/hypervisor-support-for-primarystorage.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/hypervisor-support-for-primarystorage.xml b/docs/en-US/hypervisor-support-for-primarystorage.xml
index e0fa56b..7c547a6 100644
--- a/docs/en-US/hypervisor-support-for-primarystorage.xml
+++ b/docs/en-US/hypervisor-support-for-primarystorage.xml
@@ -1,8 +1,27 @@
-
 %BOOK_ENTITIES;
 ]>
+
+
+
 Hypervisor Support for Primary Storage
     The following table shows storage options and parameters for different hypervisors.
@@ -74,10 +93,10 @@
-    XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result the CloudPlatform can still support storage over-provisioning by running on thin-provisioned storage volumes.
-    KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case the CloudPlatform does not attempt to mount or unmount the storage as is done with NFS. The CloudPlatform requires that the administrator insure that the storage is available
-    Oracle VM supports both iSCSI and NFS storage. When iSCSI is used with OVM, the CloudPlatform administrator is responsible for setting up iSCSI on the host, including re-mounting the storage after the host recovers from a failure such as a network outage. With other hypervisors, CloudPlatform takes care of mounting the iSCSI target on the host whenever it discovers a connection with an iSCSI server and unmounting the target when it discovers the connection is down.
-    With NFS storage, CloudPlatform manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type.
+    XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result &PRODUCT; can still support storage over-provisioning by running on thin-provisioned storage volumes.
+    KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case &PRODUCT; does not attempt to mount or unmount the storage as is done with NFS. &PRODUCT; requires that the administrator ensure that the storage is available.
+    Oracle VM supports both iSCSI and NFS storage. When iSCSI is used with OVM, the &PRODUCT; administrator is responsible for setting up iSCSI on the host, including re-mounting the storage after the host recovers from a failure such as a network outage. With other hypervisors, &PRODUCT; takes care of mounting the iSCSI target on the host whenever it discovers a connection with an iSCSI server and unmounting the target when it discovers the connection is down.
+    With NFS storage, &PRODUCT; manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type.
     Local storage is an option for primary storage for vSphere, XenServer, Oracle VM, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration.
-    CloudPlatform supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity.
+    &PRODUCT; supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity.
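
For completeness: the &PRODUCT; entity used throughout these files is not defined in the files themselves; it comes from the entity file pulled in by the %BOOK_ENTITIES; parameter entity in each preamble. A minimal sketch of such a definition, assuming the file name and the product-name value rather than quoting them from the repository:

    <!-- cloudstack.ent (assumed name): entity definitions shared by the book -->
    <!ENTITY PRODUCT "CloudStack">

With that single definition in place, renaming the product across every page of the documentation becomes a one-line change to the entity file.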