Return-Path: X-Original-To: apmail-incubator-cloudstack-commits-archive@minotaur.apache.org Delivered-To: apmail-incubator-cloudstack-commits-archive@minotaur.apache.org Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id A1120E06B for ; Wed, 9 Jan 2013 13:26:58 +0000 (UTC) Received: (qmail 31959 invoked by uid 500); 9 Jan 2013 13:26:49 -0000 Delivered-To: apmail-incubator-cloudstack-commits-archive@incubator.apache.org Received: (qmail 31902 invoked by uid 500); 9 Jan 2013 13:26:49 -0000 Mailing-List: contact cloudstack-commits-help@incubator.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: cloudstack-dev@incubator.apache.org Delivered-To: mailing list cloudstack-commits@incubator.apache.org Received: (qmail 31109 invoked by uid 99); 9 Jan 2013 13:26:48 -0000 Received: from tyr.zones.apache.org (HELO tyr.zones.apache.org) (140.211.11.114) by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 09 Jan 2013 13:26:48 +0000 Received: by tyr.zones.apache.org (Postfix, from userid 65534) id 6F86811D3A; Wed, 9 Jan 2013 13:26:48 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: ahuang@apache.org To: cloudstack-commits@incubator.apache.org X-Mailer: ASF-Git Admin Mailer Subject: [16/50] [abbrv] Merge branch 'api_refactoring' into javelin Message-Id: <20130109132648.6F86811D3A@tyr.zones.apache.org> Date: Wed, 9 Jan 2013 13:26:48 +0000 (UTC) http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/detach-move-volumes.xml ---------------------------------------------------------------------- diff --cc docs/en-US/detach-move-volumes.xml index a902fdb,25323c9..fda6e66 --- a/docs/en-US/detach-move-volumes.xml +++ b/docs/en-US/detach-move-volumes.xml @@@ -5,42 -5,39 +5,42 @@@ ]> -
- Attaching a Volume - This procedure is different from moving disk volumes from one storage pool to another. See VM Storage Migration. - A volume can be detached from a guest VM and attached to another guest. Both &PRODUCT; administrators and users can detach volumes from VMs and move them to other VMs. - If the two VMs are in different clusters, and the volume is large, it may take several minutes for the volume to be moved to the new VM. + Detaching and Moving Volumes + This procedure is different from moving disk volumes from one storage pool to another. See VM Storage Migration. + A volume can be detached from a guest VM and attached to another guest. Both &PRODUCT; administrators and users can detach volumes from VMs and move them to other VMs. + If the two VMs are in different clusters, and the volume is large, it may take several minutes for the volume to be moved to the new VM. - If the destination VM is running in the OVM hypervisor, the VM must be stopped before a new volume can be attached to it. + - - Log in to the &PRODUCT; UI as a user or admin. - In the left navigation bar, click Storage, and choose Volumes in Select View. Alternatively, if you know which VM the volume is attached to, you can click Instances, click the VM name, and click View Volumes. - Click the name of the volume you want to detach, then click the Detach Disk button. - - - - DetachDiskButton.png: button to detach a volume - - - To move the volume to another VM, follow the steps in Attaching a Volume. - -
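For readers scripting this, the same detach-and-move flow can be driven through the &PRODUCT; API using the detachVolume and attachVolume commands listed in the API Reference. The sketch below is illustrative only: the endpoint assumes the optional unauthenticated integration API port (global setting integration.api.port) has been enabled on a hypothetical management server, and the volume and VM IDs are placeholders; a production script would instead sign each request with the caller's API key and secret key.

    # Illustrative sketch: detach a data volume and attach it to another VM.
    # Assumes the integration API port is enabled; host and IDs are placeholders.
    import json
    import urllib.parse
    import urllib.request

    API = "http://mgmt.example.com:8096/client/api"   # hypothetical management server

    def call(command, **params):
        """Issue one API command and return the parsed JSON response."""
        params.update(command=command, response="json")
        with urllib.request.urlopen(API + "?" + urllib.parse.urlencode(params)) as resp:
            return json.loads(resp.read().decode())

    volume_id = "volume-uuid-here"          # placeholder volume ID
    target_vm = "destination-vm-uuid-here"  # placeholder destination VM ID

    # Both commands are asynchronous; each response carries a job ID that can
    # be polled with queryAsyncJobResult.
    print(call("detachVolume", id=volume_id))
    print(call("attachVolume", id=volume_id, virtualmachineid=target_vm))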
+ + Log in to the &PRODUCT; UI as a user or admin. + In the left navigation bar, click Storage, and choose Volumes in Select View. Alternatively, if you know which VM the volume is attached to, you can click Instances, click the VM name, and click View Volumes. + Click the name of the volume you want to detach, then click the Detach Disk button. + + + + + DetachDiskButton.png: button to detach a volume + + + + To move the volume to another VM, follow the steps in . + + + http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/event-types.xml ---------------------------------------------------------------------- diff --cc docs/en-US/event-types.xml index 56059e1,2ccd553..5ce5857 --- a/docs/en-US/event-types.xml +++ b/docs/en-US/event-types.xml @@@ -5,215 -5,216 +5,216 @@@ ]> + or more contributor license agreements. See the NOTICE file + distributed with this work for additional information + regarding copyright ownership. The ASF licenses this file + to you under the Apache License, Version 2.0 (the + "License"); you may not use this file except in compliance + with the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, + software distributed under the License is distributed on an + "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + KIND, either express or implied. See the License for the + specific language governing permissions and limitations + under the License. +--> + - Event Types - - - - - - - VM.CREATE - TEMPLATE.EXTRACT - SG.REVOKE.INGRESS - - - VM.DESTROY - TEMPLATE.UPLOAD - HOST.RECONNECT - - - VM.START - TEMPLATE.CLEANUP - MAINT.CANCEL - - - VM.STOP - VOLUME.CREATE - MAINT.CANCEL.PS - - - VM.REBOOT - VOLUME.DELETE - MAINT.PREPARE - - - VM.UPGRADE - VOLUME.ATTACH - MAINT.PREPARE.PS - - - VM.RESETPASSWORD - VOLUME.DETACH - VPN.REMOTE.ACCESS.CREATE - - - ROUTER.CREATE - VOLUME.UPLOAD - VPN.USER.ADD - - - ROUTER.DESTROY - SERVICEOFFERING.CREATE - VPN.USER.REMOVE - - - ROUTER.START - SERVICEOFFERING.UPDATE - NETWORK.RESTART - - - ROUTER.STOP - SERVICEOFFERING.DELETE - UPLOAD.CUSTOM.CERTIFICATE - - - ROUTER.REBOOT - DOMAIN.CREATE - UPLOAD.CUSTOM.CERTIFICATE - - - ROUTER.HA - DOMAIN.DELETE - STATICNAT.DISABLE - - - PROXY.CREATE - DOMAIN.UPDATE - SSVM.CREATE - - - PROXY.DESTROY - SNAPSHOT.CREATE - SSVM.DESTROY - - - PROXY.START - SNAPSHOT.DELETE - SSVM.START - - - PROXY.STOP - SNAPSHOTPOLICY.CREATE - SSVM.STOP - - - PROXY.REBOOT - SNAPSHOTPOLICY.UPDATE - SSVM.REBOOT - - - PROXY.HA - SNAPSHOTPOLICY.DELETE - SSVM.H - - - VNC.CONNECT - VNC.DISCONNECT - NET.IPASSIGN - - - NET.IPRELEASE - NET.RULEADD - NET.RULEDELETE - - - NET.RULEMODIFY - NETWORK.CREATE - NETWORK.DELETE - - - LB.ASSIGN.TO.RULE - LB.REMOVE.FROM.RULE - LB.CREATE - - - LB.DELETE - LB.UPDATE - USER.LOGIN - - - USER.LOGOUT - USER.CREATE - USER.DELETE - - - USER.UPDATE - USER.DISABLE - TEMPLATE.CREATE - - - TEMPLATE.DELETE - TEMPLATE.UPDATE - TEMPLATE.COPY - - - TEMPLATE.DOWNLOAD.START - TEMPLATE.DOWNLOAD.SUCCESS - TEMPLATE.DOWNLOAD.FAILED - - - ISO.CREATE - ISO.DELETE - ISO.COPY - - - ISO.ATTACH - ISO.DETACH - ISO.EXTRACT - - - ISO.UPLOAD - SERVICE.OFFERING.CREATE - SERVICE.OFFERING.EDIT - - - SERVICE.OFFERING.DELETE - DISK.OFFERING.CREATE - DISK.OFFERING.EDIT - - - DISK.OFFERING.DELETE - NETWORK.OFFERING.CREATE - NETWORK.OFFERING.EDIT - - - NETWORK.OFFERING.DELETE - POD.CREATE - POD.EDIT - - - POD.DELETE - ZONE.CREATE - ZONE.EDIT - - - ZONE.DELETE - VLAN.IP.RANGE.CREATE - 
VLAN.IP.RANGE.DELETE - - - CONFIGURATION.VALUE.EDIT - SG.AUTH.INGRESS - - - - - + Event Types + + + + + + + VM.CREATE + TEMPLATE.EXTRACT + SG.REVOKE.INGRESS + + + VM.DESTROY + TEMPLATE.UPLOAD + HOST.RECONNECT + + + VM.START + TEMPLATE.CLEANUP + MAINT.CANCEL + + + VM.STOP + VOLUME.CREATE + MAINT.CANCEL.PS + + + VM.REBOOT + VOLUME.DELETE + MAINT.PREPARE + + + VM.UPGRADE + VOLUME.ATTACH + MAINT.PREPARE.PS + + + VM.RESETPASSWORD + VOLUME.DETACH + VPN.REMOTE.ACCESS.CREATE + + + ROUTER.CREATE + VOLUME.UPLOAD + VPN.USER.ADD + + + ROUTER.DESTROY + SERVICEOFFERING.CREATE + VPN.USER.REMOVE + + + ROUTER.START + SERVICEOFFERING.UPDATE + NETWORK.RESTART + + + ROUTER.STOP + SERVICEOFFERING.DELETE + UPLOAD.CUSTOM.CERTIFICATE + + + ROUTER.REBOOT + DOMAIN.CREATE + UPLOAD.CUSTOM.CERTIFICATE + + + ROUTER.HA + DOMAIN.DELETE + STATICNAT.DISABLE + + + PROXY.CREATE + DOMAIN.UPDATE + SSVM.CREATE + + + PROXY.DESTROY + SNAPSHOT.CREATE + SSVM.DESTROY + + + PROXY.START + SNAPSHOT.DELETE + SSVM.START + + + PROXY.STOP + SNAPSHOTPOLICY.CREATE + SSVM.STOP + + + PROXY.REBOOT + SNAPSHOTPOLICY.UPDATE + SSVM.REBOOT + + + PROXY.HA + SNAPSHOTPOLICY.DELETE + SSVM.H + + + VNC.CONNECT + VNC.DISCONNECT + NET.IPASSIGN + + + NET.IPRELEASE + NET.RULEADD + NET.RULEDELETE + + + NET.RULEMODIFY + NETWORK.CREATE + NETWORK.DELETE + + + LB.ASSIGN.TO.RULE + LB.REMOVE.FROM.RULE + LB.CREATE + + + LB.DELETE + LB.UPDATE + USER.LOGIN + + + USER.LOGOUT + USER.CREATE + USER.DELETE + + + USER.UPDATE + USER.DISABLE + TEMPLATE.CREATE + + + TEMPLATE.DELETE + TEMPLATE.UPDATE + TEMPLATE.COPY + + + TEMPLATE.DOWNLOAD.START + TEMPLATE.DOWNLOAD.SUCCESS + TEMPLATE.DOWNLOAD.FAILED + + + ISO.CREATE + ISO.DELETE + ISO.COPY + + + ISO.ATTACH + ISO.DETACH + ISO.EXTRACT + + + ISO.UPLOAD + SERVICE.OFFERING.CREATE + SERVICE.OFFERING.EDIT + + + SERVICE.OFFERING.DELETE + DISK.OFFERING.CREATE + DISK.OFFERING.EDIT + + + DISK.OFFERING.DELETE + NETWORK.OFFERING.CREATE + NETWORK.OFFERING.EDIT + + + NETWORK.OFFERING.DELETE + POD.CREATE + POD.EDIT + + + POD.DELETE + ZONE.CREATE + ZONE.EDIT + + + ZONE.DELETE + VLAN.IP.RANGE.CREATE + VLAN.IP.RANGE.DELETE + + + CONFIGURATION.VALUE.EDIT + SG.AUTH.INGRESS + + + + + - + http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/feature-overview.xml ---------------------------------------------------------------------- diff --cc docs/en-US/feature-overview.xml index f0739ca,501bca8..a05078f --- a/docs/en-US/feature-overview.xml +++ b/docs/en-US/feature-overview.xml @@@ -5,63 -5,78 +5,77 @@@ ]> -
- What Can &PRODUCT; Do? - - Multiple Hypervisor Support - - + What Can &PRODUCT; Do? + + Multiple Hypervisor Support + + - &PRODUCT; works with a variety of hypervisors. A single cloud deployment can contain multiple hypervisor implementations. You have the complete freedom to choose the right hypervisor for your workload. - - &PRODUCT; is designed to work with open source Xen and KVM hypervisors as well as - enterprise-grade hypervisors such as Citrix XenServer, VMware vSphere, and Oracle VM - (OVM). + &PRODUCT; works with a variety of hypervisors, and a single cloud deployment can contain multiple hypervisor implementations. The current release of &PRODUCT; supports pre-packaged enterprise solutions like Citrix XenServer and VMware vSphere, as well as KVM or Xen running on Ubuntu or CentOS. + - - Massively Scalable Infrastructure Management - - - &PRODUCT; can manage tens of thousands of servers installed in multiple geographically distributed datacenters. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud. - - - Automatic Configuration Management - - &PRODUCT; automatically configures each guest virtual machine’s networking and storage settings. - - &PRODUCT; internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN access, console proxy, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a cloud deployment. - - - Graphical User Interface - - &PRODUCT; offers an administrator's Web interface, used for provisioning and managing the cloud, as well as an end-user's Web interface, used for running VMs and managing VM templates. The UI can be customized to reflect the desired service provider or enterprise look and feel. - - - API and Extensibility - - + + Massively Scalable Infrastructure Management + + + &PRODUCT; can manage tens of thousands of servers installed in multiple geographically distributed datacenters. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud. + + + Automatic Configuration Management + + &PRODUCT; automatically configures each guest virtual machine’s networking and storage settings. + + &PRODUCT; internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN access, console proxy, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a cloud deployment. + + + Graphical User Interface + + &PRODUCT; offers an administrator's Web interface, used for provisioning and managing the cloud, as well as an end-user's Web interface, used for running VMs and managing VM templates. The UI can be customized to reflect the desired service provider or enterprise look and feel. 
+ + + API and Extensibility + + - &PRODUCT; provides an API that gives programmatic access to all the management features available in the UI. The API is maintained and documented. This API enables the creation of command line tools and new user interfaces to suit particular needs. See the Developer’s Guide and API Reference, both available at http://docs.cloud.com/CloudStack_Documentation. + &PRODUCT; provides an API that gives programmatic access to all the + management features available in the UI. The API is maintained and + documented. This API enables the creation of command line tools and + new user interfaces to suit particular needs. See the Developer’s + Guide and API Reference, both available at + Apache CloudStack Guides + and + Apache CloudStack API Reference + respectively. - - + + - The &PRODUCT; pluggable allocation architecture allows the creation of new types of allocators for the selection of storage and Hosts. See the Allocator Implementation Guide (http://docs.cloudstack.org/CloudStack_Documentation/Allocator_Implementation_Guide). + The &PRODUCT; pluggable allocation architecture allows the creation + of new types of allocators for the selection of storage and Hosts. + See the Allocator Implementation Guide + (http://docs.cloudstack.org/CloudStack_Documentation/Allocator_Implementation_Guide). - - - High Availability - + + + High Availability + - &PRODUCT; has a number of features to increase the availability of the system. The Management Server itself may be deployed in a multi-node installation where the servers are load balanced. MySQL may be configured to use replication to provide for a manual failover in the event of database loss. For the hosts, &PRODUCT; supports NIC bonding and the use of separate networks for storage as well as iSCSI Multipath. + + &PRODUCT; has a number of features to increase the availability of the + system. The Management Server itself may be deployed in a multi-node + installation where the servers are load balanced. MySQL may be configured + to use replication to provide for a manual failover in the event of + database loss. For the hosts, &PRODUCT; supports NIC bonding and the use + of separate networks for storage as well as iSCSI Multipath. - +
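To make the "programmatic access" point above concrete, the sketch below builds a signed request in the usual &PRODUCT; style: parameters are sorted and URL-encoded, the whole query string is lowercased, HMAC-SHA1'd with the secret key, and the Base64 signature is appended. The endpoint and keys are placeholders; the API Reference linked above remains the authoritative description of the signing rules.

    # Minimal sketch of a signed API request; endpoint and keys are placeholders.
    import base64
    import hashlib
    import hmac
    import urllib.parse
    import urllib.request

    ENDPOINT = "http://mgmt.example.com:8080/client/api"  # hypothetical management server
    API_KEY = "your-api-key"
    SECRET_KEY = "your-secret-key"

    def signed_url(command, **params):
        params.update(command=command, response="json", apikey=API_KEY)
        # Sort and URL-encode the parameters, lowercase the result, then
        # HMAC-SHA1 it with the secret key and Base64-encode the digest.
        query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
                         for k, v in sorted(params.items()))
        digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
        signature = urllib.parse.quote(base64.b64encode(digest))
        return "%s?%s&signature=%s" % (ENDPOINT, query, signature)

    # Example: list the caller's virtual machines.
    print(urllib.request.urlopen(signed_url("listVirtualMachines")).read().decode())

The same helper works for any other command in the API Reference; only the command name and parameters change.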
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/guest-traffic.xml ---------------------------------------------------------------------- diff --cc docs/en-US/guest-traffic.xml index ebee698,8404968..16dfa41 --- a/docs/en-US/guest-traffic.xml +++ b/docs/en-US/guest-traffic.xml @@@ -5,33 -5,27 +5,26 @@@ ]> -
- Guest Traffic - A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address. - The Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router has three network interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address of 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic. - The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses. - Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs + Guest Traffic + A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address. - See a typical guest traffic setup given below: - - - - - guesttraffic.png: Depicts a guest traffic setup - + The Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router has three network interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address of 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic. + The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses. + Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs
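The virtual-router layout described above can also be inspected through the API. The sketch below is a hedged example: it assumes the unauthenticated integration API port is enabled, uses a placeholder hostname, and relies on listRouters (an admin-level call) whose response field names — guestipaddress, linklocalip, publicip — should be verified against the API Reference for your release.

    # Sketch: list virtual routers and print the addresses of their three interfaces.
    import json
    import urllib.parse
    import urllib.request

    API = "http://mgmt.example.com:8096/client/api"   # hypothetical management server

    query = urllib.parse.urlencode({"command": "listRouters", "response": "json"})
    with urllib.request.urlopen(API + "?" + query) as resp:
        routers = json.loads(resp.read().decode())["listroutersresponse"].get("router", [])

    for r in routers:
        # guestipaddress ~ eth0 (guest gateway), linklocalip ~ eth1 (control),
        # publicip ~ eth2 (public traffic)
        print(r["name"], r.get("guestipaddress"), r.get("linklocalip"), r.get("publicip"))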
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/hypervisor-support-for-primarystorage.xml ---------------------------------------------------------------------- diff --cc docs/en-US/hypervisor-support-for-primarystorage.xml index 23c8eb5,055c182..7c2596e --- a/docs/en-US/hypervisor-support-for-primarystorage.xml +++ b/docs/en-US/hypervisor-support-for-primarystorage.xml @@@ -5,95 -5,91 +5,88 @@@ ]> -
- Hypervisor Support for Primary Storage - The following table shows storage options and parameters for different hypervisors. - - - - - - - - - - - - - VMware vSphere - Citrix XenServer - KVM - - - - - Format for Disks, Templates, and - Snapshots - VMDK - VHD - QCOW2 - - - iSCSI support - VMFS - Clustered LVM - Yes, via Shared Mountpoint - - - Fiber Channel support - VMFS - Yes, via Existing SR - Yes, via Shared Mountpoint - - - NFS support - Y - Y - Y - - - - Local storage support - Y - Y - Y - - - - Storage over-provisioning - NFS and iSCSI - NFS - NFS - - - - - - XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result the &PRODUCT; can still support storage over-provisioning by running on thin-provisioned storage volumes. - KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case the &PRODUCT; does not attempt to mount or unmount the storage as is done with NFS. The &PRODUCT; requires that the administrator insure that the storage is available + Hypervisor Support for Primary Storage + The following table shows storage options and parameters for different hypervisors. + + + + + + + + + + + VMware vSphere + Citrix XenServer + KVM - Oracle VM + + + + + Format for Disks, Templates, and + Snapshots + VMDK + VHD + QCOW2 - RAW + + + iSCSI support + VMFS + Clustered LVM + Yes, via Shared Mountpoint - Yes, via OCFS2M + + + Fiber Channel support + VMFS + Yes, via Existing SR + Yes, via Shared Mountpoint - No + + + NFS support + Y + Y + Y - Y + + + + Local storage support + Y + Y + Y - Y + + + + Storage over-provisioning + NFS and iSCSI + NFS + NFS - No + + + + + + XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result the &PRODUCT; can still support storage over-provisioning by running on thin-provisioned storage volumes. + KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case the &PRODUCT; does not attempt to mount or unmount the storage as is done with NFS. The &PRODUCT; requires that the administrator insure that the storage is available - Oracle VM supports both iSCSI and NFS storage. When iSCSI is used with OVM, the &PRODUCT; administrator is responsible for setting up iSCSI on the host, including re-mounting the storage after the host recovers from a failure such as a network outage. With other hypervisors, &PRODUCT; takes care of mounting the iSCSI target on the host whenever it discovers a connection with an iSCSI server and unmounting the target when it discovers the connection is down. + - With NFS storage, &PRODUCT; manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type. 
+ With NFS storage, &PRODUCT; manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type. - Local storage is an option for primary storage for vSphere, XenServer, Oracle VM, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration. + Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration. - &PRODUCT; supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity. -
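Both global settings mentioned here — storage.overprovisioning.factor and system.vm.use.local.storage — can be changed in the UI under Global Settings or scripted with the updateConfiguration API command. A hedged sketch follows, again against a placeholder management server with the unauthenticated integration API port enabled; note that most global-setting changes take effect only after the Management Server is restarted.

    # Sketch: adjust the global settings discussed above via updateConfiguration.
    import json
    import urllib.parse
    import urllib.request

    API = "http://mgmt.example.com:8096/client/api"   # hypothetical management server

    def set_global(name, value):
        query = urllib.parse.urlencode(
            {"command": "updateConfiguration", "name": name, "value": value, "response": "json"})
        with urllib.request.urlopen(API + "?" + query) as resp:
            return json.loads(resp.read().decode())

    print(set_global("storage.overprovisioning.factor", "2"))
    print(set_global("system.vm.use.local.storage", "true"))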
+ &PRODUCT; supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity. + http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/log-in-root-admin.xml ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/manage-cloud.xml ---------------------------------------------------------------------- diff --cc docs/en-US/manage-cloud.xml index 06d4e3f,f5df2c6..d356673 --- a/docs/en-US/manage-cloud.xml +++ b/docs/en-US/manage-cloud.xml @@@ -5,28 -5,29 +5,29 @@@ ]> + - Managing the Cloud - - - + Managing the Cloud + + + - - - + + + http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/manual-live-migration.xml ---------------------------------------------------------------------- diff --cc docs/en-US/manual-live-migration.xml index 677cfc4,52de4c4..225f0ba --- a/docs/en-US/manual-live-migration.xml +++ b/docs/en-US/manual-live-migration.xml @@@ -5,47 -5,48 +5,47 @@@ ]> -
- Moving VMs Between Hosts (Manual Live Migration) - The &PRODUCT; administrator can move a running VM from one host to another without interrupting service to users or going into maintenance mode. This is called manual live migration, and can be done under the following conditions: - - The root administrator is logged in. Domain admins and users can not perform manual live migration of VMs. - The VM is running. Stopped VMs can not be live migrated. - The destination host must be in the same cluster as the original host. - The VM must not be using local disk storage. - The destination host must have enough available capacity. If not, the VM will remain in the "migrating" state until memory becomes available. + Moving VMs Between Hosts (Manual Live Migration) + The &PRODUCT; administrator can move a running VM from one host to another without interrupting service to users or going into maintenance mode. This is called manual live migration, and can be done under the following conditions: + + The root administrator is logged in. Domain admins and users can not perform manual live migration of VMs. + The VM is running. Stopped VMs can not be live migrated. + The destination host must be in the same cluster as the original host. + The VM must not be using local disk storage. + The destination host must have enough available capacity. If not, the VM will remain in the "migrating" state until memory becomes available. - (OVM) If the VM is running on the OVM hypervisor, it must not have an ISO attached. Live migration of a VM with attached ISO is not supported in OVM. + - - To manually live migrate a virtual machine - - Log in to the &PRODUCT; UI as a user or admin. - In the left navigation, click Instances. - Choose the VM that you want to migrate. - Click the Migrate Instance button - - - - Migrateinstance.png: button to migrate an instance - - - From the list of hosts, choose the one to which you want to move the VM. - Click OK. - -
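The UI procedure above maps to the migrateVirtualMachine API command, which takes the running VM and the destination host (which, per the conditions above, must be in the same cluster). The sketch below is illustrative: the endpoint assumes the unauthenticated integration API port is enabled, and the IDs are placeholders.

    # Sketch: trigger a manual live migration via migrateVirtualMachine.
    # The command is asynchronous; the response carries a job ID that can be
    # polled with queryAsyncJobResult.
    import json
    import urllib.parse
    import urllib.request

    API = "http://mgmt.example.com:8096/client/api"   # hypothetical management server

    params = {
        "command": "migrateVirtualMachine",
        "virtualmachineid": "running-vm-uuid-here",   # placeholder: the VM to move
        "hostid": "destination-host-uuid-here",       # placeholder: host in the same cluster
        "response": "json",
    }
    with urllib.request.urlopen(API + "?" + urllib.parse.urlencode(params)) as resp:
        print(json.loads(resp.read().decode()))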
+ + To manually live migrate a virtual machine + + Log in to the &PRODUCT; UI as a user or admin. + In the left navigation, click Instances. + Choose the VM that you want to migrate. + Click the Migrate Instance button. + + + + Migrateinstance.png: button to migrate an instance + + + From the list of hosts, choose the one to which you want to move the VM. + Click OK. + + http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/minimum-system-requirements.xml ---------------------------------------------------------------------- diff --cc docs/en-US/minimum-system-requirements.xml index 1bbe1e2,dcab039..0e497dd --- a/docs/en-US/minimum-system-requirements.xml +++ b/docs/en-US/minimum-system-requirements.xml @@@ -5,75 -5,66 +5,69 @@@ ]> -
- Minimum System Requirements -
- Management Server, Database, and Storage System Requirements - The machines that will run the Management Server and MySQL database must meet the following requirements. The same machines can also be used to provide primary and secondary storage, such as via localdisk or NFS. The Management Server may be placed on a virtual machine. - - Operating system: - + Minimum System Requirements +
+ Management Server, Database, and Storage System Requirements + + The machines that will run the Management Server and MySQL database must meet the following requirements. + The same machines can also be used to provide primary and secondary storage, such as via localdisk or NFS. + The Management Server may be placed on a virtual machine. + + + Operating system: + - Preferred: RHEL 6.2+ 64-bit (https://access.redhat.com/downloads) or CentOS 6.2+ 64-bit (http://isoredirect.centos.org/centos/6/isos/x86_64/). - Also supported (v3.0.3 and greater): RHEL and CentOS 5.4-5.x 64-bit - It is highly recommended that you purchase a RHEL support license. - Citrix support can not be responsible for helping fix issues with the underlying OS. + Preferred: CentOS/RHEL 6.3+ or Ubuntu 12.04(.1) - - - 64-bit x86 CPU (more cores results in better performance) - 4 GB of memory - 50 GB of local disk (When running secondary storage on the management server 500GB is recommended) - At least 1 NIC - Statically allocated IP address - Fully qualified domain name as returned by the hostname command - -
-
- Host/Hypervisor System Requirements - The host is where the cloud services run in the form of guest virtual machines. Each host is one machine that meets the following requirements: - + + + 64-bit x86 CPU (more cores results in better performance) + 4 GB of memory + 250 GB of local disk (more results in better capability; 500 GB recommended) + At least 1 NIC + Statically allocated IP address + Fully qualified domain name as returned by the hostname command + +
+
+ Host/Hypervisor System Requirements + The host is where the cloud services run in the form of guest virtual machines. Each host is one machine that meets the following requirements: + - Must be 64-bit and must support HVM (Intel-VT or AMD-V enabled). + Must support HVM (Intel-VT or AMD-V enabled). - 64-bit x86 CPU (more cores results in better performance) - Hardware virtualization support required - 4 GB of memory - 36 GB of local disk - At least 1 NIC + 64-bit x86 CPU (more cores results in better performance) + Hardware virtualization support required + 4 GB of memory + 36 GB of local disk + At least 1 NIC - Statically allocated IP Address + If DHCP is used for hosts, ensure that no conflict occurs between DHCP server used for these hosts and the DHCP router created by &PRODUCT;. - Latest hotfixes applied to hypervisor software - When you deploy &PRODUCT;, the hypervisor host must not have any VMs already running + Latest hotfixes applied to hypervisor software + When you deploy &PRODUCT;, the hypervisor host must not have any VMs already running + All hosts within a cluster must be homogenous. The CPUs must be of the same type, count, and feature flags. - - Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor: + + Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor: - - Citrix XenServer Installation - VMware vSphere Installation and Configuration - KVM Installation and Configuration - Oracle VM (OVM) Installation and Configuration - - - - Be sure you fulfill the additional hypervisor requirements and installation steps provided in this Guide. Hypervisor hosts must be properly prepared to work with CloudStack. For example, the requirements for XenServer are listed under Citrix XenServer Installation. - - -
- -
- + + Be sure you fulfill the additional hypervisor requirements and installation steps provided in this Guide. Hypervisor hosts must be properly prepared to work with CloudStack. For example, the requirements for XenServer are listed under Citrix XenServer Installation. + + + + + + + +
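One quick way to check the "must support HVM (Intel-VT or AMD-V enabled)" host requirement on a candidate Linux machine is to look for the vmx/svm CPU flags. The sketch below is Linux-only and only shows that the CPU advertises the extensions; they must also be enabled in the BIOS/firmware for the hypervisor to use them.

    # Linux-only sketch: check for hardware virtualization flags
    # (vmx = Intel VT-x, svm = AMD-V) in /proc/cpuinfo.
    import re

    with open("/proc/cpuinfo") as f:
        flags = set(re.findall(r"\b(vmx|svm)\b", f.read()))

    if flags:
        print("Hardware virtualization flags present:", ", ".join(sorted(flags)))
    else:
        print("No vmx/svm flags found; this host does not meet the HVM requirement.")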
+ http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/network-offerings.xml ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/primary-storage.xml ---------------------------------------------------------------------- diff --cc docs/en-US/primary-storage.xml index de4005e,e1736a9..4ab37ef --- a/docs/en-US/primary-storage.xml +++ b/docs/en-US/primary-storage.xml @@@ -23,12 -23,12 +23,12 @@@ -->
- Primary Storage + Primary Storage - This section gives concepts and technical details about &PRODUCT; primary storage. For information about how to install and configure primary storage through the &PRODUCT; UI, see the Advanced Installation Guide. - - - - - + This section gives concepts and technical details about &PRODUCT; primary storage. For information about how to install and configure primary storage through the &PRODUCT; UI, see the Installation Guide. + + + + + +
- http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/secondary-storage.xml ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/security-groups.xml ---------------------------------------------------------------------- diff --cc docs/en-US/security-groups.xml index 07b9f79,3c08965..b6eecc3 --- a/docs/en-US/security-groups.xml +++ b/docs/en-US/security-groups.xml @@@ -23,14 -23,9 +23,15 @@@ -->
+<<<<<<< HEAD + Using Security Groups to Control Traffic to VMs + +======= Security Groups + +>>>>>>> master +
http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/storage.xml ---------------------------------------------------------------------- diff --cc docs/en-US/storage.xml index 49ebed9,86d3f53..580fe59 --- a/docs/en-US/storage.xml +++ b/docs/en-US/storage.xml @@@ -23,11 -23,10 +23,10 @@@ --> - Working With Storage - - - - - + Working With Storage + + + + + - http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/system-reserved-ip-addresses.xml ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/time-zones.xml ---------------------------------------------------------------------- diff --cc docs/en-US/time-zones.xml index c187ad3,aa8eefb..6b3b64e --- a/docs/en-US/time-zones.xml +++ b/docs/en-US/time-zones.xml @@@ -23,116 -23,115 +23,115 @@@ --> - Time Zones + Time Zones - The following time zone identifiers are accepted by &PRODUCT;. There are several places that have a time zone as a required or optional parameter. These include scheduling recurring snapshots, creating a user, and specifying the usage time zone in the Configuration table. . + The following time zone identifiers are accepted by &PRODUCT;. There are several places that have a time zone as a required or optional parameter. These include scheduling recurring snapshots, creating a user, and specifying the usage time zone in the Configuration table. - - - - - - - - Etc/GMT+12 - Etc/GMT+11 - Pacific/Samoa - - - Pacific/Honolulu - US/Alaska - America/Los_Angeles - - - Mexico/BajaNorte - US/Arizona - US/Mountain - - - America/Chihuahua - America/Chicago - America/Costa_Rica - - - America/Mexico_City - Canada/Saskatchewan - America/Bogota - - - America/New_York - America/Caracas - America/Asuncion - - - America/Cuiaba - America/Halifax - America/La_Paz - - - America/Santiago - America/St_Johns - America/Araguaina - - - America/Argentina/Buenos_Aires - America/Cayenne - America/Godthab - - - America/Montevideo - Etc/GMT+2 - Atlantic/Azores - - - Atlantic/Cape_Verde - Africa/Casablanca - Etc/UTC - - - Atlantic/Reykjavik - Europe/London - CET - - - Europe/Bucharest - Africa/Johannesburg - Asia/Beirut - - - Africa/Cairo - Asia/Jerusalem - Europe/Minsk - - - Europe/Moscow - Africa/Nairobi - Asia/Karachi - - - Asia/Kolkata - Asia/Bangkok - Asia/Shanghai - - - Asia/Kuala_Lumpur - Australia/Perth - Asia/Taipei - - - Asia/Tokyo - Asia/Seoul - Australia/Adelaide - - - Australia/Darwin - Australia/Brisbane - Australia/Canberra - - - Pacific/Guam - Pacific/Auckland - - - - - + + + + + + + + Etc/GMT+12 + Etc/GMT+11 + Pacific/Samoa + + + Pacific/Honolulu + US/Alaska + America/Los_Angeles + + + Mexico/BajaNorte + US/Arizona + US/Mountain + + + America/Chihuahua + America/Chicago + America/Costa_Rica + + + America/Mexico_City + Canada/Saskatchewan + America/Bogota + + + America/New_York + America/Caracas + America/Asuncion + + + America/Cuiaba + America/Halifax + America/La_Paz + + + America/Santiago + America/St_Johns + America/Araguaina + + + America/Argentina/Buenos_Aires + America/Cayenne + America/Godthab + + + America/Montevideo + Etc/GMT+2 + Atlantic/Azores + + + Atlantic/Cape_Verde + Africa/Casablanca + Etc/UTC + + + Atlantic/Reykjavik + Europe/London + CET + + + Europe/Bucharest + Africa/Johannesburg + Asia/Beirut + + + Africa/Cairo + Asia/Jerusalem + Europe/Minsk + + + Europe/Moscow + Africa/Nairobi + Asia/Karachi + + + Asia/Kolkata + Asia/Bangkok + Asia/Shanghai + + + 
Asia/Kuala_Lumpur + Australia/Perth + Asia/Taipei + + + Asia/Tokyo + Asia/Seoul + Australia/Adelaide + + + Australia/Darwin + Australia/Brisbane + Australia/Canberra + + + Pacific/Guam + Pacific/Auckland + + + + + - - + http://git-wip-us.apache.org/repos/asf/incubator-cloudstack/blob/30f2565d/docs/en-US/troubleshooting.xml ---------------------------------------------------------------------- diff --cc docs/en-US/troubleshooting.xml index 1fe0347,24ecab5..570d02e --- a/docs/en-US/troubleshooting.xml +++ b/docs/en-US/troubleshooting.xml @@@ -5,30 -5,31 +5,31 @@@ ]> + - Troubleshooting - - + Troubleshooting + + - - - + + + - - - + + +