Return-Path: X-Original-To: apmail-cloudstack-commits-archive@www.apache.org Delivered-To: apmail-cloudstack-commits-archive@www.apache.org Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id B77AA10CAF for ; Wed, 2 Oct 2013 19:13:37 +0000 (UTC) Received: (qmail 90958 invoked by uid 500); 2 Oct 2013 19:13:16 -0000 Delivered-To: apmail-cloudstack-commits-archive@cloudstack.apache.org Received: (qmail 90799 invoked by uid 500); 2 Oct 2013 19:13:13 -0000 Mailing-List: contact commits-help@cloudstack.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@cloudstack.apache.org Delivered-To: mailing list commits@cloudstack.apache.org Received: (qmail 90186 invoked by uid 99); 2 Oct 2013 19:13:08 -0000 Received: from tyr.zones.apache.org (HELO tyr.zones.apache.org) (140.211.11.114) by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 02 Oct 2013 19:13:08 +0000 Received: by tyr.zones.apache.org (Postfix, from userid 65534) id 356748AE93E; Wed, 2 Oct 2013 19:13:08 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: ke4qqq@apache.org To: commits@cloudstack.apache.org Date: Wed, 02 Oct 2013 19:13:47 -0000 Message-Id: In-Reply-To: <472895c4c9334c88a0a32d02e055bf71@git.apache.org> References: <472895c4c9334c88a0a32d02e055bf71@git.apache.org> X-Mailer: ASF-Git Admin Mailer Subject: [42/51] [partial] Removing docs from 4.2 as they are now in their own repo http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/basic-zone-network-traffic-types.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/basic-zone-network-traffic-types.xml b/docs/en-US/basic-zone-network-traffic-types.xml deleted file mode 100644 index 8503736..0000000 --- a/docs/en-US/basic-zone-network-traffic-types.xml +++ /dev/null @@ -1,35 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - -
 - Basic Zone Network Traffic Types - When basic networking is used, there can be only one physical network in the zone. That physical network carries the following traffic types: - - Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. Each pod in a basic zone is a broadcast domain, and therefore each pod has a different IP range for the guest network. The administrator must configure the IP range for each pod. - Management. When &PRODUCT;'s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by &PRODUCT; to perform various tasks in the cloud), and any other component that communicates directly with the &PRODUCT; Management Server. You must configure the IP range for the system VMs to use. - We strongly recommend the use of separate NICs for management traffic and guest traffic. - Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the &PRODUCT; UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address. - Storage. While labeled "storage," this traffic type refers specifically to secondary storage; it doesn't cover traffic for primary storage. It includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. &PRODUCT; uses a separate Network Interface Controller (NIC), called the storage NIC, for storage network traffic. Use of a storage NIC that always operates on a high-bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network. - - In a basic network, configuring the physical network is fairly straightforward. 
In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features, you must also configure a network to carry public traffic. &PRODUCT; takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone. -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/basic-zone-physical-network-configuration.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/basic-zone-physical-network-configuration.xml b/docs/en-US/basic-zone-physical-network-configuration.xml deleted file mode 100644 index 4b1d24f..0000000 --- a/docs/en-US/basic-zone-physical-network-configuration.xml +++ /dev/null @@ -1,28 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - -
- Basic Zone Physical Network Configuration - In a basic network, configuring the physical network is fairly straightforward. You only need to configure one guest network to carry traffic that is generated by guest VMs. When you first add a zone to &PRODUCT;, you set up the guest network through the Add Zone screens. - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/best-practices-for-vms.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/best-practices-for-vms.xml b/docs/en-US/best-practices-for-vms.xml deleted file mode 100644 index f2656a0..0000000 --- a/docs/en-US/best-practices-for-vms.xml +++ /dev/null @@ -1,67 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - -
- Best Practices for Virtual Machines - For VMs to work as expected and provide excellent service, follow these guidelines. -
- Monitor VMs for Max Capacity - The &PRODUCT; administrator should monitor the total number of VM instances in each - cluster, and disable allocation to the cluster if the total is approaching the maximum that - the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of - one or more hosts failing, which would increase the VM load on the other hosts as the VMs - are automatically redeployed. Consult the documentation for your chosen hypervisor to find - the maximum permitted number of VMs per host, then use &PRODUCT; global configuration - settings to set this as the default limit. Monitor the VM activity in each cluster at all - times. Keep the total number of VMs below a safe level that allows for the occasional host - failure. For example, if there are N hosts in the cluster, and you want to allow for one - host in the cluster to be down at any given time, the total number of VM instances you can - permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this - number of VMs, use the &PRODUCT; UI to disable allocation of more VMs to the - cluster. -
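The (N-1) * (per-host-limit) guideline above can be sketched as a quick calculation; the host count and per-host limit below are illustrative values, not CloudStack defaults:

```shell
# Safe VM capacity for a cluster, allowing for one host to be down.
# Values are illustrative; take the per-host limit from your
# hypervisor vendor's documentation.
HOSTS_IN_CLUSTER=8
PER_HOST_LIMIT=50
MAX_SAFE_VMS=$(( (HOSTS_IN_CLUSTER - 1) * PER_HOST_LIMIT ))
echo "Disable allocation once the cluster reaches ${MAX_SAFE_VMS} VMs"
```

With these example numbers, allocation should be disabled at 350 VMs even though the cluster could nominally hold 400.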
-
- Install Required Tools and Drivers - Be sure the following are installed on each VM: - - For XenServer, install PV drivers and Xen tools on each VM. - This will enable live migration and clean guest shutdown. - Xen tools are required in order for dynamic CPU and RAM scaling to work. - For vSphere, install VMware Tools on each VM. - This will enable console view to work properly. - VMware Tools are required in order for dynamic CPU and RAM scaling to work. - - To be sure that Xen tools or VMware Tools is installed, use one of the following techniques: - - Create each VM from a template that already has the tools installed; or, - When registering a new template, the administrator or user can indicate whether tools are - installed on the template. This can be done through the UI - or using the updateTemplate API; or, - If a user deploys a virtual machine with a template that does not have - Xen tools or VMware Tools, and later installs the tools on the VM, - then the user can inform &PRODUCT; using the updateVirtualMachine API. - After installing the tools and updating the virtual machine, stop - and start the VM. - -
-
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/best-practices-primary-storage.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/best-practices-primary-storage.xml b/docs/en-US/best-practices-primary-storage.xml deleted file mode 100644 index 279b95c..0000000 --- a/docs/en-US/best-practices-primary-storage.xml +++ /dev/null @@ -1,33 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Best Practices for Primary Storage - - The speed of primary storage will impact guest performance. If possible, choose smaller, higher RPM drives or SSDs for primary storage. - There are two ways CloudStack can leverage primary storage: - Static: This is CloudStack's traditional way of handling storage. In this model, a preallocated amount of storage (ex. a volume from a SAN) is given to CloudStack. CloudStack then permits many of its volumes to be created on this storage (can be root and/or data disks). If using this technique, ensure that nothing is stored on the storage. Adding the storage to &PRODUCT; will destroy any existing data. - Dynamic: This is a newer way for CloudStack to manage storage. In this model, a storage system (rather than a preallocated amount of storage) is given to CloudStack. CloudStack, working in concert with a storage plug-in, dynamically creates volumes on the storage system and each volume on the storage system maps to a single CloudStack volume. This is highly useful for features such as storage Quality of Service. Currently this feature is supported for data disks (Disk Offerings). - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/best-practices-secondary-storage.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/best-practices-secondary-storage.xml b/docs/en-US/best-practices-secondary-storage.xml deleted file mode 100644 index 3d535c3..0000000 --- a/docs/en-US/best-practices-secondary-storage.xml +++ /dev/null @@ -1,32 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Best Practices for Secondary Storage - - Each Zone can have one or more secondary storage servers. Multiple secondary storage servers provide increased scalability to the system. - Secondary storage has a high read:write ratio and is expected to consist of larger drives with lower IOPS than primary storage. - Ensure that nothing is stored on the server. Adding the server to &PRODUCT; will destroy any existing data. - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/best-practices-templates.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/best-practices-templates.xml b/docs/en-US/best-practices-templates.xml deleted file mode 100644 index 4e2992c..0000000 --- a/docs/en-US/best-practices-templates.xml +++ /dev/null @@ -1,28 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Best Practices for Templates - If you plan to use large templates (100 GB or larger), be sure you have a 10-gigabit network to support the large templates. A slower network can lead to timeouts and other errors when large templates are used. -
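A rough back-of-envelope calculation shows why network speed matters here; the figures below assume ideal line rate with no protocol overhead, so real copies will be slower:

```shell
# Approximate copy time for a 100 GB template at 1 Gbit/s vs 10 Gbit/s
# (ideal throughput, no overhead - illustrative only).
SIZE_GB=100
for GBPS in 1 10; do
  SECS=$(( SIZE_GB * 8 / GBPS ))   # GB * 8 = gigabits
  echo "${GBPS} Gbit/s: about ${SECS} seconds for ${SIZE_GB} GB"
done
```

Even under ideal conditions a 1-gigabit link needs over 13 minutes per 100 GB template, which is where timeouts tend to appear.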
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/best-practices-virtual-router.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/best-practices-virtual-router.xml b/docs/en-US/best-practices-virtual-router.xml deleted file mode 100644 index 060d868..0000000 --- a/docs/en-US/best-practices-virtual-router.xml +++ /dev/null @@ -1,34 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Best Practices for Virtual Routers - - WARNING: Restarting a virtual router from a hypervisor console deletes all the iptables rules. To work around this issue, stop the virtual router and start it from the &PRODUCT; UI. - WARNING: Do not use the destroyRouter API when only one router is available in the network, because restartNetwork API with the cleanup=false parameter can't recreate it later. If you want to destroy and recreate the single router available in the network, use the restartNetwork API with the cleanup=true parameter. - - - - -
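The recommended restartNetwork call can be sketched as an API request. The management server endpoint and network id below are placeholders, and a real request must also be signed with your API credentials:

```shell
# Build the restartNetwork request recommended above for destroying and
# recreating a network's single router. Endpoint and id are placeholders.
ENDPOINT="http://management-server:8080/client/api"
NETWORK_ID="<network-uuid>"
REQUEST="${ENDPOINT}?command=restartNetwork&id=${NETWORK_ID}&cleanup=true"
echo "$REQUEST"
```

Note that cleanup=true is what makes the router eligible for recreation; with cleanup=false the single router cannot be recreated, per the warning above.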
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/best-practices.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/best-practices.xml b/docs/en-US/best-practices.xml deleted file mode 100644 index 41d7cde..0000000 --- a/docs/en-US/best-practices.xml +++ /dev/null @@ -1,82 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - - - Best Practices - Deploying a cloud is challenging. There are many different technology choices to make, and &PRODUCT; is flexible enough in its configuration that there are many possible ways to combine and configure the chosen technology. This section contains suggestions and requirements about cloud deployments. - These should be treated as suggestions and not absolutes. However, we do encourage anyone planning to build a cloud outside of these guidelines to seek guidance and advice on the project mailing lists. -
- Process Best Practices - - - A staging system that models the production environment is strongly advised. It is critical if customizations have been applied to &PRODUCT;. - - - Allow adequate time for installation, a beta, and learning the system. Installs with basic networking can be done in hours. Installs with advanced networking usually take several days for the first attempt, with complicated installations taking longer. For a full production system, allow at least 4-8 weeks for a beta to work through all of the integration issues. You can get help from fellow users on the cloudstack-users mailing list. - - -
-
- Setup Best Practices - - - Each host should be configured to accept connections only from well-known entities such as the &PRODUCT; Management Server or your network monitoring software. - - - Use multiple clusters per pod if you need to achieve a certain switch density. - - - Primary storage mountpoints or LUNs should not exceed 6 TB in size. It is better to have multiple smaller primary storage elements per cluster than one large one. - - - When exporting shares on primary storage, avoid data loss by restricting the range of IP addresses that can access the storage. See "Linux NFS on Local Disks and DAS" or "Linux NFS on iSCSI". - - - NIC bonding is straightforward to implement and provides increased reliability. - - - 10G networks are generally recommended for storage access when larger servers that can support relatively more VMs are used. - - - Host capacity should generally be modeled in terms of RAM for the guests. Storage and CPU may be overprovisioned. RAM may not. RAM is usually the limiting factor in capacity designs. - - - (XenServer) Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see http://support.citrix.com/article/CTX126531. The article refers to XenServer 5.6, but the same information applies to XenServer 6.0. - - -
-
 - Maintenance Best Practices - - - Monitor host disk space. Many host failures occur because the host's root disk fills up from logs that were not rotated adequately. - - - Monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use &PRODUCT; global configuration settings to set this as the default limit. Monitor the VM activity in each cluster and keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the &PRODUCT; UI to disable allocation to the cluster. - - - The lack of up-to-date hotfixes can lead to data corruption and lost VMs. - Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. &PRODUCT; will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
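The disk-space check above can be automated with a small script like the following sketch; the 90% threshold is an assumption to adjust for your environment:

```shell
# Warn when the root filesystem crosses a usage threshold - a common
# cause of host failures when logs are not rotated. The threshold is
# an illustrative value.
THRESHOLD=90
USAGE=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "WARNING: root disk at ${USAGE}% - check log rotation"
else
  echo "root disk at ${USAGE}% - OK"
fi
```

Running a check like this from cron on each host, alongside proper logrotate configuration, addresses the failure mode described above.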
-
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/build-deb.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/build-deb.xml b/docs/en-US/build-deb.xml deleted file mode 100644 index dca31d2..0000000 --- a/docs/en-US/build-deb.xml +++ /dev/null @@ -1,123 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
 - Building DEB packages - - In addition to the bootstrap dependencies, you'll also need to install - several other dependencies. Note that we recommend using Maven 3, which - is not currently available in 12.04.1 LTS. So, you'll also need to add a - PPA repository that includes Maven 3. After running the command - add-apt-repository, you will be prompted to continue and - a GPG key will be added. - -$ sudo apt-get update -$ sudo apt-get install python-software-properties -$ sudo add-apt-repository ppa:natecarlson/maven3 -$ sudo apt-get update -$ sudo apt-get install ant debhelper openjdk-6-jdk tomcat6 libws-commons-util-java genisoimage python-mysqldb libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java maven3 - - - While we have defined, and you have presumably already installed, the - bootstrap prerequisites, there are a number of build-time prerequisites - that need to be resolved. &PRODUCT; uses Maven for dependency resolution. - You can resolve the build-time dependencies for CloudStack by running: - -$ mvn3 -P deps - - Now that we have resolved the dependencies, we can move on to building &PRODUCT; - and packaging it into DEBs by issuing the following command. - - -$ dpkg-buildpackage -uc -us - - - - This command will build 16 Debian packages. You should have all of the following: - -cloud-agent_4.0.0-incubating_amd64.deb -cloud-agent-deps_4.0.0-incubating_amd64.deb -cloud-agent-libs_4.0.0-incubating_amd64.deb -cloud-awsapi_4.0.0-incubating_amd64.deb -cloud-cli_4.0.0-incubating_amd64.deb -cloud-client_4.0.0-incubating_amd64.deb -cloud-client-ui_4.0.0-incubating_amd64.deb -cloud-core_4.0.0-incubating_amd64.deb -cloud-deps_4.0.0-incubating_amd64.deb -cloud-python_4.0.0-incubating_amd64.deb -cloud-scripts_4.0.0-incubating_amd64.deb -cloud-server_4.0.0-incubating_amd64.deb -cloud-setup_4.0.0-incubating_amd64.deb -cloud-system-iso_4.0.0-incubating_amd64.deb -cloud-usage_4.0.0-incubating_amd64.deb -cloud-utils_4.0.0-incubating_amd64.deb - -
 - Setting up an APT repo - - After you've created the packages, you'll want to copy them to a system where you can serve the packages over HTTP. You'll create a directory for the packages and then use dpkg-scanpackages to create Packages.gz, which holds information about the archive structure. Finally, you'll add the repository to your system(s) so you can install the packages using APT. - - The first step is to make sure that you have the dpkg-dev package installed. This should have been installed when you pulled in the debhelper application previously, but if you're generating Packages.gz on a different system, be sure that it's installed there as well. - -$ sudo apt-get install dpkg-dev - -The next step is to copy the DEBs to the directory where they can be served over HTTP. We'll use /var/www/cloudstack/repo in the examples, but change the directory to whatever works for you. - - -sudo mkdir -p /var/www/cloudstack/repo/binary -sudo cp *.deb /var/www/cloudstack/repo/binary -cd /var/www/cloudstack/repo/binary -sudo dpkg-scanpackages . /dev/null | tee Packages | gzip -9 > Packages.gz - - -Note: Override Files - You can safely ignore the warning about a missing override file. - - -Now you should have all of the DEB packages and Packages.gz in the binary directory and available over HTTP. (You may want to use wget or curl to test this before moving on to the next step.) - 
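The wget/curl check suggested above might look like the following sketch; server.url is the placeholder hostname used throughout this section, so the actual request line is left commented out:

```shell
# Smoke-test the package index URL before configuring any clients.
# server.url is a placeholder from the text; uncomment the curl line
# against your real web server.
REPO_URL="http://server.url/cloudstack/repo/binary"
PKG_INDEX="${REPO_URL}/Packages.gz"
echo "would check: ${PKG_INDEX}"
# curl -sSfI "$PKG_INDEX" >/dev/null && echo "repo reachable"
```

If the index fetch fails, fix the web server configuration before adding the repository to any client machines.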
-
 - Configuring your machines to use the APT repository - - Now that we have created the repository, you need to configure your machine - to make use of the APT repository. You can do this by adding a repository file - under /etc/apt/sources.list.d. Use your preferred editor to - create /etc/apt/sources.list.d/cloudstack.list with this - line: - - deb http://server.url/cloudstack/repo/binary ./ - - Now that you have the repository info in place, you'll want to run another - update so that APT knows where to find the &PRODUCT; packages. - -$ sudo apt-get update - -You can now move on to the instructions under Install on Ubuntu. - 
-
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/build-nonoss.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/build-nonoss.xml b/docs/en-US/build-nonoss.xml deleted file mode 100644 index fceca60..0000000 --- a/docs/en-US/build-nonoss.xml +++ /dev/null @@ -1,49 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Building Non-OSS - If you need support for the VMware, NetApp, F5, NetScaler, SRX, or any other non-Open Source Software (nonoss) plugins, you'll need to download a few components on your own and follow a slightly different procedure to build from source. - Why Non-OSS? - Some of the plugins supported by &PRODUCT; cannot be distributed with &PRODUCT; for licensing reasons. In some cases, some of the required libraries/JARs are under a proprietary license. In other cases, the required libraries may be under a license that's not compatible with Apache's licensing guidelines for third-party products. - - - - To build the Non-OSS plugins, you'll need to have the requisite JARs installed under the deps directory. - Because these modules require dependencies that can't be distributed with &PRODUCT; you'll need to download them yourself. Links to the most recent dependencies are listed on the How to build on master branch page on the wiki. - - You may also need to download vhd-util, which was removed due to licensing issues. You'll copy vhd-util to the scripts/vm/hypervisor/xenserver/ directory. - - - Once you have all the dependencies copied over, you'll be able to build &PRODUCT; with the nonoss option: - - $ mvn clean - $ mvn install -Dnonoss - - - - Once you've built &PRODUCT; with the nonoss profile, you can package it using the or instructions. - - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/build-rpm.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/build-rpm.xml b/docs/en-US/build-rpm.xml deleted file mode 100644 index 100a06f..0000000 --- a/docs/en-US/build-rpm.xml +++ /dev/null @@ -1,87 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
 - Building RPMs from Source - As mentioned previously in , you will need to install several prerequisites before you can build packages for &PRODUCT;. Here we'll assume you're working with a 64-bit build of CentOS or Red Hat Enterprise Linux. - # yum groupinstall "Development Tools" - # yum install java-1.6.0-openjdk-devel.x86_64 genisoimage mysql mysql-server ws-commons-util MySQL-python tomcat6 createrepo - Next, you'll need to install build-time dependencies for CloudStack with - Maven. We're using Maven 3, so you'll want to - grab a Maven 3 tarball - and uncompress it in your home directory (or whatever location you prefer): - $ tar zxvf apache-maven-3.0.4-bin.tar.gz - $ export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH - Maven also needs to know where Java is, and expects the JAVA_HOME environment - variable to be set: - $ export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/ - Verify that Maven is installed correctly: - $ mvn --version - You probably want to ensure that your environment variables will survive a logout/reboot. - Be sure to update ~/.bashrc with the PATH and JAVA_HOME variables. - - Building RPMs for &PRODUCT; is fairly simple. Assuming you already have the source downloaded and have uncompressed the tarball into a local directory, you're going to be able to generate packages in just a few minutes. - Packaging has Changed - If you've created packages for &PRODUCT; previously, you should be aware that the process has changed considerably since the project has moved to using Apache Maven. Please be sure to follow the steps in this section closely. -
 - Generating RPMs - Now that we have the prerequisites and source, you will cd to the packaging/centos63/ directory. - $ cd packaging/centos63 - Generating RPMs is done using the package.sh script: - $ ./package.sh - - That will run for a bit and then place the finished packages in dist/rpmbuild/RPMS/x86_64/. - You should see seven RPMs in that directory: cloudstack-agent-4.1.0-SNAPSHOT.el6.x86_64.rpm, cloudstack-awsapi-4.1.0-SNAPSHOT.el6.x86_64.rpm, cloudstack-cli-4.1.0-SNAPSHOT.el6.x86_64.rpm, cloudstack-common-4.1.0-SNAPSHOT.el6.x86_64.rpm, cloudstack-docs-4.1.0-SNAPSHOT.el6.x86_64.rpm, cloudstack-management-4.1.0-SNAPSHOT.el6.x86_64.rpm, and cloudstack-usage-4.1.0-SNAPSHOT.el6.x86_64.rpm. 
 - Creating a yum repo - - While RPM is a useful packaging format, it's most easily consumed from yum repositories over a network. The next step is to create a yum repo with the finished packages: - $ mkdir -p ~/tmp/repo - $ cp dist/rpmbuild/RPMS/x86_64/*rpm ~/tmp/repo/ - $ createrepo ~/tmp/repo - - - The files and directories within ~/tmp/repo can now be uploaded to a web server and serve as a yum repository. - 
-
- Configuring your systems to use your new yum repository - - Now that your yum repository is populated with RPMs and metadata - we need to configure the machines that need to install &PRODUCT;. - Create a file named /etc/yum.repos.d/cloudstack.repo with this information: - - [apache-cloudstack] - name=Apache CloudStack - baseurl=http://webserver.tld/path/to/repo - enabled=1 - gpgcheck=0 - - - Completing this step will allow you to easily install &PRODUCT; on a number of machines across the network. - -
-
-
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-devcloud.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-devcloud.xml b/docs/en-US/building-devcloud.xml deleted file mode 100644 index f3c4d19..0000000 --- a/docs/en-US/building-devcloud.xml +++ /dev/null @@ -1,32 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Building DevCloud - The DevCloud appliance can be downloaded from the wiki at . It can also be built from scratch. Code is being developed to provide this alternative build. It is based on veewee, Vagrant and Puppet. - The goal is to automate the DevCloud build and make this automation capability available to all within the source release of &PRODUCT; - This is under heavy development. The code is located in the source tree under tools/devcloud - A preliminary wiki page describes the build at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Building+DevCloud - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-documentation.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-documentation.xml b/docs/en-US/building-documentation.xml deleted file mode 100644 index 8ee63b0..0000000 --- a/docs/en-US/building-documentation.xml +++ /dev/null @@ -1,40 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
 - Building &PRODUCT; Documentation - To build a specific guide, go to the source tree of the documentation in /docs and identify the guide you want to build. - Currently there are four guides plus the release notes, all defined in publican configuration files: - - publican-adminguide.cfg - publican-devguide.cfg - publican-installation.cfg - publican-plugin-niciranvp.cfg - publican-release-notes.cfg - - To build the Developer Guide, for example, do the following: - publican build --config=publican-devguide.cfg --formats=pdf --langs=en-US - A PDF file will be created in tmp/en-US/pdf. You may choose to build the guide in a different format, like HTML; in that case, just replace the format value. - 
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-marvin.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-marvin.xml b/docs/en-US/building-marvin.xml deleted file mode 100644 index e33c4cb..0000000 --- a/docs/en-US/building-marvin.xml +++ /dev/null @@ -1,46 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
 - Building and Installing Marvin - Marvin is built with Maven and is dependent on APIdoc. To build it, do the following in the root tree of &PRODUCT;: - mvn -P developer -pl :cloud-apidoc - mvn -P developer -pl :cloud-marvin - If successful, the build will have created the cloudstackAPI Python package under tools/marvin/marvin/cloudstackAPI as well as a gzipped Marvin package under tools/marvin/dist. To install the Python Marvin module, do the following in tools/marvin: - sudo python ./setup.py install - The dependencies will be downloaded, the Python module installed, and you should then be able to use Marvin in Python. Check that you can import the module before starting to use it. - $ python -Python 2.7.3 (default, Nov 17 2012, 19:54:34) -[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin -Type "help", "copyright", "credits" or "license" for more information. ->>> import marvin ->>> from marvin.cloudstackAPI import * ->>> - - You could also install it using pip or easy_install using the local distribution package in tools/marvin/dist: - pip install tools/marvin/dist/Marvin-0.1.0.tar.gz - Or: - easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz - 
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-prerequisites.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-prerequisites.xml b/docs/en-US/building-prerequisites.xml deleted file mode 100644 index d97ca40..0000000 --- a/docs/en-US/building-prerequisites.xml +++ /dev/null @@ -1,66 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - - -
 - Build Procedure Prerequisites - In this section we will assume that you are using the Ubuntu Linux distribution with the Advanced Packaging Tool (APT). If you are using a different distribution or OS and a different packaging tool, adapt the following instructions to your environment. To build &PRODUCT; you will need: - - - git, http://git-scm.com - sudo apt-get install git-core - - - maven, http://maven.apache.org - sudo apt-get install maven - Make sure that you installed Maven 3: - $ mvn --version -Apache Maven 3.0.4 -Maven home: /usr/share/maven -Java version: 1.6.0_24, vendor: Sun Microsystems Inc. -Java home: /usr/lib/jvm/java-6-openjdk-amd64/jre -Default locale: en_US, platform encoding: UTF-8 -OS name: "linux", version: "3.2.0-33-generic", arch: "amd64", family: "unix" - - - java - set the JAVA_HOME environment variable - $ export JAVA_HOME=/usr/lib/jvm/java-6-openjdk - - - - In addition, to deploy and run &PRODUCT; in a development environment you will need: - - - MySQL - sudo apt-get install mysql-server-5.5 - Start the mysqld service and create a cloud user with cloud as the password. - - - Tomcat 6 - sudo apt-get install tomcat6 - - - - 
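The "create a cloud user with cloud as a password" step above can be sketched as follows. The GRANT scope is an assumption about what a development deployment needs, so the statement is echoed for review rather than executed directly:

```shell
# SQL for the "cloud" MySQL user described above. The GRANT scope is an
# assumption for a development setup; review before running.
SQL="CREATE USER 'cloud'@'localhost' IDENTIFIED BY 'cloud'; GRANT ALL PRIVILEGES ON *.* TO 'cloud'@'localhost';"
echo "$SQL"
# Then, with mysqld started:
# sudo service mysql start
# mysql -u root -p -e "$SQL"
```

For anything beyond a throwaway development environment, narrow the grants and choose a stronger password.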
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-translation.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-translation.xml b/docs/en-US/building-translation.xml deleted file mode 100644 index dd66365..0000000 --- a/docs/en-US/building-translation.xml +++ /dev/null @@ -1,75 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Translating &PRODUCT; Documentation - Now that you know how to build the documentation with Publican, let's move on to building it in different languages. Publican helps us - build the documentation in various languages by using Portable Object Template (POT) files and Portable Object (PO) files for each language. - - The POT files are generated by parsing all the DocBook files in the language of origin, en-US for us, and creating a long list of strings - for each file that needs to be translated. The translation can be done by hand directly in the PO files of each target language or via the - Transifex service. - - - Transifex is a free service to help translate documents and organize distributed teams - of translators. Anyone interested in helping with the translation should get an account on Transifex. - - - Three &PRODUCT; projects exist on Transifex. It is recommended to tour those projects to become familiar with Transifex: - - https://www.transifex.com/projects/p/ACS_DOCS/ - https://www.transifex.com/projects/p/ACS_Runbook/ - https://www.transifex.com/projects/p/CloudStackUI/ - - - - - The pot directory should already exist in the source tree. If you want to build an up-to-date translation, you might have to update it to include any POT file that was not previously generated. - To register new resources on Transifex, you will need to be an admin of the Transifex &PRODUCT; site. Send an email to the developer list if you want access. - - First we need to generate the .pot files for all the DocBook XML files needed for a particular guide. This is well explained on the Publican website in a section on - how to prepare a document for translation. - The basic command to execute to build the POT files for the developer guide is: - publican update_pot --config=publican-devguide.cfg - This will create a pot directory with POT files in it, one for each of the corresponding XML files needed to build the guide. 
Once generated, all POT files need to be configured for translation using Transifex. This is best done with the Transifex client, which you can install with the following command (for RHEL and its derivatives): - yum install transifex-client - The Transifex client is also available via PyPI, and you can install it like this: - easy_install transifex-client - Once you have installed the Transifex client you can run the settx.sh script in the docs directory. This will create the .tx/config file used by Transifex to push and pull all translation strings. - All the resource files need to be uploaded to Transifex. This is done with the Transifex client like so: - tx push -s - Once the translators have completed translation of the documentation, the translated strings can be pulled from Transifex like so: - tx pull -a - If you wish to push specific resource files or pull translation strings for specific languages, you can do so with the Transifex client. Complete documentation for - the client is available on the client website. - When you pull new translation strings, a directory will be created corresponding to the language of the translation. This directory will contain PO files that will be used by Publican to create the documentation in that specific language. For example, assuming that you pull the French translation, whose language code is fr-FR, you will build the documentation with Publican: - publican build --config=publican-devguide.cfg --formats=html --langs=fr-FR - - - Some languages like Chinese or Japanese will not render well in PDF format, so HTML should be used. - - - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-with-maven-deploy.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-with-maven-deploy.xml b/docs/en-US/building-with-maven-deploy.xml deleted file mode 100644 index e4b9801..0000000 --- a/docs/en-US/building-with-maven-deploy.xml +++ /dev/null @@ -1,39 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Deployment and Testing Steps - Deploying the &PRODUCT; code that you compiled is a two-step process: - - If you have not configured the database or modified its properties do: - mvn -P developer -pl developer -Ddeploydb - - Then you need to run the &PRODUCT; management server. To attach a debugger to it, do: - export MAVEN_OPTS="-Xmx1024m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n" - mvn -pl :cloud-client-ui jetty:run - - - When dealing with the database, remember that you may wipe it entirely and lose any data center configuration that you may have set previously. -
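The MAVEN_OPTS line above packs several JVM flags into one string; the sketch below just sets them and prints them one per line so each can be inspected. The 1024m heap size is a typical value, not a requirement, and the comments summarizing each flag are explanatory, not from the original text.

```shell
# Debug options for the management server JVM (values are typical, not mandated):
#   -Xmx1024m   heap limit; note the JVM requires a unit suffix such as 'm'
#   -Xrunjdwp   listen for a debugger on TCP port 8787 without suspending startup
export MAVEN_OPTS="-Xmx1024m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
echo "$MAVEN_OPTS" | tr ' ' '\n'
```

With suspend=n the server starts immediately and you attach a debugger (for example from an IDE) to port 8787 whenever you need it; suspend=y would instead block startup until a debugger connects.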
- http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-with-maven-steps.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-with-maven-steps.xml b/docs/en-US/building-with-maven-steps.xml deleted file mode 100644 index 1c15bfa..0000000 --- a/docs/en-US/building-with-maven-steps.xml +++ /dev/null @@ -1,33 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Building Steps - &PRODUCT; uses git for source version control; first make sure you have the source code by cloning the repository: - git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git - Several Project Object Models (POMs) are defined to deal with the various build targets of &PRODUCT;. Certain features require some packages that are not compatible with the Apache license and therefore need to be downloaded on your own. Check the wiki for additional information: https://cwiki.apache.org/CLOUDSTACK/building-with-maven.html. In order to build all the open source targets of &PRODUCT;, do: - mvn clean install - The resulting jar files will be in the target directory of the subdirectory of the compiled module. -
- http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/building-with-maven.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/building-with-maven.xml b/docs/en-US/building-with-maven.xml deleted file mode 100644 index 5363b1d..0000000 --- a/docs/en-US/building-with-maven.xml +++ /dev/null @@ -1,32 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - - - Using Maven to Build &PRODUCT; - - - - - - http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/castor-with-cs.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/castor-with-cs.xml b/docs/en-US/castor-with-cs.xml deleted file mode 100644 index 7bf676b..0000000 --- a/docs/en-US/castor-with-cs.xml +++ /dev/null @@ -1,86 +0,0 @@ - - -%BOOK_ENTITIES; -]> - -
- Using the CAStor Back-end Storage with &PRODUCT; - This section describes how to use a CAStor cluster as the back-end storage system for a - &PRODUCT; S3 front-end. The CAStor back-end storage for &PRODUCT; extends the existing storage - classes and allows the storage configuration attribute to point to a CAStor cluster. - This feature makes use of the &PRODUCT; server's local disk to spool files before writing - them to CAStor when handling the PUT operations. However, a file must be successfully written - into the CAStor cluster prior to the return of a success code to the S3 client to ensure that - the transaction outcome is correctly reported. - - The S3 multipart file upload is not supported in this release. A proper error message is - displayed if a multipart upload is attempted. - - To configure CAStor: - - - Install &PRODUCT; by following the instructions given in the INSTALL.txt file. - - You can use the S3 storage system in &PRODUCT; without setting up and installing the - compute components. - - - - Enable the S3 API by setting "enable.s3.api = true" in the Global parameter section in - the UI and register a user. - For more information, see S3 API in - &PRODUCT;. - - - Edit the cloud-bridge.properties file and modify the "storage.root" parameter. - - - Set "storage.root" to the key word "castor". - - - Specify a CAStor tenant domain to which content is written. If the domain is not - specified, the CAStor default domain, specified by the "cluster" parameter in CAStor's - node.cfg file, will be used. - - - Specify a list of node IP addresses, or set "zeroconf" and the cluster - name. When using a static IP list with a large cluster, it is not necessary to include - every node; only a few are required to initialize the client software. 
- For example: - storage.root=castor domain=cloudstack 10.1.1.51 10.1.1.52 10.1.1.53 - In this example, the configuration file directs &PRODUCT; to write the S3 files to - CAStor instead of to a file system, where the CAStor domain name is cloudstack, and the - CAStor node IP addresses are those listed. - - - (Optional) The last value is a port number on which to communicate with the CAStor - cluster. If not specified, the default is 80. - #Static IP list with optional port -storage.root=castor domain=cloudstack 10.1.1.51 10.1.1.52 10.1.1.53 80 -#Zeroconf locator for cluster named "castor.example.com" -storage.root=castor domain=cloudstack zeroconf=castor.example.com - - - - - Create the tenant domain within the CAStor storage cluster. If you omit this step before - attempting to store content, you will get HTTP 412 errors in the awsapi.log. - - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/change-console-proxy-ssl-certificate-domain.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/change-console-proxy-ssl-certificate-domain.xml b/docs/en-US/change-console-proxy-ssl-certificate-domain.xml deleted file mode 100644 index 3fd0501..0000000 --- a/docs/en-US/change-console-proxy-ssl-certificate-domain.xml +++ /dev/null @@ -1,49 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Changing the Console Proxy SSL Certificate and Domain - If the administrator prefers, it is possible for the URL of the customer's console session to show a domain other than realhostip.com. The administrator can customize the displayed domain by selecting a different domain and uploading a new SSL certificate and private key. The domain must run a DNS service that is capable of resolving queries for addresses of the form aaa-bbb-ccc-ddd.your.domain to an IPv4 IP address in the form aaa.bbb.ccc.ddd, for example, 202.8.44.1. To change the console proxy domain, SSL certificate, and private key: - - Set up dynamic name resolution or populate all possible DNS names in your public IP range into your existing DNS server with the format aaa-bbb-ccc-ddd.company.com -> aaa.bbb.ccc.ddd. - Generate the private key and certificate signing request (CSR). When you are using openssl to generate private/public key pairs and CSRs, for the private key that you are going to paste into the &PRODUCT; UI, be sure to convert it into PKCS#8 format. - - Generate a new 2048-bit private key: openssl genrsa -des3 -out yourprivate.key 2048 - Generate a new certificate signing request (CSR): openssl req -new -key yourprivate.key -out yourcertificate.csr - Head to the website of your favorite trusted Certificate Authority, purchase an SSL certificate, and submit the CSR. You should receive a valid certificate in return. - Convert your private key into PKCS#8 encrypted format: openssl pkcs8 -topk8 -in yourprivate.key -out yourprivate.pkcs8.encrypted.key - Convert your PKCS#8 encrypted private key into the PKCS#8 format that is compliant with &PRODUCT;: openssl pkcs8 -in yourprivate.pkcs8.encrypted.key -out yourprivate.pkcs8.key - - - In the Update SSL Certificate screen of the &PRODUCT; UI, paste the following: - - The Certificate you generated in the previous steps. - The Private key you generated in the previous steps. 
- The desired new domain name; for example, company.com - - - This stops all currently running console proxy VMs, then restarts them with the new certificate and key. Users might notice a brief interruption in console availability. - - The Management Server will generate URLs of the form "aaa-bbb-ccc-ddd.company.com" after this change is made. New console requests will be served with the new DNS domain name, certificate, and key. -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/change-database-config.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/change-database-config.xml b/docs/en-US/change-database-config.xml deleted file mode 100644 index 567b9e4..0000000 --- a/docs/en-US/change-database-config.xml +++ /dev/null @@ -1,28 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Changing the Database Configuration - The &PRODUCT; Management Server stores database configuration information (e.g., hostname, port, credentials) in the file /etc/cloudstack/management/db.properties. To effect a change, edit this file on each Management Server, then restart the Management Server. -
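As a sketch of such a change, the snippet below edits a sample copy of the file; the property name db.cloud.host is an assumption based on the db.cloud.* naming convention visible elsewhere in the configuration (db.cloud.password), and db1.example.com is a hypothetical hostname. On a real server you would edit /etc/cloudstack/management/db.properties itself and then restart the Management Server.

```shell
# Work on a sample copy; the real file is /etc/cloudstack/management/db.properties
cat > /tmp/db.properties <<'EOF'
db.cloud.host=localhost
db.cloud.port=3306
EOF
# Point the Management Server at a different database host (hypothetical name)
sed -i 's/^db\.cloud\.host=.*/db.cloud.host=db1.example.com/' /tmp/db.properties
grep '^db.cloud.host=' /tmp/db.properties   # prints db.cloud.host=db1.example.com
# On the real server, finish with: service cloudstack-management restart
```

Remember the text's point that the edit must be repeated on each Management Server before restarting it.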
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/change-database-password.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/change-database-password.xml b/docs/en-US/change-database-password.xml deleted file mode 100644 index 863984e..0000000 --- a/docs/en-US/change-database-password.xml +++ /dev/null @@ -1,76 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Changing the Database Password - You may need to change the password for the MySQL account used by CloudStack. If so, you'll need to change the password in MySQL, and then add the encrypted password to /etc/cloudstack/management/db.properties. - - - Before changing the password, you'll need to stop CloudStack's management server and the usage engine if you've deployed that component. - -# service cloudstack-management stop -# service cloudstack-usage stop - - - - Next, you'll update the password for the CloudStack user on the MySQL server. - -# mysql -u root -p - - At the MySQL shell, you'll change the password and flush privileges: - -update mysql.user set password=PASSWORD("newpassword123") where User='cloud'; -flush privileges; -quit; - - - - The next step is to encrypt the password and copy the encrypted password to CloudStack's database configuration (/etc/cloudstack/management/db.properties). - - # java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar \ -org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \ -input="newpassword123" password="`cat /etc/cloudstack/management/key`" \ -verbose=false - - -File encryption type - Note that this is for the file encryption type. If you're using the web encryption type then you'll use password="management_server_secret_key" - - - - Now, you'll update /etc/cloudstack/management/db.properties with the new ciphertext. Open /etc/cloudstack/management/db.properties in a text editor, and update these parameters: - -db.cloud.password=ENC(encrypted_password_from_above) -db.usage.password=ENC(encrypted_password_from_above) - - - - After copying the new password over, you can now start CloudStack (and the usage engine, if necessary). - - # service cloudstack-management start - # service cloudstack-usage start - - - -
http://git-wip-us.apache.org/repos/asf/cloudstack/blob/78517ee5/docs/en-US/change-host-password.xml ---------------------------------------------------------------------- diff --git a/docs/en-US/change-host-password.xml b/docs/en-US/change-host-password.xml deleted file mode 100644 index 7221fe6..0000000 --- a/docs/en-US/change-host-password.xml +++ /dev/null @@ -1,39 +0,0 @@ - - -%BOOK_ENTITIES; -]> - - - -
- Changing Host Password - The password for a XenServer Node, KVM Node, or vSphere Node may be changed in the database. Note that all Nodes in a Cluster must have the same password. - To change a Node's password: - - Identify all hosts in the cluster. - Change the password on all hosts in the cluster. Now the password for the hosts and the password known to &PRODUCT; will not match. Operations on the cluster will fail until the two passwords match. - - Get the list of host IDs for the hosts in the cluster where you are changing the password. You will need to access the database to determine these host IDs. For each hostname "h" (or vSphere cluster) that you are changing the password for, execute: - mysql> select id from cloud.host where name like '%h%'; - This should return a single ID. Record the set of such IDs for these hosts. - Update the passwords for the hosts in the database. In this example, we change the passwords for hosts with IDs 5, 10, and 12 to "password". - mysql> update cloud.host set password='password' where id=5 or id=10 or id=12; - -
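The UPDATE statement above can be generated for any set of recorded host IDs. This small sketch composes the WHERE clause from a list of IDs (the IDs are the example values from the text) so the result can be pasted into the mysql shell:

```shell
# Example host IDs from the text; substitute the IDs you recorded earlier
IDS="5 10 12"
WHERE=$(printf 'id=%s or ' $IDS)
WHERE=${WHERE% or }   # trim the trailing " or "
echo "update cloud.host set password='password' where $WHERE;"
# prints: update cloud.host set password='password' where id=5 or id=10 or id=12;
```

This only prints the statement; it deliberately does not touch the database, so you can inspect the SQL before running it against cloud.host.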