cloudstack-commits mailing list archives

From pdion...@apache.org
Subject [1/3] cloudstack-docs-install git commit: Storage section of install docs expanded and improved to give much more background information, images added
Date Sat, 11 Apr 2015 12:42:59 GMT
Repository: cloudstack-docs-install
Updated Branches:
  refs/heads/master bd34ce9dc -> c473994c9


Storage section of install docs expanded and improved to give much more background information,
images added

Signed-off-by: Pierre-Luc Dion <pdion891@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/repo
Commit: http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/commit/f0f11681
Tree: http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/tree/f0f11681
Diff: http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/diff/f0f11681

Branch: refs/heads/master
Commit: f0f11681e8bb4f3445ae8536330b0b26d8074b93
Parents: bd34ce9
Author: Paul Angus <paul.angus@shapeblue.com>
Authored: Thu Apr 9 15:57:38 2015 +0100
Committer: Pierre-Luc Dion <pdion891@apache.org>
Committed: Sat Apr 11 08:13:29 2015 -0400

----------------------------------------------------------------------
 .../images/hypervisorcomms-secstorage.png       | Bin 0 -> 150919 bytes
 source/_static/images/hypervisorcomms.png       | Bin 0 -> 78498 bytes
 source/_static/images/subnetting storage.png    | Bin 0 -> 120501 bytes
 source/storage_setup.rst                        | 195 ++++++++++++++-----
 4 files changed, 151 insertions(+), 44 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/blob/f0f11681/source/_static/images/hypervisorcomms-secstorage.png
----------------------------------------------------------------------
diff --git a/source/_static/images/hypervisorcomms-secstorage.png b/source/_static/images/hypervisorcomms-secstorage.png
new file mode 100644
index 0000000..c5c1f8d
Binary files /dev/null and b/source/_static/images/hypervisorcomms-secstorage.png differ

http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/blob/f0f11681/source/_static/images/hypervisorcomms.png
----------------------------------------------------------------------
diff --git a/source/_static/images/hypervisorcomms.png b/source/_static/images/hypervisorcomms.png
new file mode 100644
index 0000000..5d4b0ab
Binary files /dev/null and b/source/_static/images/hypervisorcomms.png differ

http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/blob/f0f11681/source/_static/images/subnetting storage.png
----------------------------------------------------------------------
diff --git a/source/_static/images/subnetting storage.png b/source/_static/images/subnetting storage.png
new file mode 100644
index 0000000..a6f47d6
Binary files /dev/null and b/source/_static/images/subnetting storage.png differ

http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/blob/f0f11681/source/storage_setup.rst
----------------------------------------------------------------------
diff --git a/source/storage_setup.rst b/source/storage_setup.rst
index 63381a5..9df127a 100644
--- a/source/storage_setup.rst
+++ b/source/storage_setup.rst
@@ -17,12 +17,13 @@
 Storage Setup
 =============
 
-CloudStack is designed to work with a wide variety of commodity and
-enterprise-grade storage. Local disk may be used as well, if supported
-by the selected hypervisor. Storage type support for guest virtual disks
-differs based on hypervisor selection.
 
-.. cssclass:: table-striped table-bordered table-hover
+Primary Storage
+---------------
+
+CloudStack is designed to work with a wide variety of commodity and enterprise-grade storage systems.
+CloudStack can also leverage the local disks within the hypervisor hosts if supported by the selected
+hypervisor. Storage type support for guest virtual disks differs based on hypervisor selection.
 
 =============  ==============================  ==================  ===================================
 Storage Type   XenServer                       vSphere             KVM
@@ -33,35 +34,148 @@ Fiber Channel  Supported via Pre-existing SR   Supported           Supported via
 Local Disk     Supported                       Supported           Supported
 =============  ==============================  ==================  ===================================
 
-The use of the Cluster Logical Volume Manager (CLVM) for KVM is not
-officially supported with CloudStack.
+The use of the Cluster Logical Volume Manager (CLVM) for KVM is not officially supported with
+CloudStack.
+
+Secondary Storage
+-----------------
+
+CloudStack is designed to work with any scalable secondary storage system. The only requirement is
+that the secondary storage system supports the NFS protocol. For large, multi-zone deployments,
+S3-compatible storage is also supported for secondary storage. This allows for secondary storage which
+can span an entire region; however, an NFS staging area must be maintained in each zone, as most
+hypervisors are not capable of directly mounting S3-type storage.
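+
+As an illustrative check only (the server name and export path below are placeholders for your own
+environment), you can confirm from a host in the zone that the secondary storage NFS export is visible
+before adding it to CloudStack:
+
+.. sourcecode:: bash
+
+   # showmount lists the exports published by the NFS server and the networks
+   # allowed to mount them; replace the host name with your own NFS server.
+   showmount -e nfs-server.example.com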
 
 
 Small-Scale Setup
------------------
+=================
 
-In a small-scale setup, a single NFS server can function as both primary
-and secondary storage. The NFS server just needs to export two separate
-shares, one for primary storage and the other for secondary storage.
+In a small-scale setup, a single NFS server can function as both primary and secondary storage. The NFS
+server must export two separate shares, one for primary storage and the other for secondary storage. This
+could be a VM or physical host running an NFS service on a Linux OS, or a virtual software appliance. Disk
+and network performance are still important in a small-scale setup to get a good experience when deploying,
+running or snapshotting VMs.
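+
+A minimal sketch of such a server's /etc/exports, assuming the two shares live under /export and all
+hosts sit in a single 192.168.1.0/24 network (adjust the paths and CIDR to your environment):
+
+.. sourcecode:: bash
+
+   # /etc/exports - one share for primary storage and one for secondary storage
+   /export/primary    192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)
+   /export/secondary  192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)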
 
 
-Secondary Storage
------------------
+Large-Scale Setup
+=================
+
+In large-scale environments, primary and secondary storage typically consist of independent physical storage arrays.
 
-CloudStack is designed to work with any scalable secondary storage
-system. The only requirement is the secondary storage system supports
-the NFS protocol.
+Primary storage is likely to have to support mostly random read/write I/O once a template has been
+deployed.  Secondary storage is only going to experience sustained sequential reads or writes.
 
-.. note::
-   The storage server should be a machine with a large number of disks. The 
-   disks should ideally be managed by a hardware RAID controller. Modern 
-   hardware RAID controllers support hot plug functionality independent of the 
-   operating system so you can replace faulty disks without impacting the 
-   running operating system.
+In clouds which will experience a large number of users taking snapshots or deploying VMs at the
+same time, secondary storage performance will be important to maintain a good user experience.
+
+It is important to start the design of your storage with a rough profile of the workloads which it will
+be required to support. Care should be taken to consider the IOPS demands of your guest VMs as much as the
+volume of data to be stored and the bandwidth (MB/s) available at the storage interfaces.
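+
+As a simple illustration (the figures are examples only), 500 guest VMs averaging 50 IOPS each require
+roughly 25,000 IOPS from primary storage before any RAID write penalty is taken into account, which is
+well beyond what a handful of spindles can deliver.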
+
+Storage Architecture
+====================
+
+There are many different storage types available which are generally suitable for CloudStack environments.
+Specific use cases should be considered when deciding the best one for your environment, and financial
+constraints often make the 'perfect' storage architecture economically unrealistic.
+
+Broadly, the architectures of the available primary storage types can be split into three types:
+
+Local Storage
+-------------
+
+Local storage works best for pure 'cloud-era' workloads which rarely need to be migrated between storage
+pools and where HA of individual VMs is not required. As SSDs become more mainstream/affordable, local
+storage based VMs can now be served with the levels of IOPS which previously could only be generated by
+large arrays with tens of spindles. Local storage is highly scalable, because as you add hosts you also
+add storage in the same proportion. Local storage is relatively inefficient as it cannot take advantage
+of linked clones or any deduplication.
+
+
+'Traditional' node-based Shared Storage
+---------------------------------------
+
+Traditional node-based storage arrays consist of a controller (or controller pair) attached to a
+number of disks in shelves.
+Ideally a cloud architecture would have one of these physical arrays per CloudStack pod to limit the
+'blast-radius' of a failure to a single pod.  This is often not economically viable; however, one should
+still look to limit the proportion of any zone affected by the failure of any single array where
+possible.
+The use of shared storage enables workloads to be immediately restarted on an alternate host should a
+host fail. These shared storage arrays often have the ability to create 'tiers' of storage utilising,
+say, large SATA disks, 15k SAS disks and SSDs. These differently performing tiers can then be presented as
+different offerings to users.
+The sizing of an array should take into account the IOPS required by the workload as well as the volume
+of data to be stored.  One should also consider the number of VMs which a storage array will be expected
+to support, and the maximum network bandwidth possible through the controllers.
 
 
-Example Configurations
-----------------------
+Clustered Shared Storage
+------------------------
+
+Clustered shared storage arrays are the new generation of storage which do not have a single set of
+interfaces where data enters and exits the array.  Instead, traffic is distributed between all of the active
+nodes, giving greatly improved scalability and performance.  Some shared storage arrays enable all data
+to continue to be accessible even in the event of the loss of an entire node.
+
+The network topology should be carefully considered when using clustered shared storage to avoid creating
+bottlenecks in the network fabric.
+
+
+Network Configuration For Storage
+=================================
+
+Care should be taken when designing your cloud to consider not only the performance of your disk
+arrays but also the bandwidth available to move that traffic between the switch fabric and
+the array interfaces.
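+
+For example (illustrative numbers only), a storage array presented over a pair of 10 Gbps interfaces can
+move at most roughly 2.5 GB/s in aggregate, so a workload profile demanding more sequential throughput
+than that will bottleneck at the array interfaces rather than at the disks.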
+
+CloudStack Networking For Storage
+---------------------------------
+
+The first thing to understand is the process of provisioning primary storage. When you create a primary
+storage pool for any given cluster, the CloudStack management server tells each host’s hypervisor to
+mount the NFS share (or iSCSI LUN). The storage pool will be presented within the hypervisor as a
+datastore (VMware), storage repository (XenServer/XCP) or a mount point (KVM). The important point is
+that it is the hypervisor itself that communicates with the primary storage; the CloudStack management
+server only communicates with the host hypervisor. Now, all hypervisors communicate with the outside
+world via some kind of management interface – think VMkernel port on ESXi or ‘Management Interface’ on
+XenServer. As the CloudStack management server needs to communicate with the hypervisor in the host,
+this management interface must be on the CloudStack ‘management’ or ‘private’ network. There may be
+other interfaces configured on your host carrying guest and public traffic to/from VMs within the hosts,
+but the hypervisor itself doesn’t/can’t communicate over these interfaces.
+
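+As an illustrative sketch only (the server address, export path and mount point are placeholders, and
+CloudStack performs this step automatically through the hypervisor), the end result on a KVM host is
+simply an NFS mount made by the hypervisor itself:
+
+.. sourcecode:: bash
+
+   # Manual equivalent of what the hypervisor does when a primary storage pool is added;
+   # CloudStack itself mounts the pool under a UUID-based path.
+   mount -t nfs 10.50.1.10:/export/primary /mnt/primary
+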
+|hypervisorcomms.png: hypervisor storage communication|
+Figure 1: Hypervisor communications
+
+**Separating Primary Storage traffic**
+
+For those from a pure virtualisation background, the concept of creating a specific interface for storage
+traffic will not be new; it has long been best practice for iSCSI traffic to have a dedicated switch
+fabric to avoid any latency or contention issues.
+Sometimes in the cloud(Stack) world we forget that we are simply orchestrating processes that the
+hypervisors already carry out and that many ‘normal’ hypervisor configurations still apply.
+The logical reasoning which explains how this splitting of traffic works is as follows:
+
+1. If you want an additional interface over which the hypervisor can communicate (excluding teamed or bonded interfaces), you need to give it an IP address.
+#. The mechanism to create an additional interface that the hypervisor can use is to create an additional management interface.
+#. So that the hypervisor can differentiate between the management interfaces, they have to be in different (non-overlapping) subnets.
+#. In order for the ‘primary storage’ management interface to communicate with the primary storage, the interfaces on the primary storage arrays must be in the same CIDR as the ‘primary storage’ management interface.
+#. Therefore the primary storage must be in a different subnet to the management network.
+
+|subnetting storage.png: subnetted storage interfaces|
+Figure 2: Subnetting of Storage Traffic
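+
+A minimal sketch of the subnetting shown in Figure 2, assuming a KVM host whose management network is
+192.168.10.0/24 and whose dedicated storage network is 10.50.1.0/24 (all interface names and addresses
+are examples only):
+
+.. sourcecode:: bash
+
+   # Management interface - in the same subnet as the CloudStack management server
+   ip addr add 192.168.10.11/24 dev cloudbr0
+   # Additional interface dedicated to primary storage traffic, in a separate,
+   # non-overlapping subnet; the storage array's interfaces sit in this same CIDR
+   ip addr add 10.50.1.11/24 dev eth1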
+
+|hypervisorcomms-secstorage.png: separated hypervisor communications with secondary storage|
+Figure 3: Hypervisor Communications with Separated Storage Traffic
+
+**Other Primary Storage Types**
+
+If you are using PreSetup or SharedMountPoints to connect to IP-based storage then the same principles
+apply; if the primary storage and ‘primary storage interface’ are in a different subnet to the ‘management
+subnet’ then the hypervisor will use the ‘primary storage interface’ to communicate with the primary
+storage.
+
+
+Small-Scale Example Configurations
+----------------------------------
 
 In this section we go through a few examples of how to set up storage to
 work properly on a few types of NFS and iSCSI storage systems.
@@ -97,28 +211,18 @@ operating system version.
 
    Adjust the above command to suit your deployment needs.
 
-   -  **Limiting NFS export.** It is highly recommended that you limit
-      the NFS export to a particular subnet by specifying a subnet mask
-      (e.g.,”192.168.1.0/24”). By allowing access from only within the
-      expected cluster, you avoid having non-pool member mount the
-      storage. The limit you place must include the management
-      network(s) and the storage network(s). If the two are the same
-      network then one CIDR is sufficient. If you have a separate
-      storage network you must provide separate CIDR’s for both or one
-      CIDR that is broad enough to span both.
+-  **Limiting NFS export.** It is highly recommended that you limit the NFS export to a particular subnet by specifying a subnet mask (e.g., ”192.168.1.0/24”). By allowing access from only within the expected cluster, you avoid having non-pool members mount the storage. The limit you place must include the management network(s) and the storage network(s). If the two are the same network then one CIDR is sufficient. If you have a separate storage network you must provide separate CIDRs for both, or one CIDR that is broad enough to span both.
 
-      The following is an example with separate CIDRs:
+
+   The following is an example with separate CIDRs:
 
-      .. sourcecode:: bash
+   .. sourcecode:: bash
 
-         /export 192.168.1.0/24(rw,async,no_root_squash,no_subtree_check) 10.50.1.0/24(rw,async,no_root_squash,no_subtree_check)
+      /export 192.168.1.0/24(rw,async,no_root_squash,no_subtree_check) 10.50.1.0/24(rw,async,no_root_squash,no_subtree_check)
 
-   -  **Removing the async flag.** The async flag improves performance
-      by allowing the NFS server to respond before writes are committed
-      to the disk. Remove the async flag in your mission critical
-      production deployment.
+-  **Removing the async flag.** The async flag improves performance by allowing the NFS server to respond before writes are committed to the disk. Remove the async flag in your mission-critical production deployment.
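+
+   For example, the same style of export line with the async flag replaced by sync:
+
+   .. sourcecode:: bash
+
+      /export 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)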
 
-#. Run the following command to enable NFS service.
+6. Run the following command to enable NFS service.
 
    .. sourcecode:: bash
 
@@ -157,9 +261,7 @@ operating system version.
    An NFS share called /export is now set up.
 
 .. note::
-   When copying and pasting a command, be sure the command has pasted as a 
-   single line before executing. Some document viewers may introduce unwanted 
-   line breaks in copied text.
+   When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
 
 
 Linux NFS on iSCSI
@@ -245,3 +347,8 @@ Now you can set up /export as an NFS share.
    allowing the NFS server to respond before writes are committed to the
    disk. Remove the async flag in your mission critical production
    deployment.
+
+
+.. |hypervisorcomms.png: hypervisor storage communication| image:: ../_static/images/hypervisorcomms.png
+.. |subnetting storage.png: subnetted storage interfaces| image:: ../_static/images/subnetting storage.png
+.. |hypervisorcomms-secstorage.png: separated hypervisor communications with secondary storage| image:: ../_static/images/hypervisorcomms-secstorage.png
\ No newline at end of file

