cloudstack-dev mailing list archives

From John Burwell <>
Subject [DISCUSS/PROPOSAL] CCC13 Hackfest: Storage Architecture Summary
Date Mon, 08 Jul 2013 18:47:17 GMT

During the CloudStack Collab 2013 Hackfest, a group of users and developers got together to
discuss the current storage architecture and ideas for future evolution.  The group focused
on the following topics:

	* Storage architecture overview and 4.2 enhancements
	* Storage use cases and deployment models
	* Vendor driver needs
	* Prioritization of desired storage enhancements

From the storage enhancement prioritization discussion, I would like to bring forward the
following storage prioritization proposal for the next (and possibly subsequent) CloudStack
releases:

	1. Breaking Storage -> Hypervisor Layer Dependencies:  Currently, the storage layer includes
hypervisor-specific code for each supported hypervisor.  Additionally, the hypervisor layer
includes storage-type specific code for each storage type.  This circular dependency bloats
the storage layer and greatly complicates storage device driver implementation.  Additionally,
it makes future enhancement much more complex and risky.  This effort would represent a set
of enhancements to break down the storage layer into a set of composable primitives that would
be consumed by dependent layers (e.g. Hypervisor).  I will initiate a separate discussion
thread to flesh out the nature of the dependencies and high-level approaches to addressing
them.
	2. Streamlined Storage Driver Model:  As part of the hypervisor/storage decoupling, refactor
the storage device driver model to support a set of basic, composable operations that function
in terms of logical URIs and Java I/O streams.  As we discussed storage devices, we realized
they minimally perform seven I/O operations -- read, write, copyTo, clone, delink (non-destructive
delete), destroy (destructive delete), and list (?) -- with the relevant paths expressed as URIs.
 Additionally, drivers would describe their capabilities (e.g. manageable, snapshotable, etc).
 I plan to include my opinions on this topic as part of the storage -> hypervisor decoupling
discussion thread.
	3. Storage Device Maintenance Mode:  Provide a generalized orchestration mechanism to put
a storage device into maintenance mode.  This capability would likely include an asynchronous
internal API for storage layer clients (e.g. Hypervisor) to be notified when a device plans
to go into maintenance mode and, if necessary, abort it.
	4. Generic Properties/Details:  Enhance the DataStore to support storing a property bag of additional
configuration information specific to the associated storage device driver.  In order to support
proper validation and UI display of this information, the storage device driver model would
include a mechanism to describe the nature of the properties and callbacks to perform runtime
validation of the property bag before persistence.  Finally, storage orchestration would ensure
that this information is always passed into the driver for each operation.
	5. Backup/Storage Snapshots:  Support transfer of storage snapshots from device to device
(e.g. from a SAN to an object store).  Dependent on the flexibility of the streamlined storage
driver enhancements, this capability may be possible to implement completely in the orchestration
layer.  If the Storage/Hypervisor Decoupling work does not split the notions of storage and
hypervisor snapshots, this enhancement would likely require that split.
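To make item 2 concrete, here is a minimal sketch of what such a driver contract could look like. All names (StorageDeviceDriver, cloneTo, the capability strings) are illustrative assumptions, not the actual CloudStack API; checked exceptions are elided to keep the sketch short, and a toy in-memory driver shows how little a device driver would need to implement.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.net.URI;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical contract: seven composable operations over logical URIs and
// Java I/O streams, plus a capability descriptor.
interface StorageDeviceDriver {
    InputStream read(URI path);
    void write(URI path, InputStream data);
    void copyTo(URI source, URI destination);
    void cloneTo(URI source, URI destination);  // 'clone' collides with Object#clone
    void delink(URI path);                      // non-destructive delete (unmap only)
    void destroy(URI path);                     // destructive delete
    List<URI> list(URI prefix);
    Set<String> capabilities();                 // e.g. "manageable", "snapshotable"
}

// Toy in-memory driver, purely for illustration.
class InMemoryDriver implements StorageDeviceDriver {
    private final Map<URI, byte[]> store = new HashMap<>();

    public InputStream read(URI path) { return new ByteArrayInputStream(store.get(path)); }

    public void write(URI path, InputStream data) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            data.transferTo(out);
            store.put(path, out.toByteArray());
        } catch (java.io.IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public void copyTo(URI source, URI destination) { store.put(destination, store.get(source).clone()); }
    public void cloneTo(URI source, URI destination) { copyTo(source, destination); }
    public void delink(URI path) { store.remove(path); }   // a real driver would keep the bits
    public void destroy(URI path) { store.remove(path); }

    public List<URI> list(URI prefix) {
        return store.keySet().stream()
                .filter(u -> u.toString().startsWith(prefix.toString()))
                .collect(Collectors.toList());
    }

    public Set<String> capabilities() { return Set.of("snapshotable"); }

    // Convenience helper: read a stored blob back as a String.
    static String slurp(StorageDeviceDriver d, URI p) {
        try { return new String(d.read(p).readAllBytes()); }
        catch (java.io.IOException e) { throw new UncheckedIOException(e); }
    }
}
```

The point of keeping the operation set this small is that higher-level behavior (templates, volumes, backups) composes out of these primitives in the orchestration layer rather than inside each driver.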
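The notify-and-abort flow in item 3 could be sketched as a simple veto protocol; the names here (MaintenanceListener, MaintenanceCoordinator) are illustrative assumptions, and a real implementation would be asynchronous rather than a synchronous loop.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical notification contract: storage-layer clients such as the
// hypervisor layer register a listener; any listener can veto the
// transition, e.g. while a migration is in flight.
interface MaintenanceListener {
    boolean onMaintenancePlanned(String deviceId); // false = abort the transition
}

class MaintenanceCoordinator {
    private final List<MaintenanceListener> listeners = new ArrayList<>();

    void register(MaintenanceListener listener) { listeners.add(listener); }

    // The device is quiesced only if every registered client agrees.
    boolean requestMaintenance(String deviceId) {
        for (MaintenanceListener listener : listeners) {
            if (!listener.onMaintenancePlanned(deviceId)) {
                return false; // a client aborted; the device stays online
            }
        }
        return true;
    }
}
```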
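For item 4, the driver-declared property metadata plus validation callback might look roughly like the following; PropertySpec and PropertyBagValidator are hypothetical names, and the "iqn"/"chapUser" properties in the usage are invented examples, not actual CloudStack details.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical shape for driver-declared property metadata: each detail the
// driver accepts is described once, with a runtime check the orchestration
// layer runs before persisting the bag.
class PropertySpec {
    final String name;
    final boolean required;
    final Predicate<String> validator;

    PropertySpec(String name, boolean required, Predicate<String> validator) {
        this.name = name;
        this.required = required;
        this.validator = validator;
    }
}

class PropertyBagValidator {
    // Returns human-readable errors; an empty list means the bag may be persisted.
    static List<String> validate(List<PropertySpec> specs, Map<String, String> bag) {
        List<String> errors = new ArrayList<>();
        for (PropertySpec spec : specs) {
            String value = bag.get(spec.name);
            if (value == null) {
                if (spec.required) errors.add("missing required property: " + spec.name);
            } else if (!spec.validator.test(value)) {
                errors.add("invalid value for " + spec.name + ": " + value);
            }
        }
        return errors;
    }
}
```

The same metadata that drives validation could also drive UI rendering, so a driver describes its configuration once.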
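Item 5's claim that snapshot transfer could live entirely in the orchestration layer follows from the stream-based driver model: if each device exposes read and write over streams, device-to-device copy is just plumbing. A speculative sketch, with all names (StreamDevice, SnapshotTransfer, MemDevice) invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Assumed minimal stream-based device surface.
interface StreamDevice {
    InputStream read(URI path);
    void write(URI path, InputStream data);
}

class SnapshotTransfer {
    // e.g. source = SAN primary storage, target = object store for backup
    static void transfer(StreamDevice source, URI snapshot, StreamDevice target, URI backup) {
        try (InputStream in = source.read(snapshot)) {
            target.write(backup, in); // no hypervisor involvement needed
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

// Toy in-memory device so the plumbing can be exercised end to end.
class MemDevice implements StreamDevice {
    final Map<URI, byte[]> blobs = new HashMap<>();

    public InputStream read(URI path) { return new ByteArrayInputStream(blobs.get(path)); }

    public void write(URI path, InputStream data) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            data.transferTo(out);
            blobs.put(path, out.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```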

For those in attendance, please correct and/or expand on my capture/recollection.  


P.S. I have CC'ed the users@ list to bring this to the attention of the users involved and
gather their thoughts/feedback as well.
