cloudstack-dev mailing list archives

From John Burwell <>
Subject Re: [MERGE] object_store branch into master
Date Mon, 20 May 2013 19:56:14 GMT

Since this change is so large, it makes reviewing and commenting in detail extremely difficult.
 Would it be possible to push this patch through Review Board to ease comprehension and promote
a conversation about this patch?

Reading through the FS, I have the following questions regarding the operation of the NFS cache:

What happens if/when the disk space of the NFS cache is exhausted?  What are the sizing recommendations/guidelines
for it?
What strategy is used to age files out of the NFS cache?
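For illustration, one common aging strategy for a cache directory is least-recently-used eviction; the sketch below is hypothetical and is not taken from the object_store branch or its FS:

```python
import os

def evict_lru(cache_dir, capacity_bytes):
    """Delete least-recently-accessed files until the cache fits the budget.

    A hypothetical LRU aging policy for an NFS cache directory; the actual
    branch may use a different strategy entirely.
    """
    entries = []
    total = 0
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path):
            st = os.stat(path)
            entries.append((st.st_atime, st.st_size, path))
            total += st.st_size
    # Oldest access time first: these are the eviction candidates.
    entries.sort()
    evicted = []
    for atime, size, path in entries:
        if total <= capacity_bytes:
            break
        os.remove(path)
        total -= size
        evicted.append(path)
    return evicted
```

Note that plain LRU interacts badly with the shared-reader scenario below: files still open by another process must be excluded from eviction.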
If two processes, process1 and process2, are both using a template, templateA, will both processes
reference the same file in the NFS cache?  If they are reading from the same file and process1
finishes before process2, will process1 attempt to delete the file out from under process2?
If a file transfer from the NFS cache to the object store fails, what is the recovery/retry
strategy?  What durability guarantees will CloudStack supply when a snapshot, template, or
ISO is in the cache, but can't be written to the object store?
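As a point of reference, a typical recovery strategy for such transfers is bounded retry with exponential backoff; this is a generic sketch, with illustrative names, not the strategy the object_store branch actually implements:

```python
import time

def upload_with_retry(upload_fn, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call upload_fn() until it succeeds, backing off exponentially.

    upload_fn is any callable that raises on failure; a generic pattern,
    not code from the object_store branch.
    """
    for attempt in range(max_attempts):
        try:
            return upload_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # 1s, 2s, 4s, ... between attempts
            sleep(base_delay * (2 ** attempt))
```

Even with retries, the durability question stands: while the object sits only in the cache, a cache-host failure loses it, so the interesting guarantee is when CloudStack reports the snapshot/template as successfully stored.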
What will be the migration strategy for the objects contained in S3 buckets/Swift containers
from pre-4.2.0 instances?  Currently, CloudStack tracks a mapping between these objects and
templates/ISOs in the template_swift_ref and template_s3_ref tables.

Finally, does the S3 implementation use multi-part upload to transfer files to the object
store?  If not, the implementation will be limited to storing files no larger than 5GB in S3.
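For context: S3 caps a single PUT at 5 GB, while multi-part upload raises the object limit to 5 TB by splitting the file into parts of 5 MB-5 GB each (at most 10,000 parts). A quick sketch of the part math, illustrative only:

```python
# S3 limits (per AWS documentation): single PUT <= 5 GB, parts 5 MB - 5 GB,
# at most 10,000 parts, total object size <= 5 TB.
GB = 1024 ** 3
SINGLE_PUT_LIMIT = 5 * GB

def plan_multipart(object_size, part_size=100 * 1024 * 1024):
    """Return the number of parts needed, or 1 if a single PUT suffices.

    Illustrative helper, not code from the object_store branch.
    """
    if object_size <= SINGLE_PUT_LIMIT:
        return 1
    # Ceiling division: the last part may be smaller than part_size.
    parts = -(-object_size // part_size)
    if parts > 10000:
        raise ValueError("part_size too small: S3 allows at most 10,000 parts")
    return parts
```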


On May 20, 2013, at 1:52 PM, Chip Childers <> wrote:

> On Fri, May 17, 2013 at 08:19:57AM -0400, David Nalley wrote:
>> On Fri, May 17, 2013 at 4:11 AM, Edison Su <> wrote:
>>> Hi all,
>>>     Min and I have worked on the object_store branch for the last month and a half.
We did a lot of refactoring of the storage code, mostly related to secondary storage, but also
to the general storage framework. The following goals were achieved:
>>> 1.       A unified storage framework. Secondary storages (nfs/s3/swift etc.)
and primary storages share the same plugin model and the same interface. Adding any new
storage to cloudstack will be much easier and more straightforward.
>>> 2.       The storage interface between the mgt server and the resource is unified; currently
there are only 5 commands sent out by the mgt server: copycommand/createobjectcommand/deletecommand/attachcommand/dettachcommand,
and each storage vendor can decode/encode all the entities (volume/snapshot/storage pool/template
etc.) on its own.
>>> 3.       NFS secondary storage is no longer explicitly depended on by other components.
For example, when registering a template into S3, the template will be written to S3 directly, instead
of being stored on nfs secondary storage and then pushed to S3. If s3 is used as secondary storage,
then nfs storage will be used as cache storage, but from other components' point of view, the cache
storage is invisible. So, it's possible to make nfs storage optional if s3 is used, for
certain hypervisors.
>>> The detailed FS is at
>>> The test we did:
>>> 1.       We modified marvin to use the new storage api
>>> 2.       Test_volume, test_vm_life_cycle, and test_template under the smoke test folder
were executed against xenserver/kvm/vmware and devcloud. Some of them failed, partly
due to bugs introduced by our code, and partly because the master branch itself has issues (e.g. resizevolume
doesn't work). We want to fix these issues after merging into master.
>>> The basic flow does work: create user vm, attach/detach volume, register template,
create template from volume/snapshot, take snapshot, create volume from snapshot.
>>>  It's a huge change, around a 60k LOC patch. To review the code, you can try: git
diff master..object_store, which will show the full diff.
>>>  Comments/feedback are welcome. Thanks.
>> Given the amount of change, can we get at least a BVT run against your
>> branch done before merge?
>> --David
> +1 to BVT please.
