cloudstack-dev mailing list archives

From Min Chen <>
Subject Re: Object based Secondary storage.
Date Mon, 17 Jun 2013 16:49:31 GMT

	Let me clarify: we don't do any extra compression before sending to S3.
When the user provides a URL pointing to a compressed template during
registration, we just download that template to S3 without decompressing
it afterwards, as we currently do for NFS. If the URL the user provides
is not in a compressed format, we just send the uncompressed version to
S3.
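The behavior described above can be sketched as follows. The function names and the suffix check are illustrative assumptions, not CloudStack's actual code; the point is that the registered bytes reach S3 unchanged, and decompression happens only at the staging step:

```python
import gzip

def is_compressed(url: str) -> bool:
    # Assumption: compression is detected with a simple suffix test on the
    # registered URL; CloudStack's real detection may differ.
    return url.endswith(".gz")

def bytes_for_s3(url: str, payload: bytes) -> bytes:
    # The template goes to S3 exactly as fetched: no extra compression is
    # added, and a .gz payload is not unpacked on the way in.
    return payload

def stage_to_primary(url: str, payload: bytes) -> bytes:
    # Decompression is deferred until the template is copied down to
    # primary storage via the staging area.
    if is_compressed(url):
        return gzip.decompress(payload)
    return payload
```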


On 6/17/13 9:45 AM, "John Burwell" <> wrote:

>Why are objects being compressed before being sent to S3?
>On Jun 17, 2013, at 12:24 PM, Min Chen <> wrote:
>> Hi Tom,
>> 	Thanks for your testing. Glad to hear that multipart is working fine
>> using Cloudian. Regarding your questions about the .gz template, that
>> is as expected. We will upload it to S3 in its .gz format. Only when the
>> template is used and downloaded to primary storage do we use the staging
>> area to decompress it.
>> 	We will look at the bugs you filed and update them accordingly.
>> 	-min
>> On 6/17/13 12:31 AM, "Thomas O'Dowd" <> wrote:
>>> Thanks Min - I filed 3 small issues today. I've a couple more, but I
>>> need to try and repeat them again before I file them and I've no time
>>> right now. Please let me know if you need any further detail on any of
>>> these.
>>> An example of the other issues I'm running into: when I upload
>>> a .gz template on regular NFS storage, it is automatically decompressed
>>> for me, whereas with S3 the template remains as a .gz file. Is this
>>> correct or not? Also, perhaps related, after successfully uploading
>>> the template to S3 and then trying to start an instance using it, I can
>>> select it and go all the way to the last screen, where I think the
>>> button says "launch instance" or something, and it fails with a resource
>>> unreachable error. I'll have to dig up the error later and file the bug,
>>> as my machine got rebooted over the weekend.
>>> The multipart upload looks like it is working correctly though, and I
>>> can verify the checksums etc. are correct against what they should be.
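As an aside on checksum verification: the ETag that S3-compatible stores commonly report for a multipart object is the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. This is a widely implemented convention rather than a guaranteed spec, so treat the sketch below as an assumption to verify against the store in use:

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    # Split into fixed-size parts, MD5 each part, then MD5 the
    # concatenated binary digests; the suffix is the part count.
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return "%s-%d" % (hashlib.md5(digests).hexdigest(), len(parts))
```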
>>> Tom.
>>> On Fri, 2013-06-14 at 16:55 +0000, Min Chen wrote:
>>>> Hi Tom,
>>>> 	You can file JIRA tickets for the object_store branch by prefixing
>>>> your summary with "Object_Store_Refactor" and mentioning that it is
>>>> using a build of object_store. Here is an example bug filed by
>>>> Sangeetha against an object_store branch build:
>>>> 	If you use devcloud for testing, you may run into an issue where the
>>>> SSVM cannot access a public URL when you register a template, so
>>>> template registration will fail. You may have to set up an internal
>>>> web server inside devcloud and post the template to be registered
>>>> there, to give a URL that devcloud can access. We mainly used devcloud
>>>> to run our TestNG automation tests earlier, and then switched to a
>>>> real hypervisor for real testing.
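One minimal way to stand up such an internal web server is Python's built-in `http.server` module. The helper below is a sketch (the function name and port-0 trick are not from the thread); it serves a directory containing the template and returns the bound port so you can build a URL the SSVM can reach:

```python
import http.server
import socketserver
import threading
from functools import partial

def serve_directory(directory, port=0):
    # Port 0 asks the OS for a free port; return the server and the port
    # actually bound so callers can build the template's register URL.
    handler = partial(http.server.SimpleHTTPRequestHandler,
                      directory=directory)
    server = socketserver.TCPServer(("", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```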
>>>> 	Thanks
>>>> 	-min
>>>> On 6/14/13 1:46 AM, "Thomas O'Dowd" <> wrote:
>>>>> Edison,
>>>>> I've got devcloud running along with the object_store branch and I've
>>>>> finally been able to test a bit today.
>>>>> I found some issues (or things that I think are bugs) and would like
>>>>> to file a few. I know where the bug database is and I have an
>>>>> account, but what is the best way to file bugs against this
>>>>> particular branch? I guess I can select "Future" as the version? How
>>>>> else are feature branches usually identified in issues? Perhaps in
>>>>> the summary? Please let me know the preference.
>>>>> Also, can you describe (or point me at a document) the best way to
>>>>> test against the object_store branch? So far I have been doing the
>>>>> following, but I'm not sure it is the best approach:
>>>>> a) setup devcloud.
>>>>> b) stop any instances on devcloud from previous runs
>>>>>     xe vm-shutdown --multiple
>>>>> c) check out and update the object_store branch.
>>>>> d) clean build as described in devcloud doc (ADIDD for short)
>>>>> e) deploydb (ADIDD)
>>>>> f) start management console (ADIDD) and wait for it.
>>>>> g) deploysvr (ADIDD) in another shell.
>>>>> h) on devcloud machine use xentop to wait for 2 vms to launch.
>>>>>   (I'm not sure what the nfs vm is used for here??)
>>>>> i) login on gui -> infra -> secondary and remove nfs secondary
>>>>> j) add s3 secondary storage (using cache of old secondary storage?)
>>>>> Then rest of testing starts from here... (and also perhaps in step j)
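The command-line steps b) through g) above can be collected into a small dry-run checklist. The maven invocations here are paraphrased assumptions to check against the devcloud doc (ADIDD), not authoritative commands:

```python
import subprocess

# Assumed commands for steps b)-g); verify each against the devcloud doc
# before running with dry_run=False.
DEVCLOUD_STEPS = [
    "xe vm-shutdown --multiple",                        # b) stop leftover VMs
    "git checkout object_store && git pull",            # c) update the branch
    "mvn -P developer,systemvm clean install",          # d) clean build
    "mvn -P developer -pl developer -Ddeploydb",        # e) deploydb
    "mvn -pl :cloud-client-ui jetty:run",               # f) management server
    "mvn -P developer -pl tools/devcloud -Ddeploysvr",  # g) deploysvr
]

def run_steps(steps, dry_run=True):
    # Print the plan by default; only execute when explicitly asked.
    for cmd in steps:
        if dry_run:
            print(cmd)
        else:
            subprocess.run(cmd, shell=True, check=True)
```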
>>>>> Thanks,
>>>>> Tom.
>>>>> -- 
>>>>> Cloudian KK -
>>>>> Fancy 100TB of full featured S3 Storage?
>>>>> Checkout the Cloudian® Community Edition!
