cloudstack-issues mailing list archives

From "Tanner Danzey (JIRA)" <j...@apache.org>
Subject [jira] [Closed] (CLOUDSTACK-6397) S3 Uploads to Rados are seemingly capped at 5GB internally & other errors
Date Sun, 11 May 2014 03:16:15 GMT

     [ https://issues.apache.org/jira/browse/CLOUDSTACK-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tanner Danzey closed CLOUDSTACK-6397.
-------------------------------------

    Resolution: Not a Problem

The problem was that we were not using Ceph's pre-built apache2 and fastcgi packages, which include support for 100-Continue.
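
For anyone hitting the same symptom: a quick way to check a gateway is to watch what it answers to a PUT that sends "Expect: 100-continue" before the body. Below is a minimal, hypothetical probe using plain Java sockets (not part of CloudStack); the host, port, bucket and object names are placeholders, and an unauthenticated request will normally get a final 403 back, which is still enough to show whether the server replies before the body is transmitted. If the probe times out or returns something malformed, the 100-Continue path is the likely culprit; with Ceph's patched apache2/fastcgi packages the gateway answers cleanly before the body.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Hypothetical probe, not CloudStack code: sends a PUT with "Expect: 100-continue"
// and reports whatever the gateway answers before the request body is sent.
public class ExpectContinueProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "radosgw.example.com"; // placeholder
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 80;

        try (Socket socket = new Socket(host, port)) {
            socket.setSoTimeout(5000); // don't hang forever if no interim response arrives
            OutputStream out = socket.getOutputStream();
            String headers =
                "PUT /probe-bucket/probe-object HTTP/1.1\r\n" + // placeholder bucket/object
                "Host: " + host + "\r\n" +
                "Content-Length: 16\r\n" +
                "Expect: 100-continue\r\n" +
                "\r\n";
            out.write(headers.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            try {
                // A well-behaved server replies here with "HTTP/1.1 100 Continue"
                // (or a final status such as 403) before we ever send the body.
                System.out.println("Interim response: " + in.readLine());
            } catch (SocketTimeoutException e) {
                System.out.println("No response before the body was sent (100-Continue not handled?)");
            }
        }
    }
}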

> S3 Uploads to Rados are seemingly capped at 5GB internally & other errors
> -------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-6397
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6397
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public (Anyone can view this level - this is the default.)
>          Components: Management Server
>    Affects Versions: 4.2.1, 4.3.0, 4.4.0
>         Environment: Ubuntu Server 13.10, CloudStack 4.2.1, 4.3.0 and 4.4-snapshot, Ceph 0.72 w/ Rados Gateway setup
>            Reporter: Tanner Danzey
>              Labels: ceph, kvm, primary, rbd, s3, secondary, snapshots, storage, templates
>
> From 4.2.1 (at least) to 4.4-snapshot, it seems to me that snapshots uploaded to an S3
> secondary storage pool are limited to 5GB. Regardless of whether you raise the single-part
> size limit to 4TB or enable multi-part uploads, only snapshots of 5GB or less can be
> uploaded from the secondary storage VM. Here is the log output from a 20GB snapshot upload
> to S3 on version 4.4-snapshot with s3.singleupload.max.size set to 1 (multipart, 1GB
> maximum part size):
> 2014-04-13 01:42:48,007 DEBUG [c.c.s.s.SnapshotManagerImpl] (Work-Job-Executor-3:job-70/job-71 ctx-90a1bdf1) Failed to create snapshot
> com.cloud.utils.exception.CloudRuntimeException: failed to uploadsnapshots/2/3/d44ffa8d-f190-4f1b-8b98-66597d928265com.amazonaws.AmazonClientException: Unable to unmarshall error response (White spaces are required between publicId and systemId.)
>         at org.apache.cloudstack.storage.snapshot.SnapshotServiceImpl.backupSnapshot(SnapshotServiceImpl.java:282)
>         at org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.backupSnapshot(XenserverSnapshotStrategy.java:137)
>         at org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.takeSnapshot(XenserverSnapshotStrategy.java:300)
>         at com.cloud.storage.snapshot.SnapshotManagerImpl.takeSnapshot(SnapshotManagerImpl.java:925)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>         at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>         at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>         at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>         at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>         at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>         at com.sun.proxy.$Proxy177.takeSnapshot(Unknown Source)
>         at org.apache.cloudstack.storage.volume.VolumeServiceImpl.takeSnapshot(VolumeServiceImpl.java:1503)
>         at com.cloud.storage.VolumeApiServiceImpl.orchestrateTakeVolumeSnapshot(VolumeApiServiceImpl.java:1731)
>         at com.cloud.storage.VolumeApiServiceImpl.orchestrateTakeVolumeSnapshot(VolumeApiServiceImpl.java:2465)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
>         at com.cloud.storage.VolumeApiServiceImpl.handleVmWorkJob(VolumeApiServiceImpl.java:2473)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
>         at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
>         at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
>         at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
>         at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
>         at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
>         at com.sun.proxy.$Proxy181.handleVmWorkJob(Unknown Source)
>         at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
>         at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
>         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>         at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:452)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> The snapshot is successfully created (State CreatedOnPrimary) from RBD and prepared for upload, but fails thereafter (State Error).
> A similar error occurs with the s3 option set to -1 (single-part only), but in that case the
> S3 endpoint returns an HTTP 400 (BadRequest) response stating that the error was caused by
> "EntityTooLarge", which could easily be a server-side configuration issue I am not aware of,
> hence the mention here.
> Our Rados setup functions 100% correctly when accessed with S3Browser and s3cmd (both
> multi-part and single-part uploads tested), so it would seem that the S3 server
> configuration isn't at fault.
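
A note on the 5GB figure in the report above: a single S3 PUT is capped at 5GB by the S3 API itself (and Rados Gateway follows the same convention as far as I know), which is why the single-part attempt comes back with EntityTooLarge no matter how the server is configured; anything larger has to go through the multipart-upload API, which is what lowering s3.singleupload.max.size is meant to force. The sketch below shows the same idea with the AWS SDK for Java, the client library named in the trace; the endpoint, credentials, bucket, key and file path are placeholders, and this is not CloudStack's actual upload path.

import java.io.File;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;
import com.amazonaws.services.s3.transfer.Upload;

// Hypothetical standalone uploader; every name below is a placeholder.
public class MultipartSnapshotUpload {
    public static void main(String[] args) throws Exception {
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        s3.setEndpoint("http://radosgw.example.com");   // RADOS Gateway S3 endpoint
        S3ClientOptions options = new S3ClientOptions();
        options.setPathStyleAccess(true);               // RGW is commonly addressed path-style
        s3.setS3ClientOptions(options);

        // A plain s3.putObject(...) of an object larger than 5GB is rejected by the
        // S3 API (EntityTooLarge). TransferManager switches to the multipart API
        // once a file crosses the configured threshold, so large snapshots go up
        // as a series of parts instead of one oversized PUT.
        TransferManager tm = new TransferManager(s3);
        TransferManagerConfiguration cfg = new TransferManagerConfiguration();
        cfg.setMultipartUploadThreshold(1024 * 1024 * 1024); // multipart above 1GB
        cfg.setMinimumUploadPartSize(128 * 1024 * 1024);     // 128MB parts
        tm.setConfiguration(cfg);

        Upload upload = tm.upload("snapshot-bucket",
                "snapshots/example-snapshot.vhd",
                new File("/var/tmp/example-snapshot.vhd"));
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}

Individual parts are themselves capped at 5GB each by the S3 multipart rules, so the part size still matters for very large snapshots.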



--
This message was sent by Atlassian JIRA
(v6.2#6252)
