cloudstack-issues mailing list archives

From "Thomas O'Dowd (JIRA)" <>
Subject [jira] [Commented] (CLOUDSTACK-3229) Object_Store_Refactor - Snapshot fails due to an internal error
Date Thu, 01 Aug 2013 00:21:48 GMT


Thomas O'Dowd commented on CLOUDSTACK-3229:

Hi John,

Thanks. I'll test this today. 
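When I re-test I'll confirm the backup actually lands on the object store as well as checking the UI. Probably with a quick boto listing rather than s3cmd this time, something along the lines of the sketch below (the bucket name, endpoint, credentials and the snapshot key prefix are placeholders; templates showed up under template/tmpl/ so I'm assuming snapshot backups get a similar top-level prefix):

    import boto
    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    # Placeholder credentials/endpoint for the Cloudian store registered
    # as secondary storage; adjust to whatever CloudStack was given.
    conn = S3Connection("ACCESS_KEY", "SECRET_KEY",
                        host="s3.example.local", is_secure=False,
                        calling_format=OrdinaryCallingFormat())
    bucket = conn.get_bucket("cloudstack-secondary")

    # Templates appear under "template/tmpl/..."; list whatever prefix
    # the snapshot backups end up under to confirm they reached S3.
    for key in bucket.list(prefix="snapshots/"):
        print("%s %s %s" % (key.name, key.last_modified, key.size))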

Is there any reason you didn't also fix "isHttps"? I don't see it in the patch, so I presume
this will still be broken. (I only fixed my s3xen locally on my devcloud machine just so I
could get past that problem.) I presume the proper fix would be to also pass it along to
the plugin rather than fixing the plugin to handle its absence?
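For reference, my local workaround is only on the plugin side, and is roughly the shape below (argument names are illustrative, from memory, not necessarily the exact s3xen keys): treat a missing isHttps as plain HTTP instead of failing the whole operation.

    def is_https_enabled(args):
        # The plugin currently assumes the management server always sends
        # "isHttps"; until that side passes it through, treat a missing or
        # unparsable value as plain HTTP rather than failing the backup.
        return str(args.get("isHttps", "false")).lower() in ("true", "1")

As I said though, having the management server pass the flag through with the rest of the S3 details still seems like the right long-term fix.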

> Object_Store_Refactor - Snapshot fails due to an internal error
> ---------------------------------------------------------------
>                 Key: CLOUDSTACK-3229
>                 URL:
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>    Affects Versions: 4.2.0
>         Environment: chrome on linux 
> devcloud 
> Cloudian or Amazon S3 Object store
>            Reporter: Thomas O'Dowd
>            Priority: Critical
> Assuming initial devcloud state... 
> I added a cache for the S3 storage like this. 
> on devcloud machine as root: 
> # mkdir /opt/storage/cache 
> # vi /etc/exports (and append this line) 
> /opt/storage/cache *(rw,no_subtree_check,no_root_squash,fsid=9999) 
> # exportfs -a 
> On Mgmt server GUI: 
> 1. navigate to infrastructure -> secondary storage 
> 2. delete the NFS secondary storage. 
> 3. add S3 storage for Cloudian (I used 60000 as the timeouts - assuming millis). I used the /opt/storage/cache thing as the s3 cache. 
> 4. nav to templates 
> 5. register a new template (I uploaded tinyLinux again as "mytiny" (5.3 64bit)). 
> 6. confirm with s3cmd that 2 objects are now on S3. 
> --------- s3 objects ------- 
> template/tmpl/1/1/routing-1/acton-systemvm-02062012.vhd.bz2 2013-06-27T03:01:46.203Z None 140616708 "b533e7b65219439ee7fca0146ddd7ffa-27" 
> template/tmpl/2/201/201-2-ae9e9409-4c8e-3ad8-a62f-abec7a35fe26/tinylinux.vhd 2013-06-27T03:04:06.730Z None 50430464 "4afac316e865adf74ca1a8039fae7399-10" 
> --------- s3 objects ------- 
> 7. I restarted the management server at this point which actually resulted in another object on S3. 
> --------- the new s3 object ------- 
> template/tmpl/1/5/tiny Linux/ttylinux_pv.vhd 2013-06-27T03:43:26.494Z None 50430464 "4afac316e865adf74ca1a8039fae7399-10"
> --------- the new s3 object ------- 
> 8. Go to Instances and create a new one, choosing the "mytiny" template which we registered.
> 9. launch it after selecting all defaults. 
> 10. wait until it starts.
> 11. nav to storage. I see ROOT-8. Click on this to open.
> 12. click the camera to take the snapshot.
> After a pause I get a pop-up:
>      "Failed to create snapshot due to an internal error creating snapshot for volume
> Also on the mgmt terminal I get the following log entry (only 1):
>     INFO  [user.snapshot.CreateSnapshotCmd] (Job-Executor-8:job-16) VOLSS: createSnapshotCmd
> If I check the "view snapshots" button under storage, I can however see the snapshot. It says it's on primary. I'm expecting it to go to secondary storage though. Nothing is in the S3 logs and no snapshots appear there.
> If I try to delete that snapshot from here I get this error in the logs:
> ERROR [cloud.async.AsyncJobManagerImpl] (Job-Executor-12:job-20) Unexpected exception while executing org.apache.cloudstack.api.command.user.snapshot.DeleteSnapshotCmd
> Failed to delete
> Can't delete snapshotshot 4 due to it is not in BackedUp Status
>         at
>         at$InterceptorDispatcher.intercept(
>         at org.apache.cloudstack.api.command.user.snapshot.DeleteSnapshotCmd.execute(
>         at
>         at$
>         at java.util.concurrent.Executors$
>         at java.util.concurrent.FutureTask$Sync.innerRun(
>         at
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> If I navigate to Instances, my instance, and try to take a VM snapshot from here, I get a different pop-up which says:
>    "There is other active volume snapshot tasks on the instance to which the volume is attached, please try again later"
> And I get an exception:
> ERROR [cloud.api.ApiServer] (352129314@qtp-2110413789-32:) unhandled exception executing api command: createVMSnapshot
> There is other active volume snapshot tasks on the instance to which the volume is attached, please try again later.
>         at
>         at org.apache.cloudstack.api.command.user.vmsnapshot.CreateVMSnapshotCmd.create(
>         at
>         at
>         at
>         at
>         at
>         at javax.servlet.http.HttpServlet.service(
>         at javax.servlet.http.HttpServlet.service(
>         at org.mortbay.jetty.servlet.ServletHolder.handle(
>         at org.mortbay.jetty.servlet.ServletHandler.handle(
>         at
>         at org.mortbay.jetty.servlet.SessionHandler.handle(
>         at org.mortbay.jetty.handler.ContextHandler.handle(
>         at org.mortbay.jetty.webapp.WebAppContext.handle(
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(
>         at org.mortbay.jetty.handler.HandlerCollection.handle(
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(
>         at org.mortbay.jetty.Server.handle(
>         at org.mortbay.jetty.HttpConnection.handleRequest(
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(
>         at org.mortbay.jetty.HttpParser.parseNext(
>         at org.mortbay.jetty.HttpParser.parseAvailable(
>         at org.mortbay.jetty.HttpConnection.handle(
>         at
>         at org.mortbay.thread.QueuedThreadPool$
> There are no requests going to the S3 storage for the snapshotting that I can see, and it's the only secondary storage that I have set up.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
