cloudstack-issues mailing list archives

From "Thomas O'Dowd (JIRA)" <>
Subject [jira] [Commented] (CLOUDSTACK-3229) Object_Store_Refactor - Snapshot fails due to an internal error
Date Wed, 31 Jul 2013 08:05:50 GMT


Thomas O'Dowd commented on CLOUDSTACK-3229:

I put in a local fix on devcloud by changing the s3xen code to work even if isHttps is not available.

----------------------------- fixed -------------------------------
def parseArguments(args):

    # The keys in the args map correspond to the properties defined
    # on the interface. Default isHttps to False when the management
    # server does not pass it through.
    isHttps = False
    if 'isHttps' in args:
        isHttps = args['isHttps']
    client = S3Client(
        args['accessKey'], args['secretKey'], args['endPoint'],
        isHttps, args['connectionTimeout'], args['socketTimeout'])
----------------------------- fixed -------------------------------
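For what it's worth, the same fallback can be written more compactly with dict.get(). This is only a sketch of the local workaround (the helper name and the string normalisation are my own assumptions, since XAPI plugin arguments typically arrive as strings), not the upstream fix:

```python
def parse_is_https(args):
    # Hypothetical helper: default to plain HTTP when the management
    # server does not pass the 'isHttps' property to the plugin.
    value = args.get('isHttps', 'false')
    # Assumption: plugin args are strings, so normalise to a bool.
    return str(value).lower() == 'true'
```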

Then, I tried snapshotting again just to see if it would work.

Nope... unfortunately, I get a new exception. This time the error is:

           errorInfo: [XENAPI_PLUGIN_FAILURE, s3, KeyError, 'bucket']

So it passed the isHttps check, but now gets stuck on bucket! Looking down that function a
bit more, I can see:

    operation = args['operation']
    bucket = args['bucket']
    key = args['key']
    filename = args['filename']

Checking the previous dump of args in the log output, sure enough there is no 'bucket' key.

Let's look at my actual S3 configuration, as shown under Infrastructure -> Secondary Storage:

Details	connectiontimeout: 600000, bucket: images, usehttps: false, sockettimeout: 600000,
endpoint:, secretkey: pe5sLXnndBnsOHiMT/IhbJW995kAy1/+HK9+14Uc, accesskey:
00ba9a7f9a8142b070c3, maxerrorretry: 5
ID	ae9cd12e-e366-4bd9-a524-4b79f054fce1

We can see that bucket is set to "images" and "usehttps" (not isHttps) is set to false.
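If the management server really is sending lowercase detail keys like "usehttps" while the plugin reads "isHttps", a translation table on one side or the other would reconcile them. A purely hypothetical sketch of that mapping (the alias table is my guess from the detail dump above, not code from CloudStack):

```python
# Hypothetical mapping from the secondary-storage detail keys shown
# above to the property names the s3xen plugin appears to expect.
KEY_ALIASES = {'usehttps': 'isHttps', 'endpoint': 'endPoint',
               'accesskey': 'accessKey', 'secretkey': 'secretKey'}

def normalise_args(details):
    # Rename known aliases; pass everything else through unchanged.
    return dict((KEY_ALIASES.get(k, k), v) for k, v in details.items())
```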

Anyway, I don't think I can take this any further right now without more help. Does this really work with XenServer?
> Object_Store_Refactor - Snapshot fails due to an internal error
> ---------------------------------------------------------------
>                 Key: CLOUDSTACK-3229
>                 URL:
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>    Affects Versions: 4.2.0
>         Environment: chrome on linux 
> devcloud 
> Cloudian or Amazon S3 Object store
>            Reporter: Thomas O'Dowd
>            Priority: Critical
> Assuming initial devcloud state... 
> I added a cache for the S3 storage like this. 
> on devcloud machine as root: 
> # mkdir /opt/storage/cache 
> # vi /etc/exports (and append this line) 
> /opt/storage/cache *(rw,no_subtree_check,no_root_squash,fsid=9999) 
> # exportfs -a 
> On Mgmt server GUI: 
> 1. navigate to infrastructure -> secondary storage 
> 2. delete the NFS SS. 
> 3. add S3 storage for Cloudian (I used 60000 as the timeouts - assuming millis). I used the /opt/storage/cache thing as the s3 cache.
> 4. nav to templates 
> 5. register a new template (I uploaded tinyLinux again as "mytiny" (5.3 64bit)). 
> 6. confirm with s3cmd that 2 objects are now on S3. 
> --------- s3 objects ------- 
> template/tmpl/1/1/routing-1/acton-systemvm-02062012.vhd.bz2 2013-06-27T03:01:46.203Z None 140616708 "b533e7b65219439ee7fca0146ddd7ffa-27"
> template/tmpl/2/201/201-2-ae9e9409-4c8e-3ad8-a62f-abec7a35fe26/tinylinux.vhd 2013-06-27T03:04:06.730Z None 50430464 "4afac316e865adf74ca1a8039fae7399-10"
> --------- s3 objects ------- 
> 7. I restarted the management server at this point, which actually resulted in another object on S3.
> --------- the new s3 object ------- 
> template/tmpl/1/5/tiny Linux/ttylinux_pv.vhd 2013-06-27T03:43:26.494Z None 50430464 "4afac316e865adf74ca1a8039fae7399-10"
> --------- the new s3 object -------
> 8. Go to Instances and create a new instance, choosing the "mytiny" template which we registered.
> 9. launch it after selecting all defaults.
> 10. wait until it starts.
> 11. nav to storage. I see ROOT-8. Click on this to open.
> 12. click the camera to take the snapshot.
> after a pause I get a popup
>      "Failed to create snapshot due to an internal error creating snapshot for volume
> Also on the mgmt terminal I get the following log entry (only 1):
>     INFO  [user.snapshot.CreateSnapshotCmd] (Job-Executor-8:job-16) VOLSS: createSnapshotCmd
> If I check the "view snapshots" button under storage, I can however see the snapshot.
> It says it's on primary. I'm expecting it to go to secondary storage though. Nothing is in
> the S3 logs and no snapshots.
> If I try to delete that snapshot from here I get this error in the logs:
> ERROR [cloud.async.AsyncJobManagerImpl] (Job-Executor-12:job-20) Unexpected exception
> while executing org.apache.cloudstack.api.command.user.snapshot.DeleteSnapshotCmd
> Failed to delete
> Can't delete snapshotshot 4 due to it is not in BackedUp Status
>         at$InterceptorDispatcher.intercept(
>         at org.apache.cloudstack.api.command.user.snapshot.DeleteSnapshotCmd.execute(
>         at$
>         at java.util.concurrent.Executors$
>         at java.util.concurrent.FutureTask$Sync.innerRun(
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
> If I navigate to Instances, my instance, and try to take a VM snapshot from there, I get
> a different pop-up which says:
>    "There is other active volume snapshot tasks on the instance to which the volume is
> attached, please try again later"
> And I get an exception:
> ERROR [cloud.api.ApiServer] (352129314@qtp-2110413789-32:) unhandled exception executing
> api command: createVMSnapshot
> There is other active volume snapshot tasks on the instance to which the volume is
> attached, please try again later.
>         at org.apache.cloudstack.api.command.user.vmsnapshot.CreateVMSnapshotCmd.create(
>         at javax.servlet.http.HttpServlet.service(
>         at javax.servlet.http.HttpServlet.service(
>         at org.mortbay.jetty.servlet.ServletHolder.handle(
>         at org.mortbay.jetty.servlet.ServletHandler.handle(
>         at org.mortbay.jetty.servlet.SessionHandler.handle(
>         at org.mortbay.jetty.handler.ContextHandler.handle(
>         at org.mortbay.jetty.webapp.WebAppContext.handle(
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(
>         at org.mortbay.jetty.handler.HandlerCollection.handle(
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(
>         at org.mortbay.jetty.Server.handle(
>         at org.mortbay.jetty.HttpConnection.handleRequest(
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(
>         at org.mortbay.jetty.HttpParser.parseNext(
>         at org.mortbay.jetty.HttpParser.parseAvailable(
>         at org.mortbay.jetty.HttpConnection.handle(
>         at org.mortbay.thread.QueuedThreadPool$
> There are no requests going to the S3 storage for the snapshotting that I can see, and
> it's the only secondary storage that I have set up.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:
