cloudstack-users mailing list archives

From Justyn Shull <just...@codero.com>
Subject Snapshots not working when using S3 (ceph) ACS 4.4.0
Date Thu, 13 Nov 2014 18:04:45 GMT
I’m trying to enable object storage (Ceph using the S3 radosgw) as a secondary store for
an existing CloudStack installation, and running into some issues. There was already an
existing NFS store being used as the secondary storage, so I used the updateCloudToUseObjectStore
API call (via cloudmonkey) with these params (keys changed):

> update cloudtouseobjectstore name=cephs3 zoneId=749cde04-531a-4e1f-bfa2-ad7f7854b1f8
url=https://10.16.33.172 details[0].key=accesskey details[0].value=xxx details[1].key=secretkey
details[1].value=xxx details[2].key=bucket details[2].value=CLOUDSTACK details[3].key=endpoint
details[3].value=10.16.33.172 provider=S3

1) There were no errors from that call, and as far as I can tell it changed the old NFS store’s
role to ‘ImageCache’ and created the new S3 store in the database. I’m not sure what else
to check to confirm it was successful, or whether it kicks off any long-running processes.
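
One check that might help (assuming I’m reading the API right, I haven’t confirmed the exact
output) is listing the image stores the zone now knows about, from cloudmonkey:

> list imagestores zoneid=749cde04-531a-4e1f-bfa2-ad7f7854b1f8

I’d expect the new cephs3 store to show up there with provider S3, and the demoted NFS cache
store under ‘list secondarystagingstores’, if that call does what I think it does.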

However, when I tried creating a volume snapshot to test it, I ran into trouble. CloudStack
appears to create the snapshot on NFS (I think this part is normal), but when it goes to
upload the snapshot to S3, it uses the wrong local path. This is the log from the hypervisor
(XenServer 6.1.0 with local storage):

###
2014-11-13 10:23:17    DEBUG [root] #### VMOPS enter s3 #### ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS Enetered parseArguments with args: {'maxErrorRetry':
'null', 'key': 'snapshots/4/54/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9', 'maxSingleUploadSizeInBytes':
'5368709120', 'accessKey': 'xxx', 'bucket': 'CLOUDSTACK', 'filename': '/dev/VG_XenStorage-85dfb820-d810-716b-89cb-0e1303da2c2b/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9',
'secretKey': 'xxx', 'socketTimeout': 'null', 'endPoint': '10.16.33.172', 'https': 'false',
'connectionTimeout': 'null', 'operation': 'put', 'iSCSIFlag': 'true'} ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS Operation put on file /dev/VG_XenStorage-85dfb820-d810-716b-89cb-0e1303da2c2b/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9
from/in bucket CLOUDSTACK key snapshots/4/54/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9 ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS Traceback (most recent call last):
  File "/etc/xapi.d/plugins/s3xen", line 414, in s3
    client.put(bucket, key, filename, maxSingleUploadBytes)
  File "/etc/xapi.d/plugins/s3xen", line 325, in put
    raise Exception(
Exception: Attempt to put /dev/VG_XenStorage-85dfb820-d810-716b-89cb-0e1303da2c2b/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9
that does not exist.
 ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS exit s3 with result false ####
###

2) I’m assuming it should be uploading the .vhd it created on the NFS store rather than the
raw LVM path, but I’m not sure whether a config issue or a bug somewhere is causing this.
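
From the traceback, here’s my guess at what the check around s3xen line 325 looks like (just
a sketch reconstructed from the error message, I haven’t diffed it against the actual plugin
source):

    import os

    def put(bucket, key, filename, max_single_upload_bytes):
        # A regular-file test like this fails for the LVM path above:
        # os.path.isfile() returns False for block device nodes such as
        # /dev/VG_XenStorage-.../VHD-..., even when the path exists.
        if not os.path.isfile(filename):
            raise Exception(
                "Attempt to put %s that does not exist." % filename)
        # ... actual single/multipart upload to the endpoint follows

If that’s roughly right, the exception is just a symptom: the management server handed the
plugin the raw device path (note 'iSCSIFlag': 'true' in the args) instead of a staged file.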

3) Am I correct in assuming the general snapshot flow should be like this?
	(all on hypervisor)  Mount NFS -> create .vhd snapshot from local storage/LVM -> upload
.vhd from NFS to S3/object store -> delete .vhd from NFS
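
To make that concrete, here’s rough pseudocode of my mental model (every helper name here is
made up for illustration, none of it is from the actual code):

    import os
    import shutil

    def backup_snapshot_to_s3(lvm_path, nfs_mount, bucket, key):
        # 1) stage: copy the snapshot off the local LVM SR onto NFS
        #    (the real flow would presumably use vhd-util / a sparse
        #    copy, not a raw byte copy like this)
        staged = os.path.join(nfs_mount, os.path.basename(lvm_path) + '.vhd')
        shutil.copyfile(lvm_path, staged)
        # 2) upload the staged copy -- a regular file, so a plain
        #    isfile() check in the uploader would pass
        s3_put(bucket, key, staged)
        # 3) clean up the staging copy
        os.remove(staged)

    def s3_put(bucket, key, path):
        # placeholder standing in for the s3xen put
        print('PUT %s -> s3://%s/%s' % (path, bucket, key))

Is that what’s supposed to happen, or does the S3 upload actually read straight from the LVM
device?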

Any help would be appreciated,

Thanks,