cloudstack-users mailing list archives

From James Kahn <jk...@idea11.com.au>
Subject Re: CloudStack and XenServer 6.0.2 - stray snapshots on primary storage
Date Fri, 24 Aug 2012 21:12:22 GMT
Hi Anthony,

Thanks for that explanation. I don't think that's what's happening here
though. This is definitely occurring from CloudStack snapshots rather than
provisioning. The 400GB disk is a data disk. It's very heavily used, so it
wouldn't surprise me if every block has been touched by the guest VM. It
has a daily scheduled snapshot.

I've run through this scenario with a test VM/disk. It only has a 50GB
root disk.
- Provision VM - creates root disk (e.g. ROOT-1234)
- Snapshot disk
- Snapshot process creates XS snapshot (GUID_ROOT-1234_timestampA)
- Snapshot is copied to secondary storage
- Snapshot operation ends, XS snapshot (GUID_ROOT-1234_timestampA) remains
on primary storage
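
To confirm the leftover on the host, something like the following works (the
SR UUID is a placeholder; substitute your primary SR's UUID):

# list snapshot VDIs still sitting on the primary SR
xe vdi-list sr-uuid=<primary-sr-uuid> is-a-snapshot=true \
   params=uuid,name-label,snapshot-time

The stray VDI shows up with a name-label matching the
GUID_ROOT-1234_timestampA pattern above.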

Performing a second snapshot operation does the following:
- Snapshot disk
- Snapshot process creates a new XS snapshot (GUID_ROOT-1234_timestampB)
- Snapshot is copied to secondary storage
- Snapshot process deletes previous XS snapshot
(GUID_ROOT-1234_timestampA) from primary storage.
- Snapshot operation ends, XS snapshot (GUID_ROOT-1234_timestampB) remains
on primary storage
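
Re-running the same vdi-list afterwards shows timestampA gone and timestampB
left behind, e.g. (placeholder SR UUID again):

# only the newest snapshot for this disk should remain on the SR
xe vdi-list sr-uuid=<primary-sr-uuid> is-a-snapshot=true \
   params=name-label | grep ROOT-1234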


Subsequently, deleting both snapshots from CloudStack does not remove the
stray snapshot from primary storage. It's now 36 hours since I ran this
test and the snapshot is still present on primary storage.
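
For now I'm considering removing the stray VDI by hand, along these lines
(the uuid comes from the vdi-list output above; verify the snapshot exists
on secondary storage first, since vdi-destroy is irreversible):

# sanity check that the VDI really is a snapshot before destroying it
xe vdi-param-get uuid=<stray-snapshot-uuid> param-name=is-a-snapshot
# then remove it from primary storage
xe vdi-destroy uuid=<stray-snapshot-uuid>

I'd welcome confirmation that this is safe to do behind CloudStack's back.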

Thanks,
JK



-----Original Message-----
From: Anthony Xu <Xuefei.Xu@citrix.com>
Reply-To: "cloudstack-users@incubator.apache.org"
<cloudstack-users@incubator.apache.org>
Date: Saturday, 25 August 2012 2:06 AM
To: "cloudstack-users@incubator.apache.org"
<cloudstack-users@incubator.apache.org>
Subject: RE: CloudStack and XenServer 6.0.2 - stray snapshots on primary
storage

>
>Hi James,
>
>For root disk thin provisioning, some snapshots are used as templates.
>
>After you create VM1 from a template, the VHD chain looks like this:
>
>     (base disk)
>     /         \
>(template)   (disk for vm1)
>
>After you create VM2 from the same template:
>     (base disk)
>     /         \            \
>(template)   (disk for vm1)  (disk for vm2)
>
>
>This only applies to root disks derived from a template. This way,
>CloudStack can deploy VMs quickly, with no full disk copy.
>
>> so this is a real issue for us. On that volume a 400GB VDI consumes
>> 800GB: 400GB for its base disk, and 400GB for the snapshot disk.
>
>Is it a root disk?
>What's the template size? Have you shrunk the template before uploading
>to CloudStack?
>A shrunk VHD file is roughly the size of the space actually in use.
>
>-Anthony
>
>
>
>
>> -----Original Message-----
>> From: James Kahn [mailto:jkahn@idea11.com.au]
>> Sent: Thursday, August 23, 2012 3:47 AM
>> To: cloudstack-users@incubator.apache.org
>> Subject: CloudStack and XenServer 6.0.2 - stray snapshots on primary
>> storage
>> 
>> Stray CloudStack generated snapshots on primary storage are causing
>> significant storage use on our XenServer environment. Is this expected
>> behaviour, a bug, or are we encountering an environmental issue? Is
>> anybody else seeing this?
>> 
>> One particular storage volume has over 1TB in use, with 659GB allocated,
>> so this is a real issue for us. On that volume a 400GB VDI consumes
>> 800GB: 400GB for its base disk, and 400GB for the snapshot disk.
>> 
>> Pretty much every primary storage volume is affected. Snapshots are
>> exported successfully to secondary storage.
>> 
>> Some details on our environment:
>> CloudStack 3.0.1
>> XenServer 6.0.2
>> iSCSI primary storage (CloudStack managed)
>> 
>> The snapshots also seem to be recent, as shown:
>> 
>> # xe vdi-list sr-uuid=1ddf05ad-133e-a275-90de-8b03fb69d114
>> is-a-snapshot=true params=uuid,name-label,snapshot-time
>> uuid ( RO)             : fb9210b9-25e5-46fd-a747-26e0dc536981
>>        name-label ( RW):
>> 034ef007-b6a5-40f0-81a0-6f59953a59eb_ROOT-1240_20120423023335
>>     snapshot-time ( RO): 20120423T02:33:37Z
>> 
>> 
>> uuid ( RO)             : ea5392b0-8921-46ca-b74f-c16aa8e78466
>>        name-label ( RW): Template routing-1
>>     snapshot-time ( RO): 20120404T05:10:49Z
>> 
>> 
>> uuid ( RO)             : eba80a35-2acc-4228-905d-380a074135de
>>        name-label ( RW):
>> 511f0f27-d130-4bf3-801d-3c2248efcfe0_DATA-1229_20120822180201
>>     snapshot-time ( RO): 20120822T18:02:04Z
>> 
>> 
>> uuid ( RO)             : 420c397e-8828-4b80-88ff-1db141cc7d16
>>        name-label ( RW): Template 98255702-1359-42ae-b635-ad7eacd09e5c
>>     snapshot-time ( RO): 20120411T23:28:35Z
>> 
>> 
>> uuid ( RO)             : b606c514-a042-4493-a0a7-07c7c5f66d3a
>>        name-label ( RW):
>> 511f0f27-d130-4bf3-801d-3c2248efcfe0_ROOT-1229_20120822180201
>>     snapshot-time ( RO): 20120822T18:02:21Z
>> 
>> 
>> uuid ( RO)             : 14a75d57-8e1b-4ee7-b1b8-d069362332e9
>>        name-label ( RW): Template 5978eab4-166c-42f1-aeb6-a4d6bb8bb5f9
>>     snapshot-time ( RO): 20120412T05:48:58Z
>> 
>> 
>> uuid ( RO)             : 90559f54-e35a-48e3-9ce0-5e9d8b4e5587
>>        name-label ( RW):
>> ff484c2e-2b8c-4c73-9b54-da404cfa962e_ROOT-1232_20120822150201
>>     snapshot-time ( RO): 20120822T15:02:04Z
>> 
>> 
>> Any ideas?
>> 
>> Thanks,
>> JK.
>> 
>> 
>> 
>
>


