From: James Kahn <jkahn@idea11.com.au>
To: cloudstack-users@incubator.apache.org
Subject: Re: CloudStack and XenServer 6.0.2 - stray snapshots on primary storage
Date: Fri, 24 Aug 2012 21:12:22 +0000

Hi Anthony,

Thanks for that explanation. I don't think that's what's happening here,
though. This is definitely occurring from CloudStack snapshots rather than
provisioning.

The 400GB disk is a data disk. It's very heavily used, so it wouldn't
surprise me if every block has been touched by the guest VM. It has a
daily scheduled snapshot.
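For anyone wanting to check their own SRs for the same thing, here's a rough
helper (a sketch, not CloudStack code): it parses the output of
`xe vdi-list params=uuid,name-label,snapshot-time` and flags VDIs whose
name-label matches the `<uuid>_ROOT-<id>_<timestamp>` /
`<uuid>_DATA-<id>_<timestamp>` pattern seen in this thread. The pattern is
an assumption inferred from our environment's output, so adjust it for yours.

```python
import re

# Name-label pattern inferred from the snapshots CloudStack leaves behind
# in this environment: <36-char uuid>_<ROOT|DATA>-<vm id>_<14-digit stamp>.
# This is an assumption, not documented CloudStack behaviour - adjust it.
CS_SNAP = re.compile(r"^[0-9a-f-]{36}_(ROOT|DATA)-\d+_\d{14}$")

def parse_vdi_list(text):
    """Parse `xe vdi-list params=uuid,name-label,snapshot-time` output
    into a list of dicts, one per blank-line-separated VDI record."""
    records, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:
                records.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        # Keys look like "uuid ( RO)" / "name-label ( RW)"; keep the bare name.
        current[key.split("(")[0].strip()] = value.strip()
    if current:
        records.append(current)
    return records

def stray_cloudstack_snapshots(text):
    """Return only the records whose name-label matches the pattern above."""
    return [r for r in parse_vdi_list(text)
            if CS_SNAP.match(r.get("name-label", ""))]
```

Run it against `xe vdi-list sr-uuid=<your SR> is-a-snapshot=true
params=uuid,name-label,snapshot-time` output; anything it flags that you've
already deleted in CloudStack is a candidate stray.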
I've run through this scenario with a test VM/disk. It only has a 50GB root
disk.

- Provision VM - creates root disk (e.g. ROOT-1234)
- Snapshot disk
  - Snapshot process creates XS snapshot (GUID_ROOT-1234_timestampA)
  - Snapshot is copied to secondary storage
  - Snapshot operation ends; XS snapshot (GUID_ROOT-1234_timestampA)
    remains on primary storage

Performing a second snapshot operation does the following:

- Snapshot disk
  - Snapshot process creates XS snapshot (GUID_ROOT-1234_timestampB)
  - Snapshot is copied to secondary storage
  - Snapshot process deletes previous XS snapshot
    (GUID_ROOT-1234_timestampA) from primary storage
  - Snapshot operation ends; XS snapshot (GUID_ROOT-1234_timestampB)
    remains on primary storage

Subsequently, deleting both snapshots from CloudStack does not remove the
stray snapshot from primary storage. It's now 36 hours since I ran this
test and the snapshot is still present on primary storage.

Thanks,
JK

-----Original Message-----
From: Anthony Xu
Reply-To: "cloudstack-users@incubator.apache.org"
Date: Saturday, 25 August 2012 2:06 AM
To: "cloudstack-users@incubator.apache.org"
Subject: RE: CloudStack and XenServer 6.0.2 - stray snapshots on primary
storage

>
>Hi James,
>
>For root disk thin provisioning, some snapshots are used as templates.
>
>After you create VM1 from a template, the VHD chain looks like:
>
>        (base disk)
>        /         \
> (template)  (disk for vm1)
>
>After you create VM2 from the same template:
>
>        (base disk)
>       /      |        \
> (template) (disk for vm1) (disk for vm2)
>
>This only applies to root disks derived from a template. This way,
>CloudStack can deploy VMs fast, with no full disk copy.
>
>> so this is a real issue for us. On that volume a 400GB VDI consumes
>> 800GB
>> - 400GB for its base disk, and 400GB for the snapshot disk.
>
>Is it a root disk?
>What's the template size? Did you shrink the template before uploading
>it to CloudStack?
>A shrunk VHD file's size is roughly the space actually being used.
>
>-Anthony
>
>
>> -----Original Message-----
>> From: James Kahn [mailto:jkahn@idea11.com.au]
>> Sent: Thursday, August 23, 2012 3:47 AM
>> To: cloudstack-users@incubator.apache.org
>> Subject: CloudStack and XenServer 6.0.2 - stray snapshots on primary
>> storage
>>
>> Stray CloudStack-generated snapshots on primary storage are causing
>> significant storage use in our XenServer environment. Is this expected
>> behaviour, a bug, or are we encountering an environmental issue? Is
>> anybody else seeing this?
>>
>> One particular storage volume has over 1TB in use, with 659GB
>> allocated, so this is a real issue for us. On that volume a 400GB VDI
>> consumes 800GB
>> - 400GB for its base disk, and 400GB for the snapshot disk.
>>
>> Pretty much every primary storage volume is affected. Snapshots are
>> exported successfully to secondary storage.
>>
>> Some details on our environment:
>> CloudStack 3.0.1
>> XenServer 6.0.2
>> iSCSI primary storage (CloudStack managed)
>>
>> The snapshots also seem to be recent, as shown:
>>
>> # xe vdi-list sr-uuid=1ddf05ad-133e-a275-90de-8b03fb69d114 \
>>     is-a-snapshot=true params=uuid,name-label,snapshot-time
>> uuid ( RO)          : fb9210b9-25e5-46fd-a747-26e0dc536981
>>    name-label ( RW): 034ef007-b6a5-40f0-81a0-6f59953a59eb_ROOT-1240_20120423023335
>> snapshot-time ( RO): 20120423T02:33:37Z
>>
>>
>> uuid ( RO)          : ea5392b0-8921-46ca-b74f-c16aa8e78466
>>    name-label ( RW): Template routing-1
>> snapshot-time ( RO): 20120404T05:10:49Z
>>
>>
>> uuid ( RO)          : eba80a35-2acc-4228-905d-380a074135de
>>    name-label ( RW): 511f0f27-d130-4bf3-801d-3c2248efcfe0_DATA-1229_20120822180201
>> snapshot-time ( RO): 20120822T18:02:04Z
>>
>>
>> uuid ( RO)          : 420c397e-8828-4b80-88ff-1db141cc7d16
>>    name-label ( RW): Template 98255702-1359-42ae-b635-ad7eacd09e5c
>> snapshot-time ( RO): 20120411T23:28:35Z
>>
>>
>> uuid ( RO)          : b606c514-a042-4493-a0a7-07c7c5f66d3a
>>    name-label ( RW): 511f0f27-d130-4bf3-801d-3c2248efcfe0_ROOT-1229_20120822180201
>> snapshot-time ( RO): 20120822T18:02:21Z
>>
>>
>> uuid ( RO)          : 14a75d57-8e1b-4ee7-b1b8-d069362332e9
>>    name-label ( RW): Template 5978eab4-166c-42f1-aeb6-a4d6bb8bb5f9
>> snapshot-time ( RO): 20120412T05:48:58Z
>>
>>
>> uuid ( RO)          : 90559f54-e35a-48e3-9ce0-5e9d8b4e5587
>>    name-label ( RW): ff484c2e-2b8c-4c73-9b54-da404cfa962e_ROOT-1232_20120822150201
>> snapshot-time ( RO): 20120822T15:02:04Z
>>
>>
>> Any ideas?
>>
>> Thanks,
>> JK.
>
>
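For what it's worth, the create/copy/delete-previous sequence described
earlier in the thread can be modeled in a few lines. This is a toy sketch
(hypothetical names, not CloudStack code) just to show the consequence:
under that scheme, exactly one XS snapshot per volume is always left behind
on primary storage, no matter how many snapshot runs complete.

```python
# Toy model of the snapshot sequence from the thread: each run creates an
# XS snapshot, copies it to secondary storage, then deletes the *previous*
# run's snapshot - so the newest snapshot always remains on primary storage.

def run_snapshot(primary, secondary, disk, timestamp):
    snap = f"GUID_{disk}_{timestamp}"
    primary.add(snap)                       # XS snapshot created
    secondary.add(snap)                     # copied to secondary storage
    previous = [s for s in primary
                if s != snap and s.startswith(f"GUID_{disk}_")]
    for s in previous:                      # previous run's snapshot removed...
        primary.discard(s)
    return snap                             # ...but the new one stays behind

primary, secondary = set(), set()
for ts in ("timestampA", "timestampB", "timestampC"):
    run_snapshot(primary, secondary, "ROOT-1234", ts)

assert primary == {"GUID_ROOT-1234_timestampC"}   # one stray always remains
assert len(secondary) == 3                        # every copy on secondary
```

If that matches the intended design, primary storage should only ever carry
one extra snapshot per volume; the behaviour JK reports (the stray surviving
even after both CloudStack snapshots are deleted) is the part that looks
like a bug.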