Date: Thu, 29 Jan 2015 00:24:35 -0700
Subject: Re: Orphaned Data Disks
From: Marcus
To: dev@cloudstack.apache.org

The trick is to not think of them as orphaned. The data disk is its own entity; it stands alone from the VM. The ability to create a data disk along with VM deployment is really just a convenience: the data disk lifecycle is to be created standalone, attached, detached, and deleted standalone.

It's the difference between standard VM management and 'cloud' style 'instance' management. In AWS your VM is essentially recreated every time you reboot, and the only way to have persistent data is with a data disk (or an object store/separate db) that gets attached to your new VM each time. Luckily, CloudStack is a good bridge for that gap: it is friendly to the more 'old school' VM management where instances are pets, with things like persistent root disks, but it is built with cloud workloads in mind, where you don't care about instances yet still want persistent datastores that outlive them.

That said, I do agree that it wouldn't be a big deal to add a parameter to the destroyVirtualMachine API call that would cause it to loop through all attached disks and remove them. You can open a feature request at https://issues.apache.org

On Thu, Jan 29, 2015 at 12:01 AM, Michael Phillips wrote:
> To me that's kind of strange, to NOT assume to delete the disks that are attached to an instance at time of deletion. I think in most real world environments a data disk will belong to one machine only.
> Of course the exception is clustering, and that is probably outside of this scope. Even if I was going to do some kind of OS upgrade and wanted to reuse the data disk on another instance, I would probably detach, then reattach to the new instance. It just seems like it can get messy quick if a lot of users delete their instances and leave all these orphaned data disks behind.
> It would be awesome to have a selection box to have CloudStack delete ALL attached disks when an instance is destroyed. Just my 2 cents...
>
>> Date: Wed, 28 Jan 2015 22:56:43 -0700
>> Subject: Re: Orphaned Data Disks
>> From: shadowsor@gmail.com
>> To: dev@cloudstack.apache.org
>>
>> Data disks are their own entity. You can detach them and attach them
>> to other VMs. CloudStack doesn't assume that you want all the disks to
>> die when you destroy a VM simply because they happen to be attached to
>> that VM at the moment.
>>
>> On Wed, Jan 28, 2015 at 10:47 PM, Michael Phillips wrote:
>> > What was the logic behind leaving the disks orphaned?
>> >
>> >> From: sanjeev.neelarapu@citrix.com
>> >> To: dev@cloudstack.apache.org
>> >> Subject: RE: Orphaned Data Disks
>> >> Date: Thu, 29 Jan 2015 05:30:49 +0000
>> >>
>> >> That is expected behavior. Right now there is no option to change it.
>> >>
>> >> Sanjeev
>> >> CloudPlatform Engineering, Citrix Systems, Inc.
>> >>
>> >> -----Original Message-----
>> >> From: Michael Phillips [mailto:mphilli7823@hotmail.com]
>> >> Sent: Thursday, January 29, 2015 10:43 AM
>> >> To: dev@cloudstack.apache.org
>> >> Subject: Orphaned Data Disks
>> >>
>> >> Has anyone noticed that after destroying and expunging an instance that has a data disk attached, CloudStack leaves the data disk orphaned? If so, is this expected behavior, and if so, is there an option to change it?
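[Editor's note: the destroy-then-clean-up loop Marcus proposes for the server side can be approximated today from a client. Below is a minimal sketch, not an official tool: `listVolumes`, `destroyVirtualMachine`, `deleteVolume`, and the `type=DATADISK` filter are real CloudStack API commands, but the `api` callable is a hypothetical wrapper standing in for whatever signed-request client you use (CloudMonkey, a SDK, etc.), and the `expunge` parameter assumes a CloudStack version that supports it on destroyVirtualMachine.]

```python
def destroy_vm_with_disks(api, vm_id):
    """Destroy a VM and then delete every DATADISK volume that was attached to it.

    `api` is any callable that issues a CloudStack API request and returns the
    parsed response dict, e.g. api("listVolumes", virtualmachineid="...").
    """
    # Record the data disks while the VM still exists; root disks are excluded
    # by the DATADISK filter and are removed with the VM itself.
    volumes = api("listVolumes", virtualmachineid=vm_id,
                  type="DATADISK").get("volume", [])

    # Destroy (and expunge) the instance. Data disks are detached, not deleted.
    api("destroyVirtualMachine", id=vm_id, expunge=True)

    # Data disks survive VM destruction, so remove them explicitly.
    for vol in volumes:
        api("deleteVolume", id=vol["id"])

    return [vol["id"] for vol in volumes]
```

Because the data-disk list is captured before the destroy call, the cleanup still works even though the volumes are detached (and no longer listable by VM id) once the instance is gone.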