From: Marcus Sorensen
To: cloudstack-dev@incubator.apache.org
Date: Mon, 10 Sep 2012 11:49:15 -0600
Subject: Re: cleaning up patch disks

Thanks. So does this affect all hypervisors/storage backends that use the
patch disk, or should I code my solution specifically for KVM?

On Mon, Sep 10, 2012 at 11:43 AM, Edison Su wrote:
>
>> -----Original Message-----
>> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
>> Sent: Sunday, September 09, 2012 9:32 PM
>> To: cloudstack-dev@incubator.apache.org
>> Subject: cleaning up patch disks
>>
>> I've got an issue with the CLVM-on-KVM support: it seems that patch
>> disks are created on the fly when a system VM is started, so if I
>> reboot a system VM 5 times I'll end up with 5 patch disks. I'm the one
>> who submitted the CLVM patch, and I don't see much difference between
>> what we're doing with CLVM and what is done for everything else, so I
>> thought I'd ask:
>>
>> Is this an issue for other backing stores as well (accumulating patch
>> disks for system VMs)? If not, where is it handled?
>
> It's a bug: patch disks are not cleaned up after the system VM is stopped.
>
>> Any suggestions on how to go about fixing it? I see I could
>> potentially hook into StopCommand (rebootVM/cleanupVM/stopVM), detect
>> the patch disk and lvremove it, but then again, if the VM doesn't go
>> down on purpose (say a host crash) I'll still be leaking patch disks.
>>
>> Is it safe to assume that any patch disk that's not currently open is
>> safe to delete? (These are generated on the fly and not really tracked
>> anywhere in the database, right?)
>
> If it's created on shared storage used by multiple KVM hosts, then it's
> not easy to know whether the patch disk is open or not.
> Normally, we can delete the patch disk on every
> StopCommand/stopVM/rebootVM/cleanupVM command.
> If a host crashes, the CS management server will send a command to the
> other hosts in the cluster to clean up the VM, so we still get a chance
> to clean up the patch disk.
> As you said in another mail, we can use the naming scheme
> vm-name-patch-disk for the patch disk.
> Patches are welcome!
>
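
Below is a minimal sketch of the stop-time cleanup Edison describes, assuming
the vm-name-patch-disk naming scheme from the thread. The class name, the
volume group parameter, and the helper method are illustrative placeholders,
not the actual CloudStack agent API; it simply checks LVM's "device open"
attribute locally before calling lvremove.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class PatchDiskCleaner {

        // Assumed naming scheme from the thread: "<vm-name>-patch-disk".
        private static String patchDiskName(String vmName) {
            return vmName + "-patch-disk";
        }

        // Best-effort cleanup on StopCommand/cleanupVM: skip the LV if this
        // host still reports it as open; otherwise lvremove it.
        public static void cleanupPatchDisk(String volumeGroup, String vmName) throws Exception {
            String lvPath = volumeGroup + "/" + patchDiskName(vmName);

            // "lvs -o lv_attr" reports 'o' in the 6th attribute column when
            // the LV is open (attached to a running domain on this host).
            String attr = firstLine("lvs", "--noheadings", "-o", "lv_attr", lvPath);
            if (attr == null) {
                return; // LV does not exist; nothing to clean up
            }
            String a = attr.trim();
            if (a.length() >= 6 && a.charAt(5) == 'o') {
                return; // still open locally; do not touch a live VM's disk
            }

            firstLine("lvremove", "-f", lvPath);
        }

        // Run a command and return its first output line, or null on failure.
        private static String firstLine(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            String line;
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                line = r.readLine();
            }
            return p.waitFor() == 0 ? line : null;
        }
    }

As the thread points out, the local open check only reflects this host; on
CLVM shared across hosts it cannot prove that another host isn't still using
the LV.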
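
A second sketch, for the host-crash case: how an agent (or the
management-server-driven cleanup Edison mentions) might list candidate
leftovers, again assuming the hypothetical naming scheme. It only reports
orphans rather than deleting them, since a single host cannot see whether
another host in the cluster still has the LV open.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class PatchDiskScavenger {

        // List "<vg>/<lv>" paths for patch disks whose VM is not defined on
        // this host. Deletion is left to coordinated cleanup, not done here.
        public static List<String> findOrphanedPatchDisks(String volumeGroup) throws Exception {
            // Domains libvirt knows about on this host, running or stopped.
            Set<String> domains = new HashSet<>(lines("virsh", "list", "--all", "--name"));

            List<String> orphans = new ArrayList<>();
            for (String lvName : lines("lvs", "--noheadings", "-o", "lv_name", volumeGroup)) {
                // Assumed naming scheme from the thread: "<vm-name>-patch-disk".
                if (lvName.endsWith("-patch-disk")) {
                    String vmName = lvName.substring(0, lvName.length() - "-patch-disk".length());
                    if (!domains.contains(vmName)) {
                        orphans.add(volumeGroup + "/" + lvName);
                    }
                }
            }
            return orphans;
        }

        // Run a command and return its non-empty output lines, trimmed.
        private static List<String> lines(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            List<String> out = new ArrayList<>();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (!line.trim().isEmpty()) {
                        out.add(line.trim());
                    }
                }
            }
            p.waitFor();
            return out;
        }
    }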