Date: Tue, 17 Sep 2013 13:03:51 +0000 (UTC)
From: "Marcus Sorensen (JIRA)"
To: cloudstack-issues@incubator.apache.org
Reply-To: dev@cloudstack.apache.org
Subject: [jira] [Commented] (CLOUDSTACK-3565) Restarting libvirtd service leading to destroy storage pool

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769484#comment-13769484 ]

Marcus Sorensen commented on CLOUDSTACK-3565:
---------------------------------------------

Since this is dependent on specific versions, I'm not clear on whether there was any difference between persistent and non-persistent pools. It was framed as though the CloudStack change broke things, but perhaps it was the libvirt update. If an NFS mount point is in use when a *defined* NFS pool is started with an affected version, does it also fail? I'm just trying to consider the case where we go back to persistently defined storage and whether or not that will fix the issue. I will know when I get a broken one to try (I will attempt stock CentOS next time). Either way, whether or not it works for persistent pools, we could perhaps assume that if someone on the libvirt dev team saw fit to add that check in the pool.create() case, they could likely add the same check when starting persistent pools in the future.

Just brainstorming: it seems we could handle the libvirt 'mount point in use' error somehow. One idea, as mentioned, is to 'umount -l', which should keep existing things working and allow the pool to be recreated, but that seems messy/hackish to me. I'd be OK with it though, because I think it would make things 'just work' for the end user.

Another option to consider might be to handle the error by switching to a dir-based pool, like local storage uses, when the 'already mounted' error is caught. I don't think that type cares whether something is mounted, and it should also allow things to just work. The downsides are that the NFS mount won't be cleaned up when the pool is removed (a small problem compared to now) and it may cause confusion if someone inspects their pool details.
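To make that second option concrete, here is a rough sketch of the kind of fallback I have in mind for the agent's pool-creation path. The class and helper names are made up for illustration, and the exact error string newer libvirt returns is an assumption I still need to verify on a broken host; only the libvirt-java calls (Connect.storagePoolCreateXML, LibvirtException) are real API.

    // Hypothetical sketch, not a committed fix: if libvirt refuses to start a
    // netfs pool because the target path is already mounted, fall back to a
    // plain dir pool over the existing mount instead of forcing an unmount.
    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;

    public class NfsPoolFallback {

        // Transient NFS (netfs) pool XML, roughly as the agent builds it today.
        static String nfsPoolXml(String uuid, String host, String export, String target) {
            return "<pool type='netfs'><name>" + uuid + "</name>"
                 + "<source><host name='" + host + "'/><dir path='" + export + "'/></source>"
                 + "<target><path>" + target + "</path></target></pool>";
        }

        // Dir pool XML pointing at the path that is already mounted.
        static String dirPoolXml(String uuid, String target) {
            return "<pool type='dir'><name>" + uuid + "</name>"
                 + "<target><path>" + target + "</path></target></pool>";
        }

        static StoragePool createPool(Connect conn, String uuid, String host,
                                      String export, String target) throws LibvirtException {
            try {
                return conn.storagePoolCreateXML(nfsPoolXml(uuid, host, export, target), 0);
            } catch (LibvirtException e) {
                // Assumption: the affected libvirt versions report something like
                // "already mounted" here. If so, reuse the mount via a dir pool.
                if (e.getMessage() != null && e.getMessage().contains("already mounted")) {
                    return conn.storagePoolCreateXML(dirPoolXml(uuid, target), 0);
                }
                throw e;
            }
        }
    }

The tradeoff is the one noted above: the dir pool won't unmount the NFS export when it is removed, and pool-info output would show type 'dir' rather than 'netfs', which could surprise anyone inspecting the host.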
> Restarting libvirtd service leading to destroy storage pool
> -----------------------------------------------------------
>
>                 Key: CLOUDSTACK-3565
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3565
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.)
>          Components: KVM
>    Affects Versions: 4.2.0
>         Environment: KVM
>                      Branch 4.2
>            Reporter: Rayees Namathponnan
>            Assignee: Marcus Sorensen
>            Priority: Blocker
>              Labels: documentation
>             Fix For: 4.2.0
>
>
> Steps to reproduce
> Step 1: Create a CloudStack setup on KVM
> Step 2: From the KVM host, check "virsh pool-list"
> Step 3: Stop and start the libvirtd service
> Step 4: Check "virsh pool-list"
> Actual Result
> "virsh pool-list" is blank after restarting the libvirtd service
> [root@Rack2Host12 agent]# virsh pool-list
>  Name                                 State    Autostart
> -----------------------------------------
>  41b632b5-40b3-3024-a38b-ea259c72579f active   no
>  469da865-0712-4d4b-a4cf-a2d68f99f1b6 active   no
>  fff90cb5-06dd-33b3-8815-d78c08ca01d9 active   no
> [root@Rack2Host12 agent]# service cloudstack-agent stop
> Stopping Cloud Agent:
> [root@Rack2Host12 agent]# virsh pool-list
>  Name                                 State    Autostart
> -----------------------------------------
>  41b632b5-40b3-3024-a38b-ea259c72579f active   no
>  469da865-0712-4d4b-a4cf-a2d68f99f1b6 active   no
>  fff90cb5-06dd-33b3-8815-d78c08ca01d9 active   no
> [root@Rack2Host12 agent]# virsh list
>  Id    Name                           State
> ----------------------------------------------------
> [root@Rack2Host12 agent]# service libvirtd stop
> Stopping libvirtd daemon:                                  [  OK  ]
> [root@Rack2Host12 agent]# service libvirtd start
> Starting libvirtd daemon:                                  [  OK  ]
> [root@Rack2Host12 agent]# virsh pool-list
>  Name                                 State    Autostart
> -----------------------------------------

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira