From: Rafael Weingärtner
Date: Tue, 9 Jan 2018 19:40:04 -0200
Subject: Re: Squeeze another PR (#2398) in 4.11 milestone
To: dev@cloudstack.apache.org

Yes. That is actually what we do. Looking at the code of
"Xenserver625StorageProcessor.java", it feels that we were already doing
this even before PR #2398. However, I might not be understanding the
complete picture here...

On Tue, Jan 9, 2018 at 7:33 PM, Tutkowski, Mike wrote:

> "technically we should only have "one" on primary storage at any given
> point in time"
>
> I just wanted to follow up on this one.
>
> When we are copying a delta from the previous snapshot, we should
> actually have two snapshots on primary storage for a time.
>
> If the delta copy is successful, then we delete the older snapshot. If
> the delta copy fails, then we delete the newest snapshot.
>
> Is that correct?
>
> > On Jan 9, 2018, at 11:36 AM, Khosrow Moossavi wrote:
> >
> > "We are already deleting snapshots in the primary storage, but we
> > always leave behind the last one"
> >
> > This issue doesn't happen only when something fails. We have not been
> > deleting the snapshots from primary storage (not on XenServer 6.25+
> > and not since Feb 2017).
> >
> > The fix of this PR is:
> >
> > 1) when the snapshot is transferred successfully to secondary
> > storage, everything except "this" snapshot gets removed (technically
> > we should only have "one" on primary storage at any given point in
> > time) [towards the end of the try block]
> > 2) when transferring to secondary storage fails, only "this"
> > in-progress snapshot gets deleted.
> > [finally block]
> >
> > On Tue, Jan 9, 2018 at 1:01 PM, Rafael Weingärtner
> > <rafaelweingartner@gmail.com> wrote:
> >
> > > Khosrow, I have seen this issue as well. It happens when there are
> > > problems transferring the snapshot from the primary to the
> > > secondary storage. However, we need to clarify one thing. We are
> > > already deleting snapshots in the primary storage, but we always
> > > leave behind the last one. The problem is that, if an error happens
> > > during the transfer of the VHD from the primary to the secondary
> > > storage, the failed snapshot VDI is left behind in primary storage
> > > (for XenServer). These failed snapshots can accumulate over time
> > > and cause the problem you described, because XenServer will not be
> > > able to coalesce the VHD files of the VM. Therefore, what you are
> > > addressing in this PR are the cases when an exception happens
> > > during the transfer from primary to secondary storage.
> > >
> > > On Tue, Jan 9, 2018 at 3:25 PM, Khosrow Moossavi
> > > <kmoossavi@cloudops.com> wrote:
> > >
> > > > Hi community
> > > >
> > > > We've found [1] and fixed [2] an issue in 4.10 regarding
> > > > snapshots remaining on primary storage (XenServer + Swift), which
> > > > causes the VDI chain to fill up after some time so that the user
> > > > cannot take another snapshot.
> > > >
> > > > Please include this in the 4.11 milestone if you see fit.
> > > >
> > > > [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-10222
> > > > [2]: https://github.com/apache/cloudstack/pull/2398
> > > >
> > > > Thanks
> > > > Khosrow
> > >
> > > --
> > > Rafael Weingärtner

-- 
Rafael Weingärtner
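[Editor's note: the cleanup flow Khosrow and Mike describe above can be sketched roughly as follows. This is a hypothetical simplification for illustration only; the class and method names (PrimaryStore, SecondaryStore, backupSnapshot, copyDelta) are invented, and the real logic lives in CloudStack's Xenserver625StorageProcessor.java with many more details. On success, everything except the current snapshot is removed at the end of the try block; on failure, only the in-progress snapshot is deleted in the finally block.]

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotBackupSketch {

    static class Snapshot {
        final String id;
        Snapshot(String id) { this.id = id; }
    }

    /** Simplified view of primary storage holding the snapshot chain. */
    static class PrimaryStore {
        final List<Snapshot> snapshots = new ArrayList<>();
    }

    interface SecondaryStore {
        /** Copies the snapshot delta; throws on failure (e.g. Swift unreachable). */
        void copyDelta(Snapshot s) throws Exception;
    }

    /**
     * Take a snapshot and push its delta to secondary storage, leaving
     * primary storage with exactly one snapshot either way:
     * - success: keep the new snapshot, remove all older ones (end of try);
     * - failure: remove only the new, in-progress snapshot (finally).
     */
    static void backupSnapshot(PrimaryStore primary, SecondaryStore secondary, String id) {
        Snapshot current = new Snapshot(id);
        primary.snapshots.add(current);      // two snapshots coexist briefly
        boolean transferred = false;
        try {
            secondary.copyDelta(current);
            transferred = true;
            // Success: everything except "this" snapshot gets removed.
            primary.snapshots.removeIf(s -> s != current);
        } catch (Exception e) {
            // Transfer failed; cleanup happens in the finally block below.
        } finally {
            if (!transferred) {
                // Failure: only "this" in-progress snapshot gets deleted, so
                // stale VDIs do not accumulate and block VHD coalescing.
                primary.snapshots.remove(current);
            }
        }
    }
}
```

A usage sketch: after two successful backups, only the newest snapshot remains on primary storage; after a failed backup, the previous snapshot is still the only one left.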