cloudstack-dev mailing list archives

From Rafael Weingärtner <rafaelweingart...@gmail.com>
Subject Re: Squeeze another PR (#2398) in 4.11 milestone
Date Tue, 09 Jan 2018 21:40:04 GMT
Yes. That is actually what we do.

Looking at the code of "Xenserver625StorageProcessor.java
<https://github.com/apache/cloudstack/pull/2398/files#diff-6eeb1a2fb818cccb14785ee80c93a561>",
it seems that we were already doing this even before PR #2398.
However, I might not be understanding the complete picture here...
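
For anyone following along, below is a minimal sketch of the cleanup
pattern Khosrow describes: the success path runs towards the end of the
try block and the failure path runs in the finally block. All type and
helper names here are hypothetical stand-ins for illustration, not the
actual CloudStack/XenServer API from Xenserver625StorageProcessor.java:

    import java.util.List;

    public class SnapshotCleanupSketch {

        // Hypothetical stand-ins, not the real CloudStack types.
        interface Snapshot { String id(); }

        interface PrimaryStore {
            List<Snapshot> listSnapshots(String volumeId);
            void delete(Snapshot snapshot);
        }

        interface SecondaryStore {
            void copyDelta(Snapshot snapshot) throws Exception;
        }

        static void backupSnapshot(PrimaryStore primary, SecondaryStore secondary,
                String volumeId, Snapshot current) throws Exception {
            boolean copied = false;
            try {
                // Copy the delta since the previous snapshot to secondary storage.
                secondary.copyDelta(current);
                copied = true;
                // 1) Success: everything except "this" snapshot gets removed, so
                // at most one snapshot remains on primary storage afterwards.
                for (Snapshot s : primary.listSnapshots(volumeId)) {
                    if (!s.id().equals(current.id())) {
                        primary.delete(s);
                    }
                }
            } finally {
                // 2) Failure: only "this" in-progress snapshot gets deleted,
                // leaving the previous snapshot as the base for the next delta.
                if (!copied) {
                    primary.delete(current);
                }
            }
        }
    }

This also matches Mike's follow-up: during the delta copy there are
briefly two snapshots on primary storage; on success the older one is
deleted, and on failure the newest one is.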

On Tue, Jan 9, 2018 at 7:33 PM, Tutkowski, Mike <Mike.Tutkowski@netapp.com>
wrote:

> “technically we should only have "one" on primary storage at any given
> point in time”
>
> I just wanted to follow up on this one.
>
> When we are copying a delta from the previous snapshot, we should actually
> have two snapshots on primary storage for a time.
>
> If the delta copy is successful, then we delete the older snapshot. If the
> delta copy fails, then we delete the newest snapshot.
>
> Is that correct?
>
> > On Jan 9, 2018, at 11:36 AM, Khosrow Moossavi <kmoossavi@cloudops.com>
> > wrote:
> >
> > "We are already deleting snapshots in the primary storage, but we always
> > leave behind the last one"
> >
> > This issue doesn't happen only when something fails. We have not been
> > deleting the snapshots from primary storage at all (not on XenServer
> > 6.25+, and not since Feb 2017).
> >
> > The fix in this PR is:
> >
> > 1) when the snapshot is transferred successfully to secondary storage,
> > everything except "this" snapshot gets removed (technically we should
> > only have "one" on primary storage at any given point in time)
> > [towards the end of the try block]
> > 2) when the transfer to secondary storage fails, only "this"
> > in-progress snapshot gets deleted. [finally block]
> >
> >
> >
> > On Tue, Jan 9, 2018 at 1:01 PM, Rafael Weingärtner <
> > rafaelweingartner@gmail.com> wrote:
> >
> >> Khosrow, I have seen this issue as well. It happens when there are
> >> problems transferring the snapshot from the primary to the secondary
> >> storage. However, we need to clarify one thing. We are already deleting
> >> snapshots in the primary storage, but we always leave behind the last
> >> one. The problem is that, if an error happens during the transfer of the
> >> VHD from the primary to the secondary storage, the failed snapshot VDI
> >> is left behind in the primary storage (for XenServer). These failed
> >> snapshots can accumulate over time and cause the problem you described,
> >> because XenServer will not be able to coalesce the VHD files of the VM.
> >> Therefore, what you are addressing in this PR are the cases when an
> >> exception happens during the transfer from primary to secondary storage.
> >>
> >> On Tue, Jan 9, 2018 at 3:25 PM, Khosrow Moossavi <kmoossavi@cloudops.com>
> >> wrote:
> >>
> >>> Hi community
> >>>
> >>> We've found [1] and fixed [2] an issue in 4.10 regarding snapshots
> >>> remaining on primary storage (XenServer + Swift), which causes the VDI
> >>> chain to fill up over time until the user cannot take another snapshot.
> >>>
> >>> Please include this in 4.11 milestone if you see fit.
> >>>
> >>> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-10222
> >>> [2]: https://github.com/apache/cloudstack/pull/2398
> >>>
> >>> Thanks
> >>> Khosrow
> >>>
> >>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
>



-- 
Rafael Weingärtner
